Statistical modelling


Page 1: Statistical modelling

Statistical modelling

Gil McVean, Department of Statistics
Tuesday 24th Jan 2012

Page 2: Statistical modelling

Questions to ask…

• How can we use models in statistical inference?

• Where do commonly-used models come from?

• What is a likelihood?

• How can we use likelihood to estimate parameters, measure certainty and test hypotheses?

• How do we add covariates to simple models?

• How do you construct and use linear models?

• What’s so general about generalised linear modelling?

2

Page 3: Statistical modelling

3

What is a random variable?

• A random variable is a number associated with the outcome of a stochastic process
– Waiting time for the next bus
– Average number of hours of sunshine in May
– Age of the current prime minister

• In statistics, we want to take observations of random variables and use this to make statements about the underlying stochastic process

– Did this vaccine have any effect?
– Which genes contribute to disease susceptibility?
– Will it rain tomorrow?

• Parametric models provide much power in the analysis of variation (parameter estimation, hypothesis testing, model choice, prediction)

– Statistical models of the random variables
– Models of the underlying stochastic process

Page 4: Statistical modelling

4

What is a distribution?

• A distribution characterises the probability (mass) associated with each possible outcome of a stochastic process

• Distributions of discrete data characterised by probability mass functions

• Distributions of continuous data are characterised by probability density functions (pdf)

• For RVs that map to the integers or the real numbers, the cumulative distribution function (cdf) is a useful alternative representation

For a discrete random variable the probability mass function satisfies $\sum_x P(X = x) = 1$; for a continuous random variable the density satisfies $\int f(x)\,dx = 1$. The cdf is $F(x) = P(X \le x)$.

[Figure: an example probability mass function over the outcomes 0, 1, 2, 3 and an example probability density function $f(x)$]

Page 5: Statistical modelling

5

Expectations and variances

• Suppose we took a large sample from a particular distribution; we might want to summarise something about what observations look like ‘on average’ and how much variability there is

• The expectation of a distribution is the average value of a random variable over a large number of samples

• The variance of a distribution is the average squared difference between randomly sampled observations and the expected value

$E(X) = \sum_x x\,P(X = x) \quad\text{or}\quad E(X) = \int x\,f(x)\,dx$

$\mathrm{Var}(X) = \sum_x \big(x - E(X)\big)^{2} P(X = x) \quad\text{or}\quad \mathrm{Var}(X) = \int \big(x - E(X)\big)^{2} f(x)\,dx$

Page 6: Statistical modelling

6

iid

• In most cases, we assume that the random variables we observe are independent and identically distributed

• The iid assumption allows us to make all sorts of statements both about what we expect to see and how much variation to expect

• Suppose X, Y and Z are iid random variables and a and b are constants

$E(X + Y + Z) = E(X) + E(Y) + E(Z) = 3E(X)$

$\mathrm{Var}(X + Y + Z) = \mathrm{Var}(X) + \mathrm{Var}(Y) + \mathrm{Var}(Z) = 3\,\mathrm{Var}(X)$

$E(aX + b) = aE(X) + b$

$\mathrm{Var}(aX + b) = a^{2}\,\mathrm{Var}(X)$

$\mathrm{Var}\Big(\tfrac{1}{n}\textstyle\sum_{i=1}^{n} X_i\Big) = \tfrac{1}{n}\,\mathrm{Var}(X)$
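These identities are easy to check by simulation. The following R sketch is illustrative only: the exponential distribution, the constants a and b, and the sample sizes are arbitrary choices made here, not part of the original slides.

# Illustrative check of the iid rules by simulation (arbitrary distribution and constants)
set.seed(1)
x <- rexp(1e6, rate = 2)              # iid draws with E(X) = 0.5, Var(X) = 0.25
a <- 3; b <- 1

c(mean(a * x + b), a * mean(x) + b)   # E(aX + b) = a E(X) + b
c(var(a * x + b), a^2 * var(x))       # Var(aX + b) = a^2 Var(X)

n <- 10
xbar <- replicate(1e4, mean(rexp(n, rate = 2)))
c(var(xbar), var(x) / n)              # Var of the mean of n iid variables = Var(X)/n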

Page 7: Statistical modelling

7

Where do ‘commonly-used’ distributions come from?

• At the core of much statistical theory and methodology lie a series of key distributions (e.g. Normal, Poisson, Exponential, etc.)

• These distributions are closely related to each other and can be ‘derived’ as the limit of simple stochastic processes when the random variable can be counted or measured

• In many settings, more complex distributions are constructed from these ‘simple’ distributions

– Ratios: E.g. Beta, Cauchy
– Compound: E.g. Geometric, Beta
– Mixture models

Page 8: Statistical modelling

8

The simplest model

• Bernoulli trials
– Outcomes that can take only two values (0 and 1), with probabilities q and 1 − q respectively. E.g. coin flipping, indicator functions

• The likelihood function calculates the probability of the data

• What is the probability of observing the sequence (if q = 0.5)
– 01001101100111101001?
– 11111111111000000000?

• Are they both equally probable?

$P(\mathbf{X} \mid q) = \prod_i P(x_i \mid q) = q^{k}(1 - q)^{n - k}$, where $k$ is the number of successes in the $n$ trials

Page 9: Statistical modelling

9

The binomial distribution

• Often, we don’t care about the exact order in which successes occurred. We might therefore want to ask about the probability of k successes in n trials. This is given by the binomial distribution

• For example, the probability of exactly 3 heads in 4 coin tosses =
– P(HHHT) + P(HHTH) + P(HTHH) + P(THHH)
– Each order has the same Bernoulli probability = (1/2)^4
– There are 4 choose 3 = 4 such orders

• Generally, if the probability of success is q, the probability of k successes in n trials

• The expected number of successes is nq and the variance is nq(1-q)

$P(k \mid n, q) = \binom{n}{k} q^{k}(1 - q)^{n - k}$

[Figure: binomial probability mass function with n = 10, q = 0.2]
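The coin-tossing calculation above, and the n = 10, q = 0.2 distribution in the figure, can be reproduced in R; this is a minimal sketch using the built-in binomial mass function dbinom.

# Probability of exactly 3 heads in 4 fair coin tosses
choose(4, 3) * (1/2)^4            # 4 orders, each with probability (1/2)^4; total 0.25
dbinom(3, size = 4, prob = 0.5)   # the same value from the binomial pmf

# Binomial(n = 10, q = 0.2) probabilities, as plotted in the figure
round(dbinom(0:10, size = 10, prob = 0.2), 4)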

Page 10: Statistical modelling

10

The geometric distribution

• Bernoulli trials have a memory-less property
– The probability of success (X = 1) next time is independent of the number of successes in the preceding trials

• The number of trials between subsequent successes follows a geometric distribution
– The probability that the first success occurs at the kth trial is $P(k \mid q) = q(1 - q)^{k - 1}$

• You can expect to wait an average of $1/q$ trials for a success, but the variance is $\mathrm{Var}(k) = \dfrac{1 - q}{q^{2}}$

[Figure: geometric distributions for q = 0.5 (plotted to 20 trials) and q = 0.05 (plotted to 1000 trials)]

Page 11: Statistical modelling

11

The Poisson distribution

• The Poisson distribution is often used to model ‘rare events’

• It can be derived in two ways
– The limit of the Binomial distribution as q→0 and n→∞ (nq = m)
– The number of events observed in a given time for a Poisson process (more later)

• It is parameterised by the expected number of events, m
– The probability of k events is

• The expected number of events is m, and the variance is also m

• For large m, the Poisson is well approximated by the normal distribution

$P(k \mid m) = \dfrac{e^{-m} m^{k}}{k!}$

[Figure: red = Poisson(5), blue = Bin(100, 0.05)]
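The figure's comparison of Poisson(5) with Bin(100, 0.05) is easy to reproduce; the sketch below (plain R, with the range of k chosen arbitrarily) tabulates the two probability mass functions side by side.

# Poisson(5) as an approximation to Binomial(100, 0.05), i.e. nq = m = 5
k <- 0:15
round(cbind(k = k,
            poisson  = dpois(k, lambda = 5),
            binomial = dbinom(k, size = 100, prob = 0.05)), 4)

max(abs(dpois(k, 5) - dbinom(k, 100, 0.05)))   # largest discrepancy over this range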

Page 12: Statistical modelling

12

Going continuous

• In many situations, although the outcome space of a random variable may really be discrete (or at least measurably discrete), it is convenient to model the random variable as continuously distributed

• For example, the distribution of height in mm is actually discrete, but is well approximated by a continuous distribution (e.g. normal)

• Commonly-used continuous distributions arise as the limit of discrete processes

Page 13: Statistical modelling

13

The Poisson process

• Consider a process in which, in every unit of time, some event might occur

• E.g. every generation there is some chance of a gene mutating (with probability of approximately 1 in 100,000)

• The probability of exactly one change in a sufficiently small interval h ≡ 1/n is P = vh ≡ v/n, where P is the probability of one change and n is the number of trials.

• The probability of two or more changes in a sufficiently small interval h is essentially 0

• In the limit of the number of trials becoming large the total number of events (e.g. mutations) follows the Poisson distribution

[Figure: a timeline divided into small intervals of width h]

Page 14: Statistical modelling

14

The exponential distribution

• In the Poisson process, the time between successive events follows an exponential distribution

– This is the continuous analogue of the geometric distribution
– It is memory-less, i.e. f(x + t | X > t) = f(x)

$f(x \mid \lambda) = \lambda e^{-\lambda x}, \qquad E(X) = 1/\lambda, \qquad \mathrm{Var}(X) = 1/\lambda^{2}$

Page 15: Statistical modelling

15


The gamma distribution

• The gamma distribution arises naturally as the distribution of a sum of iid exponential random variables

• The gamma distribution has expectation $a/b$ and variance $a/b^{2}$

• More generally, a need not be an integer (for example, the Chi-square distribution with one degree of freedom is a Gamma(½, ½) distribution)

If $X_1, X_2, \ldots, X_n \sim \mathrm{Exp}(b)$ (iid), then $S = X_1 + X_2 + \cdots + X_n \sim \mathrm{Gamma}(n, b)$

$f(x \mid a, b) = \dfrac{b^{a}}{\Gamma(a)}\,x^{a - 1} e^{-bx}$

[Figure: gamma densities for a = b = 0.5, a = b = 2 and a = b = 10]
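The sum-of-exponentials construction can be checked by simulation. The R sketch below uses arbitrary illustrative values n = 5 and b = 2 and compares the simulated sums with the Gamma(n, b) density.

# Sum of n iid Exp(b) variables compared with Gamma(n, b)
set.seed(1)
n <- 5; b <- 2
sums <- replicate(1e5, sum(rexp(n, rate = b)))

c(mean(sums), n / b)      # expectation a/b, with a = n
c(var(sums),  n / b^2)    # variance a/b^2

hist(sums, breaks = 60, freq = FALSE, main = "Sum of 5 Exp(2) variables")
curve(dgamma(x, shape = n, rate = b), add = TRUE, col = "red")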

Page 16: Statistical modelling

16

The beta distribution

• The beta distribution models random variables that take values in the interval [0, 1]

• It arises naturally as the proportion $X/(X + Y)$ of two independent gamma-distributed random variables

• The expectation is a/(a + b)

• In Bayesian statistics, the beta distribution is the natural prior for binomial proportions (beta-binomial)

– The Dirichlet distribution generalises the beta to more than 2 proportions


If $X \sim \mathrm{Gamma}(a_1, q)$ and $Y \sim \mathrm{Gamma}(a_2, q)$ independently, then $\dfrac{X}{X + Y} \sim \mathrm{Beta}(a_1, a_2)$

$f(x \mid a, b) = \dfrac{\Gamma(a + b)}{\Gamma(a)\Gamma(b)}\,x^{a - 1}(1 - x)^{b - 1}$

[Figure: beta densities for a = b = 0.5, a = b = 2 and a = b = 10]
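The gamma-ratio construction can likewise be checked by simulation; the shape parameters a1 = 2 and a2 = 5 below are arbitrary illustrative choices.

# If X ~ Gamma(a1, q) and Y ~ Gamma(a2, q), then X/(X + Y) ~ Beta(a1, a2)
set.seed(1)
a1 <- 2; a2 <- 5; q <- 1
x <- rgamma(1e5, shape = a1, rate = q)
y <- rgamma(1e5, shape = a2, rate = q)
prop <- x / (x + y)

c(mean(prop), a1 / (a1 + a2))    # expectation a/(a + b)

hist(prop, breaks = 60, freq = FALSE, main = "X/(X + Y) vs Beta(2, 5)")
curve(dbeta(x, a1, a2), add = TRUE, col = "red")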

Page 17: Statistical modelling

17

The normal distribution

• The normal distribution is related to most distributions through the central limit theorem

• The normal distribution naturally describes variation of characters influenced by a large number of processes (height, weight) or the distribution of large numbers of events (e.g. limit of binomial with large np or Poisson with large m)

$f(x; \mu, \sigma^{2}) = \dfrac{1}{\sqrt{2\pi\sigma^{2}}}\exp\left(-\dfrac{(x - \mu)^{2}}{2\sigma^{2}}\right)$


[Figure: blue = Poisson(100), red = N(100, 10)]

Page 18: Statistical modelling

18

The exponential family of distributions

• Many of the distributions covered (e.g. normal, binomial, Poisson, gamma) belong to the exponential family of probability distributions

• A k-parameter member of the family has a density or frequency function of the form

• E.g. the Bernoulli distribution (x = 0 or 1) is

• Such distributions have the useful property that simple functions of the data, T(x), contain all the information about the model parameters
– E.g. in the Bernoulli case, T(x) = x

$f(x; \theta) = \exp\left[\sum_{i=1}^{k} c_i(\theta)\,T_i(x) + d(\theta) + S(x)\right]$

$P(X = x \mid q) = q^{x}(1 - q)^{1 - x} = \exp\left[x \ln\dfrac{q}{1 - q} + \ln(1 - q)\right]$

Page 19: Statistical modelling

Estimating parameters from data

19

Page 20: Statistical modelling

Parameter estimation

• We can formulate most questions in statistics in terms of making statements about underlying parameters

• We want to devise a framework for estimating those parameters and making statements about our certainty

• There are several different approaches to making such statements
– Moment estimators
– Likelihood
– Bayesian estimation

20

Page 21: Statistical modelling

Moment estimation

• Moment methods are the simplest way of estimating parameters

• In such techniques parameter values are found that match sample moments (mean, variance, etc.) to those expected

• E.g. for random variables $X_1, X_2, \ldots$ sampled from a $N(\mu, \sigma^{2})$ distribution

Sample moments:

$\bar{X} = \dfrac{1}{n}\sum_{i=1}^{n} X_i, \qquad s^{2} = \dfrac{1}{n}\sum_{i=1}^{n} (X_i - \bar{X})^{2}$

Their expectations:

$E(\bar{X}) = \mu, \qquad E(s^{2}) = \dfrac{n - 1}{n}\,\sigma^{2}$

Matching sample moments to their expectations gives the moment estimators:

$\hat{\mu} = \bar{X}, \qquad \hat{\sigma}^{2} = \dfrac{n}{n - 1}\,s^{2}$
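As a small worked sketch (simulated data with arbitrary true values, not from the slides), the moment estimators above can be computed directly in R; note that the rescaled estimator of sigma^2 coincides with R's built-in var, which uses the n − 1 divisor.

# Moment estimators for a N(mu, sigma^2) sample (simulated, arbitrary true values)
set.seed(1)
n <- 50
x <- rnorm(n, mean = 3, sd = 2)

xbar <- mean(x)                    # sample mean
s2   <- mean((x - xbar)^2)         # sample variance with divisor n
c(mu.hat = xbar,
  sigma2.hat = n / (n - 1) * s2,   # moment estimator of sigma^2
  var.builtin = var(x))            # identical to the rescaled estimator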

Page 22: Statistical modelling

Problems with moment estimation

• It is not always possible to exactly match sample moments with their expectation

• It is not clear when using moment methods how much of the information in the data about the parameters is being used

– Often not much..

• Why should MSE be the best way of measuring the value of an estimator?

22

Page 23: Statistical modelling

Is there an optimal way to estimate parameters?

• For any model the maximum information about model parameters is obtained by considering the likelihood function

• The likelihood function is proportional to the probability of observing the data given a specified parameter value

• One natural choice for point estimation of parameters is the maximum likelihood estimate, the parameter values that maximise the probability of observing the data

• The maximum likelihood estimate (mle) has some useful properties (though it is not always optimal in every sense)

23

Page 24: Statistical modelling

An intuitive view on likelihood

[Figure: the same data evaluated under three normal densities: $\mu = 0, \sigma^{2} = 1$; $\mu = 2, \sigma^{2} = 1$; $\mu = 0, \sigma^{2} = 4$]

24

Page 25: Statistical modelling

An example

• Suppose we have data generated from a Poisson distribution. We want to estimate the parameter of the distribution

• The probability of observing a particular random variable is

• If we have observed a series of iid Poisson RVs we obtain the joint likelihood by multiplying the individual probabilities together

$P(X; m) = \dfrac{e^{-m} m^{X}}{X!}$

$P(X_1, X_2, \ldots, X_n; m) = \dfrac{e^{-m} m^{X_1}}{X_1!} \times \dfrac{e^{-m} m^{X_2}}{X_2!} \times \cdots \times \dfrac{e^{-m} m^{X_n}}{X_n!}$

$L(m; \mathbf{X}) \propto e^{-nm}\, m^{\sum_i X_i} = e^{-nm}\, m^{n\bar{X}}$

25

Page 26: Statistical modelling

Comments

• Note in the likelihood function the factorials have disappeared. This is because they provide a constant that does not influence the relative likelihood of different values of the parameter

• It is usual to work with the log likelihood rather than the likelihood. Note that maximising the log likelihood is equivalent to maximising the likelihood

• We can find the mle of the parameter analytically

Take the natural log of the likelihood function:

$\log L(m; \mathbf{X}) = -nm + n\bar{X}\log m + \text{const.}$

Find where the derivative of the log likelihood is zero:

$\dfrac{d}{dm}\log L(m; \mathbf{X}) = -n + \dfrac{n\bar{X}}{m} = 0 \;\Rightarrow\; \hat{m} = \bar{X}$

Note that here the mle is the same as the moment estimator

26
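The same answer can be obtained numerically, which is how MLEs are usually found when no closed form exists. The sketch below uses the Poisson counts 12, 22, 14 and 8 from the worked example a few slides on, and base R's optimize to maximise the log likelihood; the search interval is an arbitrary choice.

# Numerical maximisation of the Poisson log likelihood (counts from the later example)
x <- c(12, 22, 14, 8)
n <- length(x)

loglik <- function(m) -n * m + sum(x) * log(m)   # log L up to an additive constant

optimize(loglik, interval = c(0.1, 50), maximum = TRUE)$maximum   # numerical mle
mean(x)                                                           # analytical mle, xbar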

Page 27: Statistical modelling

Sufficient statistics

• In this example we could write the likelihood as a function of a simple summary of the data – the mean

• This is an example of a sufficient statistic. These are statistics that contain all information about the parameter(s) under the specified model

• For example, suppose we have a series of iid normal RVs

$P(X_1, X_2, \ldots, X_n \mid \mu, \sigma^{2}) = \prod_{i} \dfrac{1}{\sqrt{2\pi\sigma^{2}}}\, e^{-(X_i - \mu)^{2}/2\sigma^{2}}$

$\log L(\mu, \sigma^{2}; \mathbf{X}) = -\dfrac{n}{2}\log\sigma^{2} - \dfrac{n}{2\sigma^{2}}\left(\overline{X^{2}} - 2\mu\bar{X} + \mu^{2}\right) + \text{const.}$

The sufficient statistics are the mean square $\overline{X^{2}}$ and the mean $\bar{X}$

Page 28: Statistical modelling

Properties of the maximum likelihood estimate

• The maximum likelihood estimate can be found either analytically or by numerical maximisation

• The mle is consistent in that it converges to the truth as the sample size gets infinitely large

• The mle is asymptotically efficient in that it achieves the minimum possible variance (the Cramér-Rao Lower Bound) as n→∞

• However, the mle is often biased for finite sample sizes
– For example, the mle for the variance parameter in a normal distribution is the sample variance

28

Page 29: Statistical modelling

Comparing parameter estimates

• Obtaining a point estimate of a parameter is just one problem in statistical inference

• We might also like to ask how good different parameter values are

• One way of comparing parameters is through relative likelihood

• For example, suppose we observe counts of 12, 22, 14 and 8 from a Poisson process

• The maximum likelihood estimate is 14. The relative likelihood is given by

$\dfrac{L(m; \mathbf{X})}{L(\hat{m}; \mathbf{X})} = e^{-n(m - \hat{m})}\left(\dfrac{m}{\hat{m}}\right)^{n\bar{X}}, \qquad \hat{m} = \bar{X}$

29
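A minimal R sketch for these data, plotting the relative likelihood and relative log likelihood surfaces referred to on the next slide (the grid of m values is an arbitrary choice):

# Relative likelihood for the Poisson example with counts 12, 22, 14 and 8
x <- c(12, 22, 14, 8)
n <- length(x); mhat <- mean(x)                     # mle = 14

rel.lik <- function(m) exp(-n * (m - mhat)) * (m / mhat)^(n * mhat)

m <- seq(8, 22, by = 0.1)
plot(m, rel.lik(m), type = "l", ylab = "relative likelihood")
plot(m, log(rel.lik(m)), type = "l", ylab = "relative log likelihood")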

Page 30: Statistical modelling

Using relative likelihood

• The relative likelihood and log likelihood surfaces are shown below

30

Page 31: Statistical modelling

Interval estimation

• In most cases the chance that the point estimate you obtain for a parameter is actually the correct one is zero

• We can generalise the idea of point estimation to interval estimation

• Here, rather than estimating a single value of a parameter we estimate a region of parameter space

– We make the inference that the parameter of interest lies within the defined region

• The coverage of an interval estimator is the fraction of times the parameter actually lies within the interval

• The idea of interval estimation is intimately linked to the notion of confidence intervals

31

Page 32: Statistical modelling

Example

• Suppose I’m interested in estimating the mean of a normal distribution with known variance of 1 from a sample of 10 observations

• I construct an interval estimator

• The chart below shows how the coverage properties of this estimator vary with a

$\hat{\theta} = \left[\bar{X} - a,\; \bar{X} + a\right]$

If I choose a to be 0.62 I would have coverage of 95%

32
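The 95% figure can be verified by simulation; this sketch assumes a true mean of 0 (the coverage does not depend on it) and uses the stated n = 10 and unit variance.

# Coverage of the interval [xbar - a, xbar + a] with a = 0.62, n = 10, sd = 1
set.seed(1)
a <- 0.62; n <- 10; mu <- 0
covered <- replicate(1e5, {
  xbar <- mean(rnorm(n, mean = mu, sd = 1))
  (xbar - a <= mu) & (mu <= xbar + a)
})
mean(covered)        # approximately 0.95

1.96 / sqrt(n)       # the choice a = 1.96/sqrt(10) is roughly 0.62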

Page 33: Statistical modelling

Confidence intervals

• It is a short step from here to the notion of confidence intervals

• We find an interval estimator of the parameter that, for any value of the parameter that might be possible, has the desired coverage properties

• We then apply this interval estimator to our observed data to get a confidence interval

• We can guarantee that among repeat performances of the same experiment the true value of the parameter would be in this interval 95% of the time

• We cannot say "There is a 95% chance of the true parameter being in this interval"

33

Page 34: Statistical modelling

Example – confidence intervals for normal distribution

• Creating confidence intervals for the mean of normal distributions is relatively easy because the coverage properties of interval estimators do not depend on the mean (for a fixed variance)

• For example, the interval estimator below has 95% coverage properties for any mean

• There is an intimate link between confidence intervals and hypothesis testing

$\left[\bar{X} - a,\; \bar{X} + a\right] \quad\text{with}\quad a = \dfrac{1.96\,\sigma}{\sqrt{n}}$

34

Page 35: Statistical modelling

Confidence intervals and likelihood

• Thanks to the central limit theorem there is another useful result that allows us to define confidence intervals from the log-likelihood surface

• Specifically, the set of parameter values for which the log-likelihood is not more than 1.92 less than the maximum likelihood will define a 95% confidence interval

– In the limit of large sample size the likelihood ratio test (LRT) statistic is approximately chi-squared distributed under the null

• This is a very useful result, but it shouldn't be assumed to hold automatically, i.e. check with simulation

35
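Continuing the Poisson example with counts 12, 22, 14 and 8, the 1.92-unit drop interval can be found numerically; uniroot and the bracketing intervals below are illustrative choices.

# Approximate 95% CI for the Poisson mean from the 1.92 log-likelihood drop
x <- c(12, 22, 14, 8)
n <- length(x); mhat <- mean(x)

loglik <- function(m) -n * m + sum(x) * log(m)
drop1.92 <- function(m) loglik(m) - (loglik(mhat) - 1.92)   # zero at the interval ends

lower <- uniroot(drop1.92, c(1, mhat))$root
upper <- uniroot(drop1.92, c(mhat, 100))$root
c(lower = lower, mle = mhat, upper = upper)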

Page 36: Statistical modelling

Making simple models complex: Adding covariates

36

Page 37: Statistical modelling

What is a covariate?

• A covariate is a quantity that may influence the outcome of interest
– Genotype at a SNP
– Age of mice when measurement was taken
– Batch of chips from which gene expression was measured

• Previously, we have looked at using likelihood to estimate parameters of underlying distributions

• We want to generalise this idea to ask how covariates might influence the underlying parameters

• Much statistical modelling is concerned with considering linear effects of covariates on underlying parameters

37

Page 38: Statistical modelling

What is a linear model?

• In a linear model, the expectation of the response variable is defined as a linear combination of explanatory variables

• Explanatory variables can include any function of the original data

• But the link between E(Y) and X (or some function of X) is ALWAYS linear and the error is ALWAYS Gaussian

$Y_i = \beta_0 + \beta_1 X_i + \beta_2 Z_i + \beta_3 X_i Z_i + \cdots + e_i, \qquad e_i \sim N(0, \sigma^{2})$

– $Y_i$: response variable
– $\beta_0$: intercept
– $\beta_1 X_i,\ \beta_2 Z_i$: linear relationships with explanatory variables
– $\beta_3 X_i Z_i$: interaction term
– $e_i$: Gaussian error

38

Page 39: Statistical modelling

What is a GLM?

• There are many settings where the error is non-Gaussian and/or the link between E(Y) and X is not necessarily linear

– Discrete data (e.g. counts in multinomial or Poisson experiments)
– Categorical data (e.g. disease status)
– Highly-skewed data (e.g. income, ratios)

• Generalised linear models keep the notion of linearity, but enable the use of non-Gaussian error models

• g is called the link function
– In linear models, the link function is the identity

• The response variable can be drawn from any distribution of interest (the distribution function)

– In linear models this is Gaussian

$E(Y_i) = \mu_i = g^{-1}\!\left(\beta_0 + \beta_1 X_i + \beta_2 Z_i + \beta_3 X_i Z_i + \cdots\right)$

39

Page 40: Statistical modelling

Poisson regression

• In Poisson regression the expected value of the response variable is given by the exponent of the linear term

• The link function is the log

• Note that several distribution functions are possible (normal, Poisson, binomial counts), though in practice Poisson regression is typically used to model count data (particularly when counts are low)

$E(Y_i) = \mu_i = \exp\!\left(\beta_0 + \beta_1 X_i + \beta_2 Z_i + \beta_3 X_i Z_i + \cdots\right)$

40

Page 41: Statistical modelling

Example: Caesarean sections in public and private hospitals

41

Page 42: Statistical modelling

Boxplots of rates of C sections

42

Page 43: Statistical modelling

Fitting a model without covariates

43

> analysis <- glm(d$Caes ~ d$Births, family = "poisson")
> summary(analysis)

Call:
glm(formula = d$Caes ~ d$Births, family = "poisson")

Deviance Residuals: 
     Min        1Q    Median        3Q       Max  
-2.81481  -0.73305  -0.08718   0.74444   2.19103  

Coefficients:
             Estimate Std. Error z value Pr(>|z|)    
(Intercept) 2.132e+00  1.018e-01  20.949  < 2e-16 ***
d$Births    4.406e-04  5.395e-05   8.165 3.21e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

(Dispersion parameter for poisson family taken to be 1)

    Null deviance: 99.990  on 19  degrees of freedom
Residual deviance: 36.415  on 18  degrees of freedom
AIC: 127.18

Number of Fisher Scoring iterations: 4

Implies an average of 13 per 1000 births

Page 44: Statistical modelling

Fitting a model with covariates

• Unexpectedly, this indicates that public hospitals actually have a higher rate of Caesarean sections than private ones

44

glm(formula = d$Caes ~ d$Births + d$Hospital, family = "poisson")

Deviance Residuals: 
    Min       1Q   Median       3Q      Max  
-2.3270  -0.6121  -0.0899   0.5398   1.6626  

Coefficients:
             Estimate Std. Error z value Pr(>|z|)    
(Intercept) 1.351e+00  2.501e-01   5.402 6.58e-08 ***
d$Births    3.261e-04  6.032e-05   5.406 6.45e-08 ***
d$Hospital  1.045e+00  2.729e-01   3.830 0.000128 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Implies an average of 15.2 per 1000 births in public hospitals and 5.4 per 1000 births in private ones

Page 45: Statistical modelling

Checking model fit

• Look at the distribution of residuals and how well observed values are predicted

45

Page 46: Statistical modelling

What’s going on?

• Initial summary suggested opposite result to GLM analysis. Why?

• The relationship between the number of births and the number of C-sections does not appear to be linear

• Accounting for this removes most of the apparent differences between hospital types

• There is also one quite influential outlier

46

            Normal   Caesarean
Private        412          16
Public       21121         285

Relative risk, Private (compared to Public) = 2.8

Page 47: Statistical modelling

Simpson’s paradox

• Be careful about adding together observations – this can be misleading

• E.g. Berkeley sex-bias case

47

         Applicants   % admitted
Men            8442          44%
Women          4321          35%

Major    Men: Applicants   % admitted    Women: Applicants   % admitted
A                    825          62%                  108          82%
B                    560          63%                   25          68%
C                    325          37%                  593          34%
D                    417          33%                  375          35%
E                    191          28%                  393          24%
F                    272           6%                  341           7%

Appears that women have lower success

But actually women are typically more successful at the departmental level; they simply apply to more competitive subjects

Page 48: Statistical modelling

48

Page 49: Statistical modelling

Finding MLEs in GLM

• In linear modelling we can use the beautiful compactness of linear algebra to find MLEs and estimates of the variance for parameters

• Consider an n by k+1 data matrix, X, where n is the number of observations and k is the number of explanatory variables, and a response vector Y

– the first column is ‘1’ for the intercept term

• The MLEs for the coefficients (b) can be estimated using

• In GLMs, there is usually no such compact analytical expression for the MLEs– Use numerical methods to maximise the likelihood

$\hat{\boldsymbol{\beta}} = (X^{T}X)^{-1}X^{T}Y, \qquad \hat{\boldsymbol{\beta}} \sim N\!\left(\boldsymbol{\beta},\; \sigma^{2}(X^{T}X)^{-1}\right)$

49
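A small sketch on simulated data (arbitrary coefficients, not from the slides) showing that the matrix formula reproduces the estimates returned by lm:

# Closed-form estimates (X'X)^(-1) X'Y versus lm(), on simulated data
set.seed(1)
n <- 100
x1 <- rnorm(n); x2 <- rnorm(n)
y  <- 2 + 1.5 * x1 - 0.5 * x2 + rnorm(n)   # arbitrary true coefficients

X <- cbind(1, x1, x2)                      # first column of 1s for the intercept
beta.hat <- solve(t(X) %*% X) %*% t(X) %*% y
beta.hat

coef(lm(y ~ x1 + x2))                      # the same estimates from lm()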

Page 50: Statistical modelling

Testing hypotheses in GLMs

• For the parameters we are interested in we typically want to ask how much evidence there is that these are different from zero

• For this we need to construct confidence intervals for the parameter estimates

• We could estimate the confidence interval by finding all parameter values with log-likelihood no greater than 1.92 units worse than the MLE (2-unit support interval)

• Alternatively, we might appeal to the central limit theorem (CLT) and use bootstrap techniques to estimate the variance of parameter estimates

• However, we can also appeal to theoretical considerations of likelihood (again based on the CLT) that show that parameter estimates are asymptotically normal with variance described by the Fisher information matrix

• Informally, the information matrix describes the sharpness of the likelihood curve around the MLE and the extent to which parameter estimates are correlated

50

Page 51: Statistical modelling

Logistic regression

• When only two types of outcome are possible (e.g. disease/not-disease) we can model counts by the binomial

• If we want to perform inference about the factors that influence the probability of ‘success’ it is usual to use the logistic model

• The link function here is the logit

$E(Y_i) = \mu_i = \dfrac{\exp(\beta_0 + \beta_1 X_i + \beta_2 Z_i + \cdots)}{1 + \exp(\beta_0 + \beta_1 X_i + \beta_2 Z_i + \cdots)}$

$g(\mu) = \log\dfrac{\mu}{1 - \mu}$

51

Page 52: Statistical modelling

Example: testing for genotype association

• In a cohort study, we observe the number of individuals in a population that get a particular disease

• We want to ask whether a particular genotype is associated with increased risk

• The simplest test is one in which we consider a single coefficient for the genotypic value

Genotype                       AA           Aa           aa
Genotypic value                 0            1            2
Frequency in population        q²       2q(1 − q)     (1 − q)²
Probability of disease         p0           p1           p2

$p_i = \dfrac{1}{1 + e^{-[\beta_0 + \beta_1 G_i]}}$

[Figure: probability of disease against genotypic value (0, 1, 2) for $\beta_0 = -4$, $\beta_1 = 2$]

52

Page 53: Statistical modelling

A note on the model

• Note that each copy of the risk allele contributes in an additive way to the exponent

• This does not mean that each allele ‘adds’ a fixed amount to the probability of disease

• Rather, each allele contributes a fixed amount to the log-odds

• This has the effect of maintaining Hardy-Weinberg equilibrium within both the cases and controls

$\log \mathrm{odds} = \log\dfrac{P(\text{disease})}{P(\text{not disease})} = \beta_0 + \beta_1 G_i$

53

Page 54: Statistical modelling

Cont.

• Suppose in a given study we observe the following counts

• We can fit a GLM using the logit link function and binomial probabilities

• We have genotype data stored in the vector gt and disease status in the vector status

• Using R, this is specified by the command– glm(formula = status ~ gt, family = binomial)

Genotype                     0      1      2
Counts with disease         26     39     21
Counts without disease    1298    567     49

54
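The vectors gt and status can be reconstructed directly from this table of counts (2000 individuals in total), so the fit shown on the next slide can be reproduced; a minimal sketch:

# Rebuild individual-level data from the table of counts and fit the logistic model
gt     <- c(rep(0, 26), rep(0, 1298), rep(1, 39), rep(1, 567), rep(2, 21), rep(2, 49))
status <- c(rep(1, 26), rep(0, 1298), rep(1, 39), rep(0, 567), rep(1, 21), rep(0, 49))

length(gt)                                    # 2000 observations
summary(glm(status ~ gt, family = binomial))  # should match the output on the next slide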

Page 55: Statistical modelling

Interpreting results

Call:
glm(formula = status ~ gt, family = binomial)

Deviance Residuals: 
    Min       1Q   Median       3Q      Max  
-0.8554  -0.4806  -0.2583  -0.2583   2.6141  

Coefficients:
            Estimate Std. Error z value Pr(>|z|)    
(Intercept)  -4.6667     0.2652 -17.598   <2e-16 ***
gt            1.2833     0.1407   9.123   <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

(Dispersion parameter for binomial family taken to be 1)

    Null deviance: 967.36  on 1999  degrees of freedom
Residual deviance: 886.28  on 1998  degrees of freedom
AIC: 890.28

Number of Fisher Scoring iterations: 6

MLE for coefficients

Standard error for estimates

Estimate/std. error

P-value from normal distribution

Measure of contribution of individual observations to overall goodness of fit (for MLE model)

Measure of goodness of fit of null (compared to saturated model)

Measure of goodness of fit of fitted model

Penalised likelihood used in model choice
Number of iterations used to find MLE

55

Page 56: Statistical modelling

Adding in extra covariates

• Adding in additional explanatory variables in GLM is essentially the same as in linear model analysis

• Likewise, we can look at interactions

• In the disease study we might want to consider age as a potentially important covariate

glm(formula = status ~ gt + age, family = binomial)

Coefficients:
            Estimate Std. Error z value Pr(>|z|)    
(Intercept) -7.00510    0.79209  -8.844   <2e-16 ***
gt           1.46729    0.17257   8.503   <2e-16 ***
age          0.04157    0.01903   2.185   0.0289 *  

56

Page 57: Statistical modelling

Adding model complexity

• In the disease status analysis we might want to generalise the fitted model to one in which each genotype is allowed its own risk

$p_i = \dfrac{1}{1 + e^{-[\beta_0 + \beta_1 I(G_i = 1) + \beta_2 I(G_i = 2)]}}$

glm(formula = status ~ g1 + g2 + age, family = binomial)

Coefficients:
            Estimate Std. Error z value Pr(>|z|)    
(Intercept) -5.46870    0.73155  -7.476 7.69e-14 ***
g1TRUE       1.25694    0.26278   4.783 1.72e-06 ***
g2TRUE       3.00816    0.33871   8.881  < 2e-16 ***
age          0.04224    0.01915   2.206   0.0274 *  
---
    Null deviance: 684.44  on 1999  degrees of freedom
Residual deviance: 609.09  on 1996  degrees of freedom
AIC: 617.09

57

Page 58: Statistical modelling

Model choice

• It is worth remembering that we cannot simply identify the ‘significant’ parameters and put them in our chosen model

• Significance for a parameter tests the marginal null hypothesis that the coefficient associated with that parameter is zero

• If two explanatory variables are highly correlated then marginally neither may be significant, yet a linear model containing only one of them could show that variable as highly significant

• There are several ways to choose appropriate models from data. These typically involve adding in parameters one at a time and adding some penalty to avoid over-fitting.

58