Structural Equation Modeling Continued: Lecture 2 Psy 524 Ainsworth

Upload: dominick-skinner
Post on 24-Dec-2015

TRANSCRIPT

Page 1

Structural Equation Modeling Continued: Lecture 2
Psy 524 Ainsworth

Page 2

Covariance Algebra

Underlying parameters in SEM: regression coefficients, variances, and covariances.
A hypothesized model is used to estimate these parameters for the population (assuming the model is true).
The parameter estimates are used to create a hypothesized variance/covariance (VC) matrix.
The sample VC matrix is compared to the estimated VC matrix.

Page 3

Covariance Algebra

Three basic rules of covariance algebra:

COV(c, X1) = 0
COV(cX1, X2) = c * COV(X1, X2)
COV(X1 + X2, X3) = COV(X1, X3) + COV(X2, X3)

Reminder: regular old-fashioned regression: Y = bX + a

Page 4

Example Model

[Path diagram: treatment group (X1) → motivation (Y1) with weight γ11; treatment group (X1) → exam score (Y2) with weight γ21; motivation (Y1) → exam score (Y2) with weight β21; errors ε1 and ε2 point to Y1 and Y2.]

Page 5

Covariance Algebra

In covariance structure models the intercept is not used. Given the example model, here are the equations:

Y1 = γ11X1 + ε1
Here Y1 is predicted by X1 and ε1 only; X1 is exogenous, so γ is used to indicate the weight.

Y2 = β21Y1 + γ21X1 + ε2
Y2 is predicted by X1 and Y1 (plus error); Y1 is endogenous, so its weight is indicated by β. The two different weights help to indicate what type of relationship is involved (a DV predicted by an IV vs. a DV predicted by another DV).

Page 6

Covariance Algebra

Estimated covariance given the model: COV(X1, Y1), the simple path.

Since Y1 = γ11X1 + ε1 we can substitute:
COV(X1, Y1) = COV(X1, γ11X1 + ε1)

Using rule 3 we distribute:
COV(X1, γ11X1 + ε1) = COV(X1, γ11X1) + COV(X1, ε1)

By definition of regression COV(X1, ε1) = 0, so this simplifies to:
COV(X1, γ11X1 + ε1) = COV(X1, γ11X1)

Using rule 2 we can pull γ11 out:
COV(X1, Y1) = γ11COV(X1, X1)

COV(X1, X1) is the variance of X1, so:
COV(X1, Y1) = γ11σ²x1

Page 7

Covariance Algebra

Estimated covariance given the model: COV(Y1, Y2), the complex path.

Substituting the equations in for Y1 and Y2:
COV(Y1, Y2) = COV(γ11X1 + ε1, β21Y1 + γ21X1 + ε2)

Distributing all of the pieces:
COV(Y1, Y2) = COV(γ11X1, β21Y1) + COV(γ11X1, γ21X1) + COV(γ11X1, ε2) + COV(ε1, β21Y1) + COV(ε1, γ21X1) + COV(ε1, ε2)

Nothing should correlate with the errors, so those terms all drop out:
COV(Y1, Y2) = COV(γ11X1, β21Y1) + COV(γ11X1, γ21X1)

Rearranging:
COV(Y1, Y2) = γ11β21COV(X1, Y1) + γ11γ21COV(X1, X1)
COV(Y1, Y2) = γ11β21σx1y1 + γ11γ21σ²x1

Page 8

Back to the path model

[Path diagram repeated: treatment group (X1) → motivation (Y1) with weight γ11; X1 → exam score (Y2) with weight γ21; Y1 → Y2 with weight β21; errors ε1 and ε2.]

COV(X1, Y1) = γ11σ²x1
COV(Y1, Y2) = γ11β21σx1y1 + γ11γ21σ²x1

Page 9

Example

Covariance Matrix:

          NUMYRS  DAYSKI  SNOWSAT  FOODSAT  SENSEEK
NUMYRS      2.74    0.80     0.68     0.65     2.02
DAYSKI      0.80    3.25     0.28     0.35     2.12
SNOWSAT     0.68    0.28     1.23     0.72     2.02
FOODSAT     0.65    0.35     0.72     1.87     2.12
SENSEEK     2.02    2.12     2.02     2.12    27.00

Page 10

Example

[Path diagram: latent factor LOVESKI, "love of skiing" (F1, variance fixed at 1.00), with indicators NUMYRS, "number of years skied" (V1), and DAYSKI, "total number of days skied" (V2); latent factor SKISAT, "ski trip satisfaction" (F2), with indicators SNOWSAT, "snow satisfaction" (V3), and FOODSAT, "food satisfaction" (V4); measured variable SENSEEK, "sensation seeking" (V5), predicts F2; errors E1–E4 on V1–V4 and disturbance D2 on F2 each have a path fixed at 1; asterisks (*) mark free parameters.]

Page 11

SEM models

All of the relationships in the model are translatable into equations.
Analysis in SEM is preceded by specifying a model like the one in the previous slide.
This model is used to create an estimated covariance matrix.

Page 12

SEM models

The goal is to specify a model whose estimated covariance matrix is not significantly different from the sample covariance matrix.
CFA differs from EFA in that the difference can be tested using a chi-square test.
If ML methods are used in EFA, a chi-square test can be estimated as well.

Page 13

Model Specification

Bentler-Weeks model matrices:
– Beta matrix (β): matrix of regression coefficients of DVs predicting other DVs
– Gamma matrix (γ): matrix of regression coefficients of DVs predicted by IVs
– Phi matrix (Φ): matrix of covariances among the IVs
– Eta matrix (η): vector of DVs
– Xi matrix (ξ): vector of IVs

Bentler-Weeks regression model:

η = βη + γξ

where q is the number of DVs and r is the number of IVs; η is q × 1, β is q × q, γ is q × r, and ξ is r × 1.
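The Bentler-Weeks equation η = βη + γξ can be made concrete with the earlier two-DV path model (Y1, Y2 as DVs; X1, ε1, ε2 as IVs). A minimal sketch; the parameter values and the IV draw are arbitrary illustrative choices, not estimates from the slides:

```python
# Sketch of eta = B*eta + G*xi for the two-DV example path model.
gamma11, gamma21, beta21 = 0.6, 0.3, 0.5   # illustrative weights

# eta = [Y1, Y2]; xi = [X1, e1, e2]
B = [[0.0, 0.0],          # no DV predicts Y1
     [beta21, 0.0]]       # Y1 predicts Y2
G = [[gamma11, 1.0, 0.0], # Y1 = gamma11*X1 + 1*e1
     [gamma21, 0.0, 1.0]] # Y2 = gamma21*X1 + 1*e2 (plus the beta term)

xi = [2.0, 0.1, -0.2]     # one illustrative realization of the IVs

# Solve eta = B eta + G xi by substitution (B is lower triangular here)
y1 = sum(g * v for g, v in zip(G[0], xi))                 # row 1 has no beta term
y2 = B[1][0] * y1 + sum(g * v for g, v in zip(G[1], xi))  # row 2 uses y1

# These match the scalar equations from Page 5
assert abs(y1 - (gamma11 * 2.0 + 0.1)) < 1e-12
assert abs(y2 - (beta21 * y1 + gamma21 * 2.0 - 0.2)) < 1e-12
```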

Page 14

Model Specification

Bentler-Weeks model: only independent variables have covariances (the Phi matrix).
This includes everything with a single-headed arrow pointing away from it (e.g., E's, F's, V's, D's, etc.).
The estimated parameters are in the β, γ, and Φ matrices.

Page 15

Model Specification

Bentler-Weeks model in matrix form [matrices shown as a figure]

Page 16

Diagram again

[Same path diagram as shown in the Example on Page 10.]

Page 17

Model Specification

Phi matrix: it is here that other covariances can be specified.

Page 18

Model Estimation

Model estimation in SEM requires start values.
There are methods for generating good start values; "good" means fewer iterations are needed to estimate the model.
Just rely on EQS or other programs to generate them for you (they do a pretty good job).

Page 19

Model Estimation

These start values are used in the first round of model estimation by inserting them into the Bentler-Weeks model matrices (indicated by a "hat") in place of the *s.
Using EQS we get start values for the example, placed in β̂, γ̂, and Φ̂; the hats indicate estimated matrices.

Page 20

Model Estimation

Page 21

Model Estimation

Selection matrices (G) are used to pull apart variables so they can be used in matrix equations to estimate the covariances.

So that Y = Gyη, where Y contains only the measured dependent variables.

Page 22

Model Estimation

Gx = [1 0 0 0 0 0 0], so that X = Gxξ, where X is (are) the measured independent variable(s).

Rewriting the matrix equation to solve for η we get:

η = (I − β)⁻¹γξ

This expresses the DVs as linear combinations of the IVs.

Page 23

Model Estimation

The estimated population covariance matrix for the DVs is found by:

Σ̂yy = Gy(I − β)⁻¹γΦγ′[(I − β)⁻¹]′Gy′
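This computation can be sketched on the small two-DV path model from earlier (not the ski example), with illustrative parameter values; both DVs are measured, so the selection matrix is the identity and is omitted:

```python
# Implied DV covariance matrix (I - B)^-1 Gamma Phi Gamma' [(I - B)^-1]'
# for the two-DV path model, using plain lists. Values are illustrative.
gamma11, gamma21, beta21 = 0.6, 0.3, 0.5
var_x1, var_e1, var_e2 = 4.0, 1.0, 1.0

def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

# (I - B)^-1 for B = [[0, 0], [beta21, 0]] is [[1, 0], [beta21, 1]]
inv_ImB = [[1.0, 0.0], [beta21, 1.0]]
Gamma = [[gamma11, 1.0, 0.0],
         [gamma21, 0.0, 1.0]]
Phi = [[var_x1, 0.0, 0.0],      # IVs X1, e1, e2 are mutually uncorrelated
       [0.0, var_e1, 0.0],
       [0.0, 0.0, var_e2]]

A = matmul(inv_ImB, Gamma)      # DVs as linear combinations of the IVs
Sigma_yy = matmul(matmul(A, Phi), transpose(A))

# Spot-checks against the regression identities:
# Var(Y1) = gamma11^2 Var(X1) + Var(e1)
assert abs(Sigma_yy[0][0] - (gamma11**2 * var_x1 + var_e1)) < 1e-12
# Cov(Y1, Y2) = beta21 Var(Y1) + gamma21 Cov(X1, Y1)
assert abs(Sigma_yy[0][1]
           - (beta21 * Sigma_yy[0][0] + gamma21 * gamma11 * var_x1)) < 1e-12
```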

Page 24

Model Estimation

The estimated covariance matrix between DVs and IVs is:

Σ̂yx = Gy(I − β)⁻¹γΦGx′

For the example, Σ̂yx = [0.00, 0.00, .19, .19]′

Page 25

Model Estimation

The covariance(s) between IVs is estimated by:

Σ̂xx = GxΦGx′

For the example, Σ̂xx = [27] (the variance of SENSEEK).

Page 26

Model Estimation

Programs like EQS usually estimate all parameters simultaneously, giving an estimated covariance matrix Σ̂ (sigma-hat).
S is the sample covariance matrix.
The residual matrix is found by subtracting Σ̂ from S.
This whole process is iterative, so after the first iteration the values of the output hat matrices are input as new start values.

Page 27

Model Estimation

Iterations continue until the function (usually ML) is minimized; this is called convergence.
After five iterations the residual matrix in the example is: [residual matrix shown as a figure]

Page 28

Model Estimation

When the model converges you get estimates of all of the parameters and a converged residual matrix.

Page 29

Model Estimation

Page 30

Model Evaluation

χ² is based on the function minimum (from ML estimation) when the model converges.
The minimum value is multiplied by N − 1. From EQS:
(.08924)(99) = 8.835
The DFs are calculated as the difference between the number of data points, p(p + 1)/2, and the number of parameters estimated.
In the example 5(6)/2 = 15, and there are 11 estimates, leaving 4 degrees of freedom.
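The arithmetic above can be reproduced directly; the closed-form chi-square survival function used below is specific to df = 4:

```python
# Degrees of freedom and chi-square for the example, as on the slide.
import math

p = 5                               # number of observed variables
data_points = p * (p + 1) // 2      # 15 unique variances and covariances
n_params = 11                       # parameters estimated in the example
df = data_points - n_params         # 4 degrees of freedom

chi_sq = 0.08924 * 99               # F_min * (N - 1), as reported by EQS
# For df = 4 the chi-square survival function is exp(-x/2) * (1 + x/2)
p_value = math.exp(-chi_sq / 2) * (1 + chi_sq / 2)

assert df == 4
assert abs(chi_sq - 8.835) < 0.001
assert abs(p_value - 0.065) < 0.001   # matches the p = .065 reported next
```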

Page 31

Model Estimation

χ²(4) = 8.835, p = .065
The goal in SEM is to get a non-significant chi-square, because that means there is little difference between the hypothesized and sample covariance matrices.
Even with a non-significant model you need to test the significance of the predictors.
Each parameter is divided by its SE to get a Z-score which can be evaluated.
SE values are best left to EQS to estimate.
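The parameter Z-test can be sketched as follows; the estimate and SE are made-up illustrative values, not output from the example:

```python
# Sketch of the parameter significance test: estimate / SE gives a Z statistic.
import math

def z_test(estimate, se):
    """Return the Z statistic and a two-tailed p-value via the normal CDF."""
    z = estimate / se
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p

z, p = z_test(0.80, 0.25)   # hypothetical parameter estimate and SE
assert abs(z - 3.2) < 1e-12
assert p < 0.05             # |Z| > 1.96, so significant at alpha = .05
```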

Page 32

Final Model

Final example model with unstandardized and standardized values [shown as a figure]