tf7.manova.ovhds.2013



    1

    BETWEEN-SUBJECTS

    MULTIVARIATE ANALYSIS OF VARIANCE

(Chapter 7)

    Many people who become psychologists are motivated by a desire to improve

    the quality of life within their society and a strong belief in the ability of

    science to help in this regard. It is natural, therefore, that psychological

    research has emphasized the identification of manipulable variables through

    experimental research programs.

    As theory became more sophisticated, researchers focussed a little more on

    understanding complex social problems and designing interventions to

    alleviate these problems. However, efficacious causal agents are likely to have

multiple effects, some of which are negative as well as positive. Increasingly,

    therefore, psychologists have designed laboratory and field experiments to test

    hypotheses involving the complex relationship between a set of interacting

    independent variables and a set of dependent variables. It is this situation

    that requires the use of multivariate analysis of variance (MANOVA).


    2

    Consider a relatively simple 2 x 2 x 2 factorial design. Analyzing one

    dependent variable in an experiment that uses this design involves testing

    three main effects, three two-way interactions, and one three-way interaction.

    Analyzing three dependent variables, therefore, involves 21 separate and

    independent significance tests; a situation that is likely to result in a Type 1

    error. Using dependent variables in combination through MANOVA

circumvents this high experimentwise error rate. Then ANOVA is used to identify the main source of these effects.
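To see the size of the problem (a quick Python sketch, not part of the TF text; it simply applies the usual .05 alpha level to the 21 tests just mentioned):

# Sketch: familywise Type 1 error rate for 21 independent tests at alpha = .05.
alpha = 0.05
k = 21  # 7 effects (3 main effects, 3 two-way, 1 three-way) x 3 dependent variables
print(1 - (1 - alpha) ** k)  # roughly 0.66 - a very high experimentwise error rate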

    Given that the researcher is mainly interested in the results of the ANOVA,

    this analysis strategy relegated MANOVA to a preliminary analysis

    procedure. However, it was soon realized that it had far more potential.

    First, MANOVA combines dependent variables in different ways so as to

    maximize the separation of the conditions specified by each comparison in the

    design. This is new information that is not available through ANOVA.

    As well, this property of MANOVA can be used to identify those dependent

    variables that clearly separate important social groups. This use of MANOVA

is called Discriminant Function Analysis (not covered in this course; see Chapter 9).


    3

    Analysis of variance designs including a within-subjects factor having more

    than 2 levels must meet the sphericity assumption. However, the most

    common within-subject factor in applied research is time of testing (pretests

and posttests) which clearly violates this assumption. Specifying these pretests

    and posttests as dependent variables in a special form of MANOVA called

profile analysis avoids this problem, making it the analysis strategy of choice

    in this circumstance.

The key research questions addressed by MANOVA are very much the same as those addressed by an ANOVA design. If participants are assigned at

    random to the experimental conditions in the design, MANOVA allows a

    researcher to examine the causal influence of the independent variables on a

    set of dependent variables both alone (main effects) and in interaction.

    Further, with equal numbers of participants in each cell of the design, these

    main effects and interactions are independent of one another making the

    interpretation of the results very clear.

    In most laboratory experiments the independent variables are manipulated in

    an artificial and simplified social context that maximizes their effects on the

    dependent variables while minimizing within-cell variance. It is not very

    productive, therefore, to know the magnitude of the effects caused by these

    independent variables.


    4

    In field experiments this artificiality is often not present making the impact of

    the independent variables on the dependent variables a truer estimate of the

    impact of these variables in social life. In this circumstance, as well as within

    field experiments using weaker quasi-experimental designs, the amount of the

    variance in the dependent variables explained by the independent variables is

    important to know and can be determined using MANOVA.

MANOVA involves testing the influence of the independent variables upon a set of dependent variables. Therefore, it is possible to examine which of these

    dependent variables is most impacted by these independent variables. That is,

    the relative impact of the independent variables on each dependent variable

    can be estimated.

    MANOVA (like ANOVA) can also be extended to include covariates

    (MANCOVA). That is, the influence of the independent variables on the

    dependent variables controlling for important covariates can be assessed.

    Hypotheses often specify particular contrasts within an overall interaction

    such as a comparison of the treatment group with a placebo control group

    and a no-treatment control group at posttest. MANOVA allows tests of these

    specific hypotheses as well as the overall main effects and interactions.


    5

    Testing Assumptions and Other Practical Issues

    Before Conducting MANOVA

    When to use MANOVA

    On page 251, TF argue that MANOVA should be used when the dependent

    variables are independent of one another. In this situation the various

different impacts of the independent variables are most likely to be

    completely assessed. As well, they argue that when dependent variables

    are highly correlated, it might be more advisable to combine these

    variables into a composite and use a simpler analysis. However, this

    position does not seem to address the situation most likely to be faced by

    researchers using MANOVA; namely, that a set of dependent variables

    measuring different but related constructs are employed to examine the

    effects of the independent variables (usually DVs are moderately

    correlated). It is this practical imperative that leads other statistical

    authors (and TF on page 270!) to state that this is the situation in which

    MANOVA should be the analysis strategy of choice.


    6

    Applied researchers often design field experiments or evaluations of social

    programs using a multiplist research design strategy. As is commonly

    known by these researchers, measuring several different outcomes using a

    variety of different methods is one easy way to be reasonably sure that the

    study will result in usable information. In this situation it is of little

    consequence whether some or all the dependent variables are correlated.

    The fact remains that they must be analyzed controlling for

    experimentwise error!

Tabachnick and Fidell's advice on page 251 does not reflect the most

    common research strategy used by experimental and applied psychologists

    alike: the strategy in which sets of dependent variables bracketing a

    general but loose construct are entered into separate MANOVAs. For

    example, indicators of psychological distress might be the dependent

    variables in one analysis, while indicators of physical health problems

    might be the dependent variables in another. Each MANOVA can then

    give information on the dependent variable(s) in each set that is most

    affected by the treatment (experimental manipulation).


    7

    Multivariate Normality and the Detection of Outliers

    Multivariate analysis of variance procedures make the assumption that the

    various means in each cell of the design and any linear combination of

    these means are normally distributed. Provided there are no outliers in

each cell of the design, the analysis is robust to this assumption, especially

    since the Central Limit Theorem states that the distribution of sample

    means derived from a non-normal distribution of raw scores tends to be

    normal. As a rule of thumb, applied researchers try to achieve fairly equal

    numbers of participants within each cell of the design and a minimum

within-cell sample size of at least 20 (I prefer 20 plus the number of dependent

variables) so as to ensure that this assumption is not seriously violated. However, the most important guideline is to check each cell for outliers

    before running any MANOVA.


    8

    Homogeneity of the Variance-Covariance Matrix

    In ANOVA, this is the homogeneity of variance assumption for the dependent

    variable: the variance within each cell of the design is assumed to be equal.

In MANOVA the equivalent assumption is that the variance-covariance matrix for the dependent variables within each cell of the design is equal. It is important to remove or correct for outliers before checking this assumption as they greatly influence the values in the variance-covariance matrices. If the cell sizes are relatively equal and the outliers have been dealt with, the

    analysis is robust to this assumption.

If cell sizes are unequal, use Box's M test of the homogeneity of the variance-covariance matrices. This test tends to be too sensitive and so Tabachnick and

    Fidell recommend that the researcher only be concerned if this test is

    significant at the p < .001 level and cell sizes are unequal.

    As well, if larger cell sizes are associated with larger variances and

    covariances, the significance levels are conservative. It is only when smaller

    cell sizes are associated with larger variances that the tests are too liberal

    indicating some effects are significant when they really are not (a high Type 1

    error rate).

    Fmax is the ratio of the largest to the smallest cell variance. An Fmax as

    large as 10 is acceptable provided that the within-cell sample sizes are within a

    4 to 1 ratio. More discrepant cell sizes cause more severe problems.


    9

    Unequal Cell Sizes

The homogeneity of the variance-covariance matrices cannot be tested if

    the number of dependent variables is greater than or equal to the number

    of research participants in any cell of the design. Even when this

    minimum number of respondents is achieved, this assumption is easily

    rejected and the power of the analysis is low when the number of

    respondents is only slightly greater than the number of dependent

    variables. This can result in the MANOVA yielding no significant effects,

even though individual ANOVAs yield significant effects supporting the hypothesis, a very undesirable outcome to say the least! Hence the rule of

    thumb of 20 plus the number of dependent variables as a minimum cell size

    and the strategy of analysing small sets of dependent variables that often

    measure a general, loosely defined construct (e.g., indicators of

    psychological distress).

    Power for MANOVA is more complex than for ANOVA as it depends upon

    the relationships among the dependent variables. The Appendix shows how

you can calculate power a priori if you have information from past

    research (not on the exam).

Both GLM and MANOVA calculate an approximate observed power

    value which is the probability that the F value for a particular multivariate

    main effect or interaction is significant if the differences among the means

in the population are identical to the differences among the means in the

    sample. In addition, GLM calculates an estimate of the effect size for each

main effect and interaction, partial η².


    10

    Alternative Methods for Dealing with Unequal Cell Sizes

    When the cell sizes in a factorial design are unequal, the main effects are

    correlated with the interactions. This means that adjustments have to be

made so that the interpretation of these effects is clear. For designs in

    which all cells are equally important (i.e., the sample size does not reflect

    the population size), Method 1 is used in which all the effects are calculated

partialling out every other effect in the design (similar to the standard multiple regression approach). This method is labelled as METHOD =

    UNIQUE in SPSS MANOVA and as METHOD = SSTYPE(3) in the

    General Linear Model (GLM) SPSS program (the SPSS defaults).

    For non-experimental designs in which sample sizes reflect the relative

    sizes of the populations from which they are drawn (their relative

    importance), Method 2 is used in which there is a hierarchy for testing

    effects starting with the main effects (and covariates) which are not

    adjusted, then the 2 way interaction terms which are adjusted for the main

    effects, etc... This method is labelled METHOD = SEQUENTIAL in SPSS

    MANOVA and METHOD = SSTYPE(1) in SPSS GLM.

    If the researcher wants to specify a particular sequence to the hierarchy in

    which the main effects and interactions are tested, Method 3 is used. This

    method is also labelled METHOD = SEQUENTIAL in SPSS MANOVA

    and METHOD = SSTYPE(1) in SPSS GLM with the sequence specified on

    the DESIGN subcommand.


    11

    Linearity

As with all analyses that rely on correlations or variance-covariances, the

    assumption is that all dependent variables and covariates are linearly

    related within each cell of the design. Although scatter plots are used to

    check this assumption, they only give a rough guide as the sample size

    within each cell is quite small.

    Multicollinearity

    Although the dependent variables are intercorrelated, it is not desirable to

    have redundancy among them. Both GLM and MANOVA analyses output

    the pooled within-cell correlations among the dependent variables. As well,

MANOVA prints out the determinant of the within-cell variance-covariance matrix, which Tabachnick and Fidell suggest should be greater

    than .0001. If these indices suggest that multicollinearity is a problem, the

    redundant dependent variable can be deleted from the analysis, or a

    Principal Components Analysis can be done on the pooled within-cell

    correlation matrix. The factor scores from this analysis are then used as

    the dependent variables in the MANOVA. For SPSS GLM the computer

    will guard against extreme multicollinearity by calculating a tolerance

value for each dependent variable and comparing it to 10^-8. The analysis

    will not run if the tolerance is less than this value. However, this is an

    extreme level of multicollinearity.


    12

    Homogeneity of Regression and Reliability of Covariates

    When covariates are used in the analysis (MANCOVA) or if the researcher

    plans to use the Roy-Bargmann stepdown procedure to examine the

    relative importance of the individual dependent variables, the relationship

    between the dependent variables and the covariates MUST be the same

    within every group in the design. That is, the slope of the regression line

    must be the same in every experimental condition. If heterogeneity of

regression is found, then the slopes of the regression lines differ; that is, there is an interaction between the covariates and the independent

    variables. If this occurs, MANCOVA is an inappropriate analysis strategy

    to use.

    Before running a MANCOVA or the Roy-Bargmann procedure, therefore,

    the pooled interactions among the covariates and the independent variables

    must be shown to be non-significant (usually, p < .01 is used to detect the

    significance of these pooled interaction terms). In addition, the covariates

must be reasonably reliable (reliability > .80; see TF, page 255). Using unreliable covariates can result in the effects of the independent variables on the

dependent variables being either under-adjusted or over-adjusted, making

    the results of the MANCOVA suspect.


    13

    The Mathematical Basis of Multivariate Analysis of Variance

    The mathematical basis of MANOVA is explained by extension using

    the mathematical basis of ANOVA. Given that there is more than one

    dependent variable in MANOVA, this analysis includes the SSCP matrix, S,

    among these dependent variables. Significance tests for the main effects and

    interactions obtained through the MANOVA procedure compare ratios of

determinants of the SSCP matrices calculated from between-group differences and pooled within-group variability. (Directly analogous to calculating the F

    ratio by dividing the mean square between by the mean square within in

    ANOVA.) The key point to grasp here is that the determinant of a SSCP

    matrix can be conceptualised as an estimate of the generalized variance minus

    the generalized covariance in this matrix. To show this consider the

    determinant of a correlation matrix for two dependent variables:

| 1    r12 |
| r12   1  |   =   (1 - r12^2)

    As this example clearly shows, the determinant is the proportion of the

    variance not common to these two interrelated variables (see also Appendix A,

p. 932 for a different but equivalent explanation using a variance-covariance

    matrix).
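As a small numeric sketch of this point (the value of r12 below is made up, not taken from TF):

import numpy as np

# Sketch: the determinant of a 2 x 2 correlation matrix equals 1 - r12^2,
# i.e., the proportion of variance NOT common to the two correlated DVs.
r12 = 0.6  # illustrative correlation
R = np.array([[1.0, r12],
              [r12, 1.0]])
print(np.linalg.det(R), 1 - r12 ** 2)  # both are 0.64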


    14

    A difference from ANOVA is that the significance of each effect calculated by

    the MANOVA procedure is for a linear combination of the dependent

    variables that maximizes the differences among the cells defined by that

effect. This means that each effect is associated with a different linear

combination of the dependent variables that satisfies this criterion, and knowing

    how the dependent variables are weighted for each significant effect is of

    interest to the researcher. The implication is that the proportion of variance

    accounted for by all the effects combined is greater than 1 (the upper limit for

a correlation and a squared multiple correlation), making it difficult to know

exactly how much variance is accounted for by each of the significant effects identified through MANOVA, although their relative strengths are known.

    In an ANOVA of a between-subjects factorial design, the total sum of squares

    can be partitioned into the sum of squares associated with each main effect

    and interaction (more generally into orthogonal contrasts) and the pooled

    within-cell sum of squares. With the exception of the last sum of squares, these

    are derived from the deviations of mean scores from the grand mean.

Estimates of variance are then derived from each of these sums of squares (the

    mean squares) and the significance of the effect is tested by examining the

    ratio of each of the various between-groups mean squares with the pooled

    within-groups mean square (the F ratio).


    15

    In an analogous manner, a MANOVA of a between-subjects factorial design

uses the SSCP matrix, S, for each effect in the design, which is derived by

post-multiplying the matrix of difference scores for the effect (mean - grand

mean) by its transpose. The determinants (the generalized variance) of these

    matrices can then be used to test whether the effect is significant or not.

    To illustrate this process, consider a simple 2 x 2 between-subjects design with

10 participants in each condition who answered 3 dependent variables (for a complete worked example, see TF section 7.4.1, pp. 255-263). Then the matrix

    of difference scores between the mean and the grand mean on each dependent

    variable for the first factor, A, is:

               Levels of Factor A
               A1                     A2
       DV1     M11 - MG1 = m11        M12 - MG1 = m12
A  =   DV2     M21 - MG2 = m21        M22 - MG2 = m22
       DV3     M31 - MG3 = m31        M32 - MG3 = m32

    where M11 is the marginal mean for the first dependent variable at the first

    level of A (A1), M12 is the marginal mean for the first dependent variable at

    the second level of A (A2), etc..., and MG1 is the grand mean for the first

    dependent variable, etc...

A is a 3 (dependent variables) by 2 (levels of factor A) matrix and A . A^T is
the 3 x 3 SSCP matrix summing over the two levels of factor A.


    16

SA = SSCPA = A . A^T = a 3 x 3 SSCP matrix for factor A, which gives the sum
of squares and cross products for the deviations of the marginal mean values
for factor A around the grand mean for all three dependent variables
(summing over levels of factor A).

     | m11^2 + m12^2      m11m21 + m12m22     m11m31 + m12m32 |
  =  | m21m11 + m22m12    m21^2 + m22^2       m21m31 + m22m32 |
     | m31m11 + m32m12    m31m21 + m32m22     m31^2 + m32^2   |

    The means are based upon the scores of 20 participants, so this matrix is

    multiplied by 20 to estimate the sum of squares associated with the main effect

    for factor A. The determinant of this matrix is an estimate of the generalized

    variance associated with this main effect (the sum of the squares minus the

    sum of the cross products). The diagonal elements in this matrix are the sum

    of the squares for each of the three dependent variables (summed across levels

    of factor A).

    In a similar fashion, the generalized variance associated with the main effect

    for B can be derived as can the generalized variance for the interaction

    between A and B (The SSCP matrix for the four cells of the design is

    estimated, SSCPA, B, AB, and the matrices for the two main effects are

    subtracted from it to give the SSCP for the interaction term alone, SSCPAB.)

    Finally, the average within cell SSCP matrix is estimated.
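The steps above can be sketched in a few lines of code. This is only an illustration of SA = n(A . A^T) with made-up marginal means (it is not the worked example from TF section 7.4.1); each marginal mean for factor A in the 2 x 2 design is based on 20 scores, hence the multiplier of 20.

import numpy as np

# Sketch: SSCP matrix for the main effect of factor A (2 levels, 3 DVs).
# The marginal means below are invented for illustration only.
n = 20  # each marginal mean for factor A is based on 20 participants
marginal_means = np.array([[10.0, 12.0],   # DV1 at A1 and A2
                           [20.0, 23.0],   # DV2 at A1 and A2
                           [30.0, 29.0]])  # DV3 at A1 and A2
grand_means = marginal_means.mean(axis=1, keepdims=True)

A = marginal_means - grand_means   # 3 x 2 matrix of deviations m_ij
S_A = n * (A @ A.T)                # 3 x 3 SSCP matrix for the main effect of A

print(S_A)
print(np.diag(S_A))  # the univariate sums of squares for the three DVs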


    17

    All these matrices are symmetrical square matrices with an order determined

    by the number of dependent variables in the analysis. Therefore, they can be

    added to one another. In particular, the SSCP within cell error matrix can be

    added to each of the matrices associated with the main effects and interactions

    (or, more generally, to any SSCP matrix derived from a contrast). When this

is done, a statistic called Wilks' Lambda, Λ, can be calculated as follows:

Λ = |S_error| / |S_effect + S_error|

This statistic can be converted into an approximate F test which is outputted
by the computer along with its degrees of freedom and statistical significance.
This approximate F is part of the SPSS output, but it can be calculated by
hand using the following formulae:

Approximate F (df1, df2) = [(1 - y) / y] [df2 / df1]

Given that there are p dependent variables, y = Λ^(1/s), where

s = [(p^2 dfeffect^2 - 4) / (p^2 + dfeffect^2 - 5)]^(1/2)

df1 = p dfeffect

df2 = s [dferror - (p - dfeffect + 1) / 2] - [(p dfeffect - 2) / 2]
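Written out as code, the conversion from Wilks' Lambda to the approximate F looks like this (an illustrative sketch of the hand calculation above, not the internal SPSS routine; the function name and the guard for s are my own):

import math

def wilks_to_approx_F(lam, p, df_effect, df_error):
    # Sketch of the hand formulae above (not the SPSS algorithm itself).
    denom = p ** 2 + df_effect ** 2 - 5
    s = math.sqrt((p ** 2 * df_effect ** 2 - 4) / denom) if denom > 0 else 1.0
    y = lam ** (1.0 / s)
    df1 = p * df_effect
    df2 = s * (df_error - (p - df_effect + 1) / 2) - (p * df_effect - 2) / 2
    F = ((1 - y) / y) * (df2 / df1)
    return F, df1, df2

# Intensity main effect from the worked example later in these notes:
# Lambda = 0.134, p = 2 DVs, df(effect) = 2, df(error) = 54.
print(wilks_to_approx_F(0.134, 2, 2, 54))  # roughly F = 45.9 with df (4, 106)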


    18

    In analysis of variance, the proportion of the variance accounted for by each

    main effect and interaction can be calculated. Similarly, in MANOVA the

    proportion of variance accounted for by the linear combination of the

    dependent variables that maximizes the separation of the groups specified by

    a main effect or an interaction is simply:

η² = 1 - Λ,

    remembering that Wilks' Lambda is an index of the proportion of the total

    variance associated with the within cell error on the dependent variables.

However, the η² values for any given MANOVA tend to yield high values that
sum to a value greater than 1. Therefore, TF recommend a more
conservative index:

Partial η² = 1 - Λ^(1/s)    where    s = [(p^2 dfeffect^2 - 4) / (p^2 + dfeffect^2 - 5)]^(1/2)
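A quick numeric sketch of the difference between the two indices, using the Intensity main effect from the worked example later in these notes (Λ of about .13, p = 2, dfeffect = 2):

import math

# Sketch: eta-squared versus partial eta-squared for the Intensity effect.
lam = 0.13            # Wilks' Lambda from the multivariate tests table later on
p, df_effect = 2, 2
s = math.sqrt((p**2 * df_effect**2 - 4) / (p**2 + df_effect**2 - 5))  # s = 2

print(1 - lam)             # eta-squared, about .87
print(1 - lam ** (1 / s))  # partial eta-squared, about .64 (the value GLM prints)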


    19

    Other Criteria for Statistical Significance

    Most usually researchers use the approximate F derived from Wilks' Lambda

    as the criterion for whether an effect in MANOVA is significant or not.

    However, three other criteria are also used and the SPSS MANOVA and

    GLM programs output all four. These criteria are equivalent when the effect

    being tested has one degree of freedom, but are slightly different otherwise

    because they create a linear combination of the dependent variables that

    maximizes the separation of the groups in slightly different ways.

Pillai's Trace is derived by extracting eigenvalues associated with each main

    effect and interaction in the design. Like factor analysis, larger eigenvalues

correspond to larger percentages of variance accounted for by these effects.

    Pillai, therefore, derived an approximate F test to test the extent to which

    these eigenvalues are unlikely to occur by chance if the null hypothesis is true.

For applied researchers, Pillai's Trace is an important statistic because it is

more robust to violations of the assumption of homogeneity of the variance-covariance

    matrix, especially when there are unequal ns in the cells of the design. When

there are problems with the research design, Pillai's criterion is the one to use

as it is the more conservative and the more robust test. Otherwise use Wilks' Lambda.


    20

    A SIMPLE MULTIVARIATE ANALYSIS OF VARIANCE

Consider the following example of a 2 x 3 between-subjects factorial design in which high school students were taught typing skills. Method of teaching was

    the first independent variable: either the traditional method or a motivational

    method was used. The second independent variable was the intensity of the

    instruction: 2 hours per day for 6 weeks, 3 hours a day for 4 weeks, or 4

    hours a day for 3 weeks. The two dependent variables were typing speed and

    accuracy. The following syntax shows how to conduct a multivariate analysis

    of variance using the SPSS GLM program and the SPSS MANOVA program.

The latter can only be used by the syntax method (you cannot use the windows)

    and is a complicated program to use. However, it is a very versatile program

    that is worth knowing.

    Using the General Linear Model program, the syntax for this design is:

GLM speed accuracy BY method intensit
  /METHOD = SSTYPE(3)
  /INTERCEPT = INCLUDE
  /PRINT = DESCRIPTIVE ETASQ OPOWER TEST(SSCP) RSSCP HOMOGENEITY
  /CRITERIA = ALPHA(.05)
  /DESIGN = method intensit method*intensit .


    21

Descriptive Statistics

                                  Mean    Std. Deviation    N
SPEED     METHOD 1   INTENSIT 1   34.30        5.89        10
                     INTENSIT 2   32.50        6.20        10
                     INTENSIT 3   29.60        4.38        10
                     Total        32.13        5.70        30
          METHOD 2   INTENSIT 1   42.80        5.45        10
                     INTENSIT 2   35.50        3.81        10
                     INTENSIT 3   27.00        3.40        10
                     Total        35.10        7.77        30
          Total      INTENSIT 1   38.55        7.04        20
                     INTENSIT 2   34.00        5.24        20
                     INTENSIT 3   28.30        4.04        20
                     Total        33.62        6.92        60
ACCURACY  METHOD 1   INTENSIT 1   21.80        2.25        10
                     INTENSIT 2   17.90        2.73        10
                     INTENSIT 3   14.10        1.29        10
                     Total        17.93        3.82        30
          METHOD 2   INTENSIT 1   25.60        1.71        10
                     INTENSIT 2   18.50        1.35        10
                     INTENSIT 3   11.60        1.26        10
                     Total        18.57        5.98        30
          Total      INTENSIT 1   23.70        2.75        20
                     INTENSIT 2   18.20        2.12        20
                     INTENSIT 3   12.85        1.79        20
                     Total        18.25        4.99        60

Note: This shows the cell means, the marginal means, and the grand mean (with
standard deviations and n).

Box's Test of Equality of Covariance Matrices

Box's M       23.411
F              1.413
df1               15
df2        15949.680
Sig.            .131

Tests the null hypothesis that the observed covariance matrices of the dependent
variables are equal across groups.

Note: Box's M tests homogeneity of the variance-covariance matrix.


    22

Multivariate Tests

Effect              Test           Value        F    Hypoth. df  Error df  Sig.  Partial Eta2  Observed Power
Intercept           Pillai's        .992     3104        2          53    .000       .99           1.00
                    Wilks'          .008     3104        2          53    .000       .99           1.00
                    Hotelling's   117.13     3104        2          53    .000       .99           1.00
                    Roy's         117.13     3104        2          53    .000       .99           1.00
METHOD              Pillai's        .09      2.69        2          53    .077       .09            .51
                    Wilks'          .91      2.69        2          53    .077       .09            .51
                    Hotelling's     .10      2.69        2          53    .077       .09            .51
                    Roy's           .10      2.69        2          53    .077       .09            .51
INTENSIT            Pillai's        .87     20.80        4         108    .000       .44           1.00
                    Wilks'          .13     45.85        4         106    .000       .64           1.00
                    Hotelling's    6.42     83.49        4         104    .000       .76           1.00
                    Roy's          6.42    173.26        2          54    .000       .87           1.00
METHOD * INTENSIT   Pillai's        .36      6.01        4         108    .000       .18            .98
                    Wilks'          .64      6.73        4         106    .000       .20            .99
                    Hotelling's     .57      7.44        4         104    .000       .22           1.00
                    Roy's           .57     15.44        2          54    .000       .36           1.00

Note: The F values for the main effect for Intensity and the interaction derived from
Wilks' Lambda are slightly larger than those derived from Pillai's Trace. Eta squared
(actually partial η²) is an estimate of the percentage of variance accounted for by the
main effects and interactions (it tends to be an overestimate). The observed power column
is the power of the design to detect population differences among the means in the main
effect or interaction that are identical to the differences among the means found in the
sample.


    23

Levene's Test of Equality of Error Variances

              F     df1    df2    Sig.
SPEED        .456    5      54    .807
ACCURACY    1.908    5      54    .108

Note: This is the test of the homogeneity of variance assumption for each dependent
variable separately. The analysis is robust to this assumption.

Tests of Between-Subjects Effects

Source             D.V.    Type III SS    df        MS          F      Sig.   Eta2    Power
Corrected Model    SPEED      1495.08      5      299.02      12.11    .000   .529    1.000
                   ACCUR      1282.55      5      256.51      75.00    .000   .874    1.000
Intercept          SPEED     67804.82      1    67804.82    2746.58    .000   .981    1.000
                   ACCUR     19983.75      1    19983.75    5842.57    .000   .991    1.000
METHOD             SPEED       132.02      1      132.02       5.35    .025   .090     .622
                   ACCUR         6.02      1        6.02       1.76    .190   .032     .256
INTENSIT           SPEED      1055.03      2      527.52      21.37    .000   .442    1.000
                   ACCUR      1177.30      2      588.65     172.10    .000   .864    1.000
METHOD * INTENSIT  SPEED       308.03      2      154.02       6.24    .004   .188     .877
                   ACCUR       99.233      2      49.617     14.506    .000   .349     .998
Error              SPEED     1333.100     54      24.687
                   ACCUR      184.700     54       3.420
Total              SPEED    70633.000     60
                   ACCUR    21451.000     60

Note: This table summarizes the univariate analyses of variance on each dependent
variable. Notice that the main effect for Method is significant for Speed but not for
Accuracy. However, the multivariate test indicates that this main effect is not significant
(or marginally significant, p < .10).


    24

Between-Subjects SSCP Matrix

                                            SPEED      ACCURACY
Hypothesis   Intercept       SPEED       67804.817     36810.250
                             ACCURACY    36810.250     19983.750
             METHOD          SPEED         132.017        28.183
                             ACCURACY       28.183         6.017
             INTENSIT        SPEED        1055.033      1111.550
                             ACCURACY     1111.550      1177.300
             METHOD *        SPEED         308.033       174.817
             INTENSIT        ACCURACY      174.817        99.233
Error                        SPEED        1333.100       211.200
                             ACCURACY      211.200       184.700

Based on Type III Sum of Squares

    Note: With these matrices you can calculate the significance of the effects in

    the design.


    25

Residual SSCP Matrix

                              SPEED     ACCURACY
SSCP          SPEED        1333.100      211.200
              ACCURACY      211.200      184.700
Covariance    SPEED          24.687        3.911
              ACCURACY        3.911        3.420
Correlation   SPEED           1.000         .426
              ACCURACY         .426        1.000

Based on Type III Sum of Squares

These are pooled within-cell matrices. The pooled within-cell correlation
matrix shows that the two dependent variables are correlated quite highly
across the cells of the design (r = .43). Notice that the residual SSCP matrix is
the same as the error SSCP matrix above. Its determinant is (1333 x 185 -
211^2) = 0.202 x 10^6. To calculate Wilks' Lambda, the determinant of this
SSCP matrix is divided by the determinant of a matrix created by adding the
error matrix to the effect matrix.

For example, the denominator of Λ for the Intensity main effect is:

| 1055  1112 |     | 1333  211 |     | 2388  1323 |
| 1112  1177 |  +  |  211  185 |  =  | 1323  1362 |

The determinant of this matrix is (2388 x 1362 - 1323^2) = 1.503 x 10^6.

Λ = 0.202 x 10^6 / 1.503 x 10^6 = 0.134 (see previous table).
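The same hand calculation can be checked with a few lines of code (a sketch using the SSCP values printed above; numpy is assumed to be available):

import numpy as np

# Sketch: Wilks' Lambda for the Intensity main effect from the SSCP matrices.
S_error = np.array([[1333.100, 211.200],
                    [211.200, 184.700]])
S_intensity = np.array([[1055.033, 1111.550],
                        [1111.550, 1177.300]])

lam = np.linalg.det(S_error) / np.linalg.det(S_intensity + S_error)
print(round(lam, 3))  # 0.134, as in the multivariate tests table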


    26

    The MANOVA program will produce similar output (same information,

    different format) using the following syntax. In this output, the MANOVA

program gives the determinant of the pooled within-cell variance-covariance

    matrix which in this case is 69.14.

MANOVA speed accuracy BY method(1,2) intensit(1,3)
  /PRINT CELLINFO (ALL) ERROR SIGNIF HOMOGENEITY
  /DESIGN.

    Sometimes a researcher wants to test specific hypothesized contrasts. This is

    most easily done through the MANOVA program by specifying a set of

    orthogonal contrasts equal to the number of degrees of freedom in the overall

    main effect or interaction. For example, if a researcher wants to compare the

    low intensity practice level (2 hours a day for 6 weeks) with the other two

    levels, and the 3 hours a day for 4 weeks with the 4 hours a day for 3 weeks, he

    or she would specify these contrasts as follows:

MANOVA speed accuracy BY method(1,2) intensit(1,3)
  /PRINT CELLINFO (ALL) ERROR SIGNIF HOMOGENEITY
  /contrast(intensit) = special(1  1  1,
                                2 -1 -1,
                                0  1 -1)
  /DESIGN = method intensit(1) intensit(2) method by intensit .


    27

    The output gives the multivariate tests and the univariate tests for each dependent

variable (t tests) for these specific contrasts within the overall main effect for Intensity. For example, for the second contrast the output looks like:

    EFFECT .. INTENSIT(2)

    Multivariate Tests of Significance (S = 1, M = 0, N = 25 1/2)

    Test Name Value Exact F Hypoth. DF Error DF Sig. of F

Pillais       .60804    41.10873      2.00      53.00     .000
Hotellings   1.55127    41.10873      2.00      53.00     .000

    Wilks .39196 41.10873 2.00 53.00 .000

    Roys .60804 (no significance test)

    Note.. F statistics are exact.

    - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

    EFFECT .. INTENSIT(2) (Cont.)

    Univariate F-tests with (1,54) D. F.

    Variable Hypoth. SS Error SS Hypoth. MS Error MS F Sig. of F

    SPEED 324.900 1333.10000 324.90000 24.68704 13.16075 .001

    ACCUR 286.225 184.70000 286.22500 3.42037 83.68246 .000


    28

    Later in the output, the computer prints out the statistics for the univariate

    test for the specific contrasts tested. In the example below, the univariate

    statistics for the dependent variable, Speed, are given.

    Estimates for SPEED --- Individual univariate .9500 confidence intervals

    INTENSIT(2)

    Parameter Coeff. Std. Err. t-Value Sig. t Lower -95% CL- Upper

    4 5.700 1.57121 3.62778 .00063 2.54991 8.85009

    This output shows that the difference between the marginal means for the

    dependent variable, Speed, comparing the Intensity Conditions 3 hours for 4

    weeks versus 4 hours for 3 weeks is:

    Difference = 34.0 - 28.3 = 5.7 (see marginal means in an earlier table)

    This difference is significantly different from zero showing that typing speed

    is faster in the 3 hours for 4 weeks condition:

    t (54) = 3.63, p < .001 with the 95% confidence intervals as shown.

Note that t^2 = F: 3.63^2 = 13.16 (see bottom of page 27)

    In the same way, the difference in average typing speed between the 2 hours

    for six weeks condition and the remaining conditions is:

    Difference = 2 x 38.55 - (34.0 + 28.3) = 14.8

    The output (not shown) indicates that this contrast is also significant, t (54) =

    5.43, p < .0001, showing that the typing speed is greater in the low intensity

    condition than in the other two conditions.
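Both contrast estimates and their t values can be reproduced by hand from the marginal means for Speed (38.55, 34.00, 28.30; n = 20 each) and the pooled error mean square for Speed (24.687 on 54 df). A Python sketch (the helper function is my own):

import math

means = [38.55, 34.00, 28.30]   # marginal means for Speed at the 3 intensities
n = 20                          # participants per marginal mean
ms_error = 24.687               # pooled within-cell mean square for Speed (df = 54)

def contrast(coeffs):
    # Sketch: estimate and t value for a planned contrast on the marginal means.
    est = sum(c * m for c, m in zip(coeffs, means))
    se = math.sqrt(ms_error * sum(c ** 2 / n for c in coeffs))
    return est, est / se

print(contrast([0, 1, -1]))   # about (5.70, 3.63): 3 hrs/4 wks vs 4 hrs/3 wks
print(contrast([2, -1, -1]))  # about (14.80, 5.4): low intensity vs the other two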


    29

    The Mathematical Basis of Multivariate Analysis of Covariance

In MANOVA, the covariates are initially considered as one of the dependent variables. However, the resulting SSCP matrix is partitioned into smaller

    matrices that can be used to adjust the dependent variables for the effects of

    the covariates.

    Consider the matrix for the main effect of A in the 2 x 2 between-subjects

    factorial design discussed earlier:

SA = SSCPA = A . A^T

     |  m11^2 + m12^2       [m11m21 + m12m22]    [m11m31 + m12m32] |
  =  | [m21m11 + m22m12]     m21^2 + m22^2        m21m31 + m22m32  |
     | [m31m11 + m32m12]     m31m21 + m32m22      m31^2 + m32^2    |

If the first dependent variable is the covariate, then this matrix has three
components: the SSCP matrix for the dependent variables, S(y) (the bottom
right 2 x 2 matrix), the sum of squares of the covariate, S(x) (the top left term
in the matrix; when there is more than 1 covariate, this is the SSCP matrix
among the covariates), and the cross-product terms between the covariate and
the dependent variables, S(yx) (shown in square brackets).


    30

Specifically:

S(y)  =  | m21^2 + m22^2      m21m31 + m22m32 |
         | m31m21 + m32m22    m31^2 + m32^2   |

(The unadjusted SSCP matrix for the DVs)

S(x)  =  m11^2 + m12^2

(The sum of squares for the covariate. For more than one covariate, this
would be a SSCP matrix.)

S(yx) =  | m21m11 + m22m12 |
         | m31m11 + m32m12 |

(The cross-product matrix between the covariate and the DVs)

To obtain the SSCP matrix for the two dependent variables adjusting
for the covariate, S*, the following matrix equation is used:

S*  =  S(y)  -  S(yx) . (S(x))^-1 . S(yx)^T

(2 x 2) = (2 x 2) - (2 x 1) (1 x 1) (1 x 2)

    This adjustment is applied to the SSCP matrices for all the main effects and

    interactions in the design as well as to the SSCP matrix for the pooled within-

    cell error. Then the test for significance of the effect using (say) Wilks'

    Lambda is applied to the adjusted SSCP matrices as well as to the covariates

    (which should be significant if they are effective). Note that if this covariate

    correlates with the dependent variables, this matrix subtraction results in a

    SSCP matrix with much smaller values.
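As a numeric sketch of this adjustment (using the pooled within-cells covariance values that appear later in these notes for SPEED, ACCURACY and the covariate SEX, converted back to sums of squares and cross-products by multiplying by the error df of 54):

import numpy as np

# Sketch: S* = S(y) - S(yx) (S(x))^-1 S(yx)^T for the pooled within-cells error.
df_error = 54
S_y  = df_error * np.array([[24.687, 3.911],
                            [3.911,  3.420]])   # SPEED, ACCURACY
S_x  = df_error * np.array([[0.278]])           # the covariate, SEX
S_yx = df_error * np.array([[1.102],
                            [0.583]])           # DV-by-covariate cross-products

S_star = S_y - S_yx @ np.linalg.inv(S_x) @ S_yx.T
print(np.round(S_star, 1))
# Approximately [[1097.2, 86.4], [86.4, 118.7]], which matches (to rounding)
# the adjusted WITHIN CELLS SSCP matrix shown in the MANCOVA output below.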


    31

    EXAMPLE OF A SIMPLE

    MULTIVARIATE ANALYSIS OF COVARIANCE

    This is the same example as the one just used to illustrate MANOVA.

However, this time the Sex of the participants is used in the analysis as a

    covariate because past research has shown that women learn to type more

    quickly and accurately than men given the same amount of instruction

    (hypothetically speaking of course).

    Before a MANCOVA can be conducted, the homogeneity of regression

    assumption must be checked. This is achieved by specifying the variable,

    SEX, as a covariate and then running a MANOVA in which SEX is one of the

    independent variables. If the assumption of homogeneity of regression within

    each cell of the design is not correct, then SEX will interact with the

    independent variables in the design. Therefore, the MANOVA is used to test

    the pooled effect of all of these interactions (after entering the main effect for

    SEX first). The + sign is used to pool together these interactions as is shown

    in the syntax below.

MANOVA speed accuracy BY method(1,2) intensit(1,3) with sex
  /ANALYSIS = SPEED ACCURACY
  /DESIGN = SEX, METHOD, INTENSIT, METHOD BY INTENSIT,
            SEX BY METHOD + SEX BY INTENSIT + SEX BY METHOD BY INTENSIT .


    32

    Note that the ANALYSIS subcommand is used to specify the dependent

    variables for this preliminary multivariate analysis of variance (to prevent the

computer running the MANCOVA specified in the first line of the syntax). More generally, the ANALYSIS subcommand can be used to specify which

    of a set of continuous variables should be included in a design as dependent

    variables or covariates. Further, the DESIGN subcommand can be used to

    specify several different designs to analyse. Together, they allow the

    MANOVA program to be very flexible and to run a variety of analyses on

    several different designs using the same data set.

    The syntax on the previous page produces the following output (followed by

    the output on the main effects and interaction of the independent variables

    which should be ignored).

    EFFECT .. SEX BY METHOD + SEX BY INTENSIT + SEX BY METHOD

    BY INTENSIT

    Multivariate Tests of Significance (S = 2, M = 1 , N = 22 1/2)

    Test Name Value Approx. F Hypoth. DF Error DF Sig. of F

    Pillais .11256 .57252 10.00 96.00 .833

    Hotellings .11981 .55113 10.00 92.00 .849

    Wilks .89038 .56185 10.00 94.00 .841

    Roys .07128

    Note.. F statistic for WILKS' Lambda is exact.

This test shows that the pooled interactions between the covariate and the remaining factors in the design are not significant (remember p < .01 is the

    criterion). Therefore, the assumption of homogeneity of regression has been

    met.


    33

    If two or more covariates are used, their effects are pooled and then these

    pooled effects are examined in interaction with the independent variables.

    For example, if both SEX and AGE were covariates in the above example, the

    design syntax to test the homogeneity of regression assumption would read:

    /DESIGN = SEX, AGE, METHOD,

    INTENSIT, METHOD BY INTENSIT,

POOL(SEX, AGE) BY METHOD + POOL(SEX, AGE) BY INTENSIT

    + POOL(SEX, AGE) BY METHOD BY INTENSIT .

    After the homogeneity of regression assumption has been checked, the

    researcher is ready to conduct the MANCOVA using the following syntax:

MANOVA speed accuracy BY method(1,2) intensit(1,3) with sex
  /PRINT CELLINFO (ALL) ERROR SIGNIF HOMOGENEITY
  /DESIGN .

    The first part of the output displays the cell and marginal means, standard

deviations, and n for the two dependent variables, followed by a similar table

    for the covariate (not shown).


    34

Then the output displays tables showing the SSCP matrix, the variance-covariance
matrix, and the correlation matrix among the dependent variables
and the covariate for each cell of the design (also not shown).

It is at this point that the output gives the pooled within-cell variance-covariance matrix and its determinant as shown below:

    Pooled within-cells Variance-Covariance matrix

    SPEED ACCURACY SEX

    SPEED 24.687

    ACCURACY 3.911 3.420

    SEX 1.102 .583 .278

    Determinant of pooled Covariance matrix of dependent vars. = 69.14202

    LOG(Determinant) = 4.23616

    This output shows that there is no multicollinearity problem in this data set.
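A one-line check of that determinant (a sketch; only the 2 x 2 block for the two DVs enters the reported value):

import numpy as np

# Sketch: determinant of the pooled within-cells covariance matrix of the DVs.
cov_dvs = np.array([[24.687, 3.911],
                    [3.911,  3.420]])
print(np.linalg.det(cov_dvs))  # about 69.1 - well above the .0001 cut-off noted earlier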


    35

The output then shows the homogeneity of variance-covariance test.

    Multivariate test for Homogeneity of Dispersion matrices

    Boxs M = 23.41071

    F WITH (15,15949) DF = 1.41313, P = .131 (Approx.)

    Chi-Square with 15 DF = 21.21896, P = .130 (Approx.)

    This statistic shows that the assumption of homogeneity of variance

    covariance across the cells in the design is tenable.

    Now the adjusted SSCP matrices are given:

    Adjusted WITHIN CELLS Correlations with Std. Devs. on Diagonal

    SPEED ACCURACY

    SPEED 4.550

    ACCURACY .239 1.496

This matrix shows that the two dependent variables correlate less when SEX is partialled out of their relationship (r = .239).


    36

    Now the computer prints out the adjusted within-cell SSCP matrix

    Adjusted WITHIN CELLS Sum-of-Squares and Cross-Products

    SPEED ACCURACY

    SPEED 1097.083

    ACCURACY 86.250 118.550

    This is the pooled SSCP matrix adjusted for the covariate, SEX.

    The pooled SSCP matrix and correlation matrix for the MANOVA without

    any covariates are shown below. Note that the effect of the covariate is to

    make the values in the SSCP matrix smaller and to reduce the correlation

    among the dependent variables from .426 to .239 (partialling out the

    covariate).

    An extract from the MANOVA Shown Earlier in These Notes

Residual SSCP Matrix

                                         SPEED     ACCURACY
Sum-of-Squares and    SPEED           1333.100      211.200
Cross-Products        ACCURACY         211.200      184.700
Covariance            SPEED             24.687        3.911
                      ACCURACY           3.911        3.420
Correlation           SPEED              1.000         .426
                      ACCURACY            .426        1.000

    Based on Type III Sum of Squares


    37

    The significant effect of the covariate is now given:

EFFECT .. WITHIN CELLS Regression
Multivariate Tests of Significance (S = 1, M = 0, N = 25 )

    Test Name Value Exact F Hypoth. DF Error DF Sig. of F

    Pillais .39182 16.75048 2.00 52.00 .000

    Hotellings .64425 16.75048 2.00 52.00 .000

    Wilks .60818 16.75048 2.00 52.00 .000

    Roys .39182

    Note.. F statistics are exact.

    - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

    EFFECT .. WITHIN CELLS Regression (Cont.)

    Univariate F-tests with (1,53) D. F.

    Variable Hypoth. SS Error SS Hypoth. MS Error MS F Sig. F

    SPEED 236.01667 1097.08333 236.01667 20.69969 11.40194 .001

    ACCURACY 66.15000 118.55000 66.15000 2.23679 29.57360 .000

    This part of the analysis shows that the effect of the covariate is significant (it

    is an effective covariate). If the covariate is not significant, there is no need to

    perform a MANCOVA!


    38

    The remaining analyses show both the multivariate and univariate main

    effects and interactions for the independent variables:

    EFFECT .. METHOD BY INTENSIT

    Multivariate Tests of Significance (S = 2, M = -1/2, N = 25 )

    Test Name Value Approx. F Hypoth. DF Error DF Sig. of F

Pillais       .48450     8.47200      4.00     106.00     .000
Hotellings    .93972    11.98144      4.00     102.00     .000

    Wilks .51552 10.21168 4.00 104.00 .000

    Roys .48445

    Note.. F statistic for WILKS' Lambda is exact.

    - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

    EFFECT .. METHOD BY INTENSIT (Cont.)

    Univariate F-tests with (2,53) D. F.

    Variable Hypoth. SS Error SS Hypoth. MS Error MS F Sig. of F

    SPEED 308.03333 1097.08333 154.01667 20.69969 7.44053 .001

    ACCUR 99.23333 118.55000 49.61667 2.23679 22.18206 .000


    39

    EFFECT .. INTENSIT

    Multivariate Tests of Significance (S = 2, M = -1/2, N = 25 )

    Test Name Value Approx. F Hypoth. DF Error DF Sig. of F

Pillais       .91428    22.31554      4.00     106.00     .000
Hotellings   9.98961   127.36757      4.00     102.00     .000

    Wilks .09056 60.40066 4.00 104.00 .000

    Roys .90896

    Note.. F statistic for WILKS' Lambda is exact.

    - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

    EFFECT .. INTENSIT (Cont.)

    Univariate F-tests with (2,53) D. F.

    Variable Hypoth. SS Error SS Hypoth. MS Error MS F Sig. of F

    SPEED 1055.03333 1097.08333 527.51667 20.69969 25.48428 .000

    ACCUR 1177.30000 118.55000 588.65000 2.23679 263.16702 .000

    - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

    EFFECT .. METHOD

    Multivariate Tests of Significance (S = 1, M = 0, N = 25 )

    Test Name Value Exact F Hypoth. DF Error DF Sig. of F

    Pillais .12420 3.68727 2.00 52.00 .032

    Hotellings .14182 3.68727 2.00 52.00 .032

    Wilks .87580 3.68727 2.00 52.00 .032

    Roys .12420

    Note.. F statistics are exact.

    - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

EFFECT .. METHOD (Cont.)
Univariate F-tests with (1,53) D. F.

    Variable Hypoth. SS Error SS Hypoth. MS Error MS F Sig. of F

    SPEED 132.01667 1097.08333 132.01667 20.69969 6.37771 .015

    ACCUR 6.01667 118.55000 6.01667 2.23679 2.68986 .107


    40

Note that adjusting for the covariate results in a significant main effect for
method of instruction: Λ = 0.876, F(2, 52) = 3.69, p < .05. This effect was only
marginally significant in the MANOVA: Λ = 0.908, F(2, 53) = 2.69, p < .08.
The effect size for a main effect or interaction (after covarying out sex),
partial η², can be calculated through the formula partial η² = 1 - Λ^(1/s), but
it is easier just to repeat the analysis using GLM and to read partial η² from
the output.

Assessing the Influence of the Independent Variables on the Individual Dependent Variables

    Once the MANOVA has identified significant main effects and/or interactions,

    the researcher will usually want to know which of the set of dependent

    variables is most affected by the independent variables. Most usually,

    researchers will look for significant univariate tests of these effects for each

    dependent variable using a Bonferroni adjustment so that the Type 1 error

    rate is not inflated. For a set of p dependent variables, this adjustment is:

α = 1 - (1 - α1)(1 - α2)(1 - α3) .... (1 - αp)

However, this adjustment assumes that the effects obtained are independent

    of one another which is clearly not the case when the dependent variables are

    intercorrelated. Nevertheless, this is still the most common way of

    interpreting and reporting the results of a MANOVA and the one we will use

    in this class! Tabachnick and Fidell recommend that researchers give the

    pooled within-cell correlation matrix among the dependent variables if a

    researcher adopts this analysis strategy.
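For example (a sketch; the choice of p = 2 simply mirrors the Speed/Accuracy example used in these notes):

# Sketch: per-test alpha for p DVs so that the familywise Type 1 error rate
# stays near .05, assuming (approximately) independent univariate tests.
p = 2
alpha_family = 0.05
alpha_per_test = 1 - (1 - alpha_family) ** (1 / p)
print(round(alpha_per_test, 4))  # about .0253, close to the simpler .05 / p = .025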


    41

Another way to overcome this problem is Roy-Bargmann stepdown analysis

    in which the researcher specifies a sequence of dependent variables in order of

    importance. Then, he or she conducts an ANOVA on the dependent variable

    of most importance, an ANCOVA adjusting for the effects of the first

    dependent variable on the second most important dependent variable, and so

    on. As the successive ANCOVAs are independent of one another, the

    Bonferroni correction is an accurate adjustment which controls the Type 1

    error rate. However, the limitation to using this analysis is that the researcher

    must be able to clearly specify the order in which to enter the dependent

variables into the analysis. This is usually a hard, if not impossible, task given the nature of current psychological theories.

    A third way to approach this problem is to use the loading matrix (raw

    discriminant function coefficients) from a Discriminant Function Analysis

    output which is obtained through SPSS MANOVA to identify those

    dependent variables that correlate highly with the linear combination of

    dependent variables (the discriminant function) that achieves the maximum

    separation of the groups specified by a main effect or interaction. The

    following syntax will give you this information.

MANOVA speed accuracy BY method(1,2) intensit(1,3)
  /PRINT CELLINFO (ALL) ERROR SIGNIF HOMOGENEITY
  /DISCRIMINANT

    /DESIGN .

    However, it would be wise to read chapter 9 of Tabachnick and Fidell before

    attempting to interpret this output.


    42

    Appendix

    A Priori Power Analysis (MANOVA)

Based upon an article by D'Amico, Neilands, & Zambarano (2001), cited in TF, 5th edition (2007).

Step 1: Create a dummy data set with 2 subjects in each condition such that

    the mean score in each condition on each dependent variable is the mean you

    anticipate getting if your hypothesis is correct (based upon past research

    findings). For example, consider the case where you want to compare the two

    instructional typing methods (1 = traditional versus 2 = motivational) and you

    expect Speed to average 32 for method 1 and 35 for method 2; accuracy to

    average 18 for method 1 and 20 for method 2 based upon past research, etc...

Method    Speed    Accuracy
1.00      31.00     17.00
1.00      33.00     19.00
2.00      34.00     19.00
2.00      36.00     21.00

    Run a dummy MANOVA to obtain the data in a matrix form:

    MANOVA speed accuracy BY method(1,2)

    /matrix = out(*)

    /DESIGN = method.

Rowtype    Method    Var Name    Speed    Accuracy
N          .                      4        4
MEAN       1.00                  32       18
N          1.00                   2        2
MEAN       2.00                  35       20
N          2.00                   2        2
STDDEV     .                      1.41     1.41
CORR       .         speed        1.00     1.00
CORR       .         accuracy     1.00     1.00

    Save this data matrix (as a SAV file).


    43

    Now change the values in the matrix to be the same as those obtained in past

    research studies. That is, change the standard deviations of the DVs and their

    inter-correlation(s).

To obtain power estimates, do several runs with this adjusted matrix using various cell sizes to find the cell size that gives you adequate power.

    Using the example in the lecture notes, the correlation between speed and

    accuracy is 0.40, and the overall SD for speed is 7.0 and for accuracy is 5.0.

    Therefore, with N = 60 the matrix is changed to:

Rowtype    Method    Var Name    Speed    Accuracy
N          .                     60       60
MEAN       1.00                  32       18
N          1.00                  30       30
MEAN       2.00                  35       20
N          2.00                  30       30
STDDEV     .                      7.00     5.00
CORR       .         speed        1.00     0.40
CORR       .         accuracy     0.40     1.00

    The syntax to run MANOVA from a matrix data file is:

    MANOVA speed accuracy BY method(1,2)

    /matrix = in(*)

    /power = f(0.5) exact

    /DESIGN = method .

The subcommand /power = f(0.5) exact tells the computer to calculate power
with the probability of a Type I error set to the value in parentheses (here
.50, which matches the "Observed Power at .5000 Level" heading in the output;
use f(0.05) for the conventional .05 level). The word exact means that the
computer does an exact calculation of power rather than a much quicker and
rougher approximate calculation (the default).


    44

    The Key Output from this run is:

    EFFECT .. METHOD

    Multivariate Tests of Significance (S = 1, M = 0, N = 27 1/2)

    Test Name Value Exact F Hypoth. DF Error DF Sig. of F

Pillais       .05979     1.81223      2.00      57.00     .173
Hotellings    .06359     1.81223      2.00      57.00     .173
Wilks         .94021     1.81223      2.00      57.00     .173
Roys          .05979

    Note.. F statistics are exact.

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Observed Power at .5000 Level

    TEST NAME Noncent. Power

    (All) 3.624 .86

    This shows the power to detect the anticipated mean differences between

    methods using Speed and accuracy as the DVs is 0.86 if you use MANOVA.

Increasing the n to 50 per cell increases the power of this MANOVA analysis

    to 0.94 as shown below.

EFFECT .. METHOD
Multivariate Tests of Significance (S = 1, M = 0, N = 47 1/2)

    Test Name Value Exact F Hypoth. DF Error DF Sig. of F

Pillais       .05902     3.04201      2.00      97.00     .052
Hotellings    .06272     3.04201      2.00      97.00     .052
Wilks         .94098     3.04201      2.00      97.00     .052
Roys          .05902

    Note.. F statistics are exact.

    - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

    Observed Power at .5000 Level

    TEST NAME Noncent. Power

    (All) 6.084 .94
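The power values above can also be reproduced outside SPSS from the printed noncentrality parameters and degrees of freedom. This is a sketch using scipy's noncentral F distribution, with alpha set to .50 to match the "Observed Power at .5000 Level" heading in the output:

from scipy.stats import f, ncf

def power_from_noncentrality(noncent, df1, df2, alpha):
    # Sketch: power of an exact F test given its noncentrality parameter.
    f_crit = f.ppf(1 - alpha, df1, df2)
    return 1 - ncf.cdf(f_crit, df1, df2, noncent)

print(round(power_from_noncentrality(3.624, 2, 57, 0.50), 2))  # about .86 (n = 30 per cell)
print(round(power_from_noncentrality(6.084, 2, 97, 0.50), 2))  # about .94 (n = 50 per cell)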