Using the manova command along with transformations of the dependent variables (applied with manovatest and its ytrans() option) allows you to perform multivariate repeated measures analyses.
Example 1
The first example is a within-subjects design, also known as a randomized block design. There are four observations for each subject, labeled y1, y2, y3 and y4.
clear
input s y1 y2 y3 y4
1 3 4 4 3
2 2 4 4 5
3 2 3 3 6
4 3 3 3 5
5 1 2 4 7
6 3 3 6 6
7 4 4 5 10
8 6 5 5 8
end
We need to create a variable to use as a constant and then run the manova with the noconstant option.
generate con = 1

manova y1 y2 y3 y4 = con, noconstant

                           Number of obs =       8

                           W = Wilks' lambda      L = Lawley-Hotelling trace
                           P = Pillai's trace     R = Roy's largest root

                  Source |  Statistic     df   F(df1,    df2) =   F   Prob>F
              -----------+--------------------------------------------------
                     con |  W   0.0196      1     4.0     4.0   49.92 0.0011 e
                         |  P   0.9804            4.0     4.0   49.92 0.0011 e
                         |  L  49.9217            4.0     4.0   49.92 0.0011 e
                         |  R  49.9217            4.0     4.0   49.92 0.0011 e
                         |--------------------------------------------------
                Residual |                7
              -----------+--------------------------------------------------
                   Total |                8
              --------------------------------------------------------------
                            e = exact, a = approximate, u = upper bound on F
Next, we need to create contrasts among the dependent variables. If there are k dependent variables, we will create k-1 contrasts, coded in much the same way as contrasts for categorical predictors. We will create the contrasts using effect coding, so that each transformed variable is the difference between one of the first three dependent variables and y4.
mat ycomp = (1,0,0,-1 \ 0,1,0,-1 \ 0,0,1,-1)

mat list ycomp

ycomp[3,4]
     c1   c2   c3   c4
r1    1    0    0   -1
r2    0    1    0   -1
r3    0    0    1   -1

manovatest con, ytrans(ycomp)

Transformations of the dependent variables
(1) y1 - y4
(2) y2 - y4
(3) y3 - y4

                           W = Wilks' lambda      L = Lawley-Hotelling trace
                           P = Pillai's trace     R = Roy's largest root

                  Source |  Statistic     df   F(df1,    df2) =   F   Prob>F
              -----------+--------------------------------------------------
                     con |  W   0.2458      1     3.0     5.0    5.11 0.0554 e
                         |  P   0.7542            3.0     5.0    5.11 0.0554 e
                         |  L   3.0682            3.0     5.0    5.11 0.0554 e
                         |  R   3.0682            3.0     5.0    5.11 0.0554 e
                         |--------------------------------------------------
                Residual |                7
              --------------------------------------------------------------
The F statistic of 5.11 is the multivariate test of the within-subjects treatment effect. The result is not significant at the .05 level (p = 0.0554).
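If you want to see concretely what the ytrans() matrix is doing, a minimal sketch is to build the same difference scores by hand and test them directly. The variable names d1, d2, and d3 below are illustrative (they are not used elsewhere in this analysis), and the resulting test should match the manovatest output above.

/* form the difference scores implied by ycomp */
generate d1 = y1 - y4
generate d2 = y2 - y4
generate d3 = y3 - y4

/* multivariate test that the three mean differences are jointly zero */
manova d1 d2 d3 = con, noconstant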
Example 2
This example adds one between-subjects factor, a, with two levels. The design could be classified as a split-plot factorial.
clear
input s a y1 y2 y3 y4
1 1 3 4 7 7
2 1 6 5 8 8
3 1 3 4 7 9
4 1 3 3 6 8
5 2 1 2 5 10
6 2 2 3 6 10
7 2 2 4 5 9
8 2 2 3 6 11
end
The first manova is a test of the between-subjects factor.
manova y1 y2 y3 y4 = a

                           Number of obs =       8

                           W = Wilks' lambda      L = Lawley-Hotelling trace
                           P = Pillai's trace     R = Roy's largest root

                  Source |  Statistic     df   F(df1,    df2) =   F   Prob>F
              -----------+--------------------------------------------------
                       a |  W   0.1374      1     4.0     3.0    4.71 0.1169 e
                         |  P   0.8626            4.0     3.0    4.71 0.1169 e
                         |  L   6.2764            4.0     3.0    4.71 0.1169 e
                         |  R   6.2764            4.0     3.0    4.71 0.1169 e
                         |--------------------------------------------------
                Residual |                6
              -----------+--------------------------------------------------
                   Total |                7
              --------------------------------------------------------------
                            e = exact, a = approximate, u = upper bound on F
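The manova above tests a on all four repeated measures jointly, and it is the between-subjects test used in the rest of this example. If you would rather compute the between-subjects test on the average of the repeated measures (the form used in a conventional split-plot analysis), a minimal sketch is shown here; the matrix name ymean and the .25 weights are illustrative, chosen so the transformed variable is the mean of y1 through y4.

/* average the four repeated measures; ymean is an illustrative name */
mat ymean = (.25, .25, .25, .25)
manovatest a, ytransform(ymean)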
The between-subjects factor is not significant. Next, we code the contrasts among the dependent variables and test the a*y (between-subjects by within-subjects) interaction.
mat ymat = (1,0,0,-1 \ 0,1,0,-1 \ 0,0,1,-1)

mat list ymat

ymat[3,4]
     c1   c2   c3   c4
r1    1    0    0   -1
r2    0    1    0   -1
r3    0    0    1   -1

/* test of the a*y interaction */
manovatest a, ytransform(ymat)

Transformations of the dependent variables
(1) y1 - y4
(2) y2 - y4
(3) y3 - y4

                           W = Wilks' lambda      L = Lawley-Hotelling trace
                           P = Pillai's trace     R = Roy's largest root

                  Source |  Statistic     df   F(df1,    df2) =   F   Prob>F
              -----------+--------------------------------------------------
                       a |  W   0.1443      1     3.0     4.0    7.91 0.0371 e
                         |  P   0.8557            3.0     4.0    7.91 0.0371 e
                         |  L   5.9296            3.0     4.0    7.91 0.0371 e
                         |  R   5.9296            3.0     4.0    7.91 0.0371 e
                         |--------------------------------------------------
                Residual |                6
              --------------------------------------------------------------
                            e = exact, a = approximate, u = upper bound on F
Even though the interaction is significant, we will go ahead and test the effect of the within-subjects variable. To do this, we create a test matrix for the predictor variables in which the coefficients for the levels of a sum to one, so that the test averages over the between-subjects factor.
/* test of y */
mat xmat = (1, .5, .5)

mat list xmat

xmat[1,3]
     c1   c2   c3
r1    1   .5   .5

manovatest, test(xmat) ytransform(ymat)

Transformations of the dependent variables
(1) y1 - y4
(2) y2 - y4
(3) y3 - y4

Test constraint
(1) _cons + .5 a[1] + .5 a[2] = 0

                           W = Wilks' lambda      L = Lawley-Hotelling trace
                           P = Pillai's trace     R = Roy's largest root

                  Source |  Statistic     df   F(df1,    df2) =   F   Prob>F
              -----------+--------------------------------------------------
              manovatest |  W   0.0275      1     3.0     4.0   47.19 0.0014 e
                         |  P   0.9725            3.0     4.0   47.19 0.0014 e
                         |  L  35.3944            3.0     4.0   47.19 0.0014 e
                         |  R  35.3944            3.0     4.0   47.19 0.0014 e
                         |--------------------------------------------------
                Residual |                6
              --------------------------------------------------------------
The test of the within-subjects factor is significant as well, although care must be taken in interpreting this result because of the significant interaction.
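Given the significant interaction, one way to follow up is to examine the within-subjects effect separately within each level of a, repeating the Example 1 strategy group by group. The sketch below assumes you create the constant and the difference scores fresh for this data set (con, d1, d2, and d3 are illustrative names); with only four subjects per group, these follow-up tests have very little power.

generate con = 1
generate d1 = y1 - y4
generate d2 = y2 - y4
generate d3 = y3 - y4

/* within-subjects test for the subjects with a==1 */
manova d1 d2 d3 = con if a==1, noconstant

/* within-subjects test for the subjects with a==2 */
manova d1 d2 d3 = con if a==2, noconstant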