Chi-squared difference tests are frequently used to test differences between nested models in confirmatory factor analysis, path analysis, and structural equation modeling. Nested models are two models (or more, if one is fitting a series of models) that are identical except that one model (the null model) constrains some of the parameters while the other (the alternative model) does not. Examples include an overall test for a set of dichotomous predictors representing a single nominal (categorical) variable, or a test for differences across groups in a multiple group model. Typically, performing a chi-squared difference test involves calculating the difference between the chi-squared statistics for the null and alternative models; the resulting test statistic is distributed chi-squared with degrees of freedom equal to the difference in degrees of freedom between the two models. However, when a model is run in Mplus using the MLMV or WLSMV estimators, the following warning message is displayed (as of Mplus 8.2), warning the user that the standard chi-squared difference test is not valid:

* The chi-square value for MLM, MLMV, MLR, ULSMV, WLSM and WLSMV cannot be used for chi-square difference testing in the regular way. MLM, MLR and WLSM chi-square difference testing is described on the Mplus website. MLMV, WLSMV, and ULSMV difference testing is done using the DIFFTEST option. The DIFFTEST option assumes the models are nested. The NESTED option can be used to verify that the models are nested.
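To make the warning concrete, the "regular way" of difference testing that the message rules out can be sketched in Python. This is exactly the computation that is *not* valid for the robust chi-square values listed above; the fit statistics used below are made-up numbers for illustration only. The helper uses the closed-form chi-squared survival function, which holds for even degrees of freedom.

```python
import math

def chi2_sf_even_df(x, df):
    """Survival function P(X > x) for a chi-squared variable with even df.

    For df = 2k the survival function has the closed form
    exp(-x/2) * sum_{i=0}^{k-1} (x/2)^i / i!.
    """
    if df % 2 != 0 or df < 2:
        raise ValueError("closed form implemented for even df only")
    k = df // 2
    term, total = 1.0, 1.0
    for i in range(1, k):
        term *= (x / 2) / i
        total += term
    return math.exp(-x / 2) * total

def chi2_difference_test(chi2_null, df_null, chi2_alt, df_alt):
    """Regular chi-squared difference test for nested models."""
    diff = chi2_null - chi2_alt    # constrained model fits no better
    df_diff = df_null - df_alt     # number of constraints tested
    return diff, df_diff, chi2_sf_even_df(diff, df_diff)

# Hypothetical fit statistics, not taken from the Mplus example below
diff, df_diff, p = chi2_difference_test(12.3, 10, 5.1, 8)
print(f"chi-square diff = {diff:.1f}, df = {df_diff}, p = {p:.4f}")
```

With ordinary maximum likelihood chi-square values this simple subtraction is the whole test; with MLM, MLMV, MLR, ULSMV, WLSM, or WLSMV, the difference of the scaled statistics no longer follows a chi-squared distribution, which is why Mplus provides the scaling corrections and the **difftest** option instead.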

If you are using the MLMV or WLSMV estimators you can use the **difftest** command to test for differences in model fit. Alternatively, you could use the **model test** command to calculate a Wald test for an equivalent null hypothesis. For more information on Wald tests, see our FAQ page:
How are the likelihood ratio, Wald, and Lagrange multiplier (score) tests different and/or similar? Note that while the **difftest** option is only available for the MLMV, WLSMV, and ULSMV estimators, the **model test** command can be used with a variety of models.
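As a sketch of the Wald-test alternative, the null hypothesis tested in the example below (that the coefficients for **x2** and **x3** are both zero) could be expressed with **model test** roughly as follows. The parameter labels **b2** and **b3** are names we chose; check the syntax against the Mplus User's Guide for your version.

```
data: file is hsb2.dat;
variable: names are id y x1 x2 x3;
  categorical are y;
model: y on x1
       x2 (b2)
       x3 (b3);
model test:
  b2 = 0;
  b3 = 0;
```

Here the labels in parentheses name the regression coefficients, and **model test** requests a joint Wald test that both labeled parameters equal zero.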
If you are unsure which estimator is being used in your model, you can find the estimator listed toward the top of the output file. Also keep in mind that this type of test is only valid if the models are nested; that is, the models must be identical except that one of them includes additional constraints on the parameters.

Below we show an example of how to use the **difftest** command to test nested models. In this simple example, the model contains four variables. The outcome variable **y** has three ordered categories, and the three predictor variables, **x1**, **x2**, and **x3**, could be either continuous or binary (0/1). For this example, we will test whether including the variables **x2** and **x3** as predictors of **y** significantly improves the fit of the model (i.e., whether the two variables together are statistically significant). When the outcome variable is an ordered categorical variable, the default estimator is WLSMV, so we will need to use **difftest** to test this hypothesis.

The first step is to run the full or unconstrained model (also denoted H1). Below is the Mplus input file where **x1**, **x2**, and **x3** are used to predict **y**. We have used the **categorical** option of the **variable** command to inform Mplus that the variable **y** is ordinal (the same option is used to identify binary outcome variables). We have used the **difftest** option of the **savedata** command to save the information necessary for our chi-squared difference test to the file **mydiff.dat**. The example dataset can be downloaded here.

data: file is hsb2.dat;
variable: names are id y x1 x2 x3;
  categorical are y;
model: y on x1 x2 x3;
savedata: difftest is mydiff.dat;

Once we have run the unconstrained model (above), we can then run the constrained or null model (also denoted H0). In the model below we have used the **difftest** option, this time in the **analysis** command, to tell Mplus we wish to compare the current model to the model saved in the file **mydiff.dat**. Recall that we want to test the null hypothesis that **x2** and **x3**, used together to predict **y**, do not significantly improve the fit of the model. To test this hypothesis we fix the coefficients for the variables **x2** and **x3** to zero using **@0**.

data: file is hsb2.dat;
variable: names are id y x1 x2 x3;
  categorical are y;
analysis: difftest = mydiff.dat;
model: y on x1 x2@0 x3@0;

In the output, immediately after the chi-square test of model fit, Mplus prints the results of the chi-square difference test (i.e., **difftest**). The resulting chi-square value of 3.528 with 2 degrees of freedom gives a p-value of 0.1714, so we cannot reject the null hypothesis that including **x2** and **x3** as predictors of **y** does not significantly improve model fit. Note that since we used a saturated model for H1, which has a chi-square value of zero, the chi-square difference between H1 and H0 is the same as the chi-square test for H0.

MODEL FIT INFORMATION

Number of Free Parameters                        2

Chi-Square Test of Model Fit

          Value                              3.528*
          Degrees of Freedom                     2
          P-Value                           0.1714

Chi-Square Test for Difference Testing

          Value                              3.528*
          Degrees of Freedom                     2
          P-Value                           0.1714

*   The chi-square value for MLM, MLMV, MLR, ULSMV, WLSM and WLSMV cannot be used
    for chi-square difference testing in the regular way. MLM, MLR and WLSM
    chi-square difference testing is described on the Mplus website. MLMV, WLSMV,
    and ULSMV difference testing is done using the DIFFTEST option. The DIFFTEST
    option assumes the models are nested. The NESTED option can be used to verify
    that the models are nested.
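As a quick sanity check on the printed p-value: with 2 degrees of freedom the chi-squared survival function reduces to exp(-x/2), so the reported p-value follows directly from the reported chi-square value.

```python
import math

# p-value for a chi-square statistic of 3.528 on 2 degrees of freedom;
# with df = 2 the survival function is simply exp(-x/2)
p_value = math.exp(-3.528 / 2)
print(round(p_value, 4))  # 0.1714, matching the Mplus output
```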