Table 6.1 on page 192
use https://stats.idre.ucla.edu/stat/stata/examples/alda/data/wages_pp
clist id lnw exper ged postexp if inlist(id,206,2365,4384), noobs
    id     lnw    exper   ged   postexp
   206   2.028    1.874     0         0
   206   2.297    2.814     0         0
   206   2.482    4.314     0         0
  2365   1.782      .66     0         0
  2365   1.763    1.679     0         0
  2365    1.71    2.737     0         0
  2365   1.736    3.679     0         0
  2365   2.192    4.679     1         0
  2365   2.042    5.718     1     1.038
  2365    2.32    6.718     1     2.038
  2365   2.665    7.872     1     3.192
  2365   2.418    9.083     1     4.404
  2365   2.389   10.045     1     5.365
  2365   2.485   11.122     1     6.442
  2365   2.445   12.045     1     7.365
  4384   2.859     .096     0         0
  4384   1.532    1.039     0         0
  4384    1.59    1.726     1         0
  4384   1.969    3.128     1     1.402
  4384   1.684    4.282     1     2.556
  4384   2.625    5.724     1     3.998
  4384   2.583    6.024     1     4.298
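The listing shows how the two discontinuity predictors behave: GED switches from 0 to 1 at the wave in which the dropout earns the credential, and POSTEXP then counts labor-force experience accumulated since that wave (it stays 0 beforehand). Both variables are already in the posted dataset; the sketch below, which uses the hypothetical names gedexper and postexp_check, only illustrates how POSTEXP could be reconstructed from EXPER and GED as a consistency check.
* a minimal sketch (not needed to reproduce the tables; postexp already exists)
bysort id (exper): egen gedexper = min(cond(ged==1, exper, .))
generate postexp_check = cond(ged==1, exper - gedexper, 0)
list id exper ged postexp postexp_check if id==2365, noobs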
Table 6.2 on page 203
* First create these interaction terms
generate experBYblack = exper * black
generate gedBYexper = ged * exper
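In current Stata the fixed-effect interaction could also be built on the fly with factor-variable notation instead of a hand-made product term; the manual variables are kept here because the random-effects part of mixed and the book's table labels refer to them by name. An equivalent way to write Model A's fixed part (a sketch, not run below) would be:
* equivalent fixed part using factor-variable notation (sketch only)
* mixed lnw c.exper c.hgc_9 c.exper#i.black c.ue_7 || id: exper, cov(un)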
* Model A: EXPER, HGC-9, BLACK*EXPER, UE-7
mixed lnw exper hgc_9 experBYblack ue_7 || id: exper , cov(un)
Performing EM optimization:
Performing gradient-based optimization:
Iteration 0: log likelihood = -2415.3186
Iteration 1: log likelihood = -2415.2596
Iteration 2: log likelihood = -2415.2595
Computing standard errors:
Mixed-effects ML regression Number of obs = 6402
Group variable: id Number of groups = 888
Obs per group: min = 1
avg = 7.2
max = 13
Wald chi2(4) = 488.69
Log likelihood = -2415.2595 Prob > chi2 = 0.0000
------------------------------------------------------------------------------
lnw | Coef. Std. Err. z P>|z| [95% Conf. Interval]
-------------+----------------------------------------------------------------
exper | .0440539 .0026034 16.92 0.000 .0389513 .0491564
hgc_9 | .040011 .0063627 6.29 0.000 .0275403 .0524816
experBYblack | -.0181832 .0044837 -4.06 0.000 -.0269711 -.0093953
ue_7 | -.0119504 .0017916 -6.67 0.000 -.015462 -.0084389
_cons | 1.748989 .0113993 153.43 0.000 1.726646 1.771331
------------------------------------------------------------------------------
------------------------------------------------------------------------------
Random-effects Parameters | Estimate Std. Err. [95% Conf. Interval]
-----------------------------+------------------------------------------------
id: Unstructured |
var(exper) | .0016317 .0002126 .001264 .0021064
var(_cons) | .0506369 .0048085 .0420374 .0609955
cov(exper,_cons) | -.0029129 .0008386 -.0045565 -.0012693
-----------------------------+------------------------------------------------
var(Residual) | .0947952 .0019382 .0910714 .0986711
------------------------------------------------------------------------------
LR test vs. linear regression: chi2(3) = 1423.34 Prob > chi2 = 0.0000
Note: LR test is conservative and provided only for reference
estat ic
------------------------------------------------------------------------------
Model | Obs ll(null) ll(model) df AIC BIC
-------------+----------------------------------------------------------------
| 6402 . -2415.26 9 4848.519 4909.398
------------------------------------------------------------------------------
di -2*e(ll)
4830.519
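The displayed value, -2 times the log likelihood, is the deviance statistic reported for Model A in the book's Table 6.2, and estat ic supplies the matching AIC and BIC. Storing each fit makes the nested comparisons among the Table 6.2 models easy to run later with lrtest; modelA below is a hypothetical name, not part of the original page.
* a minimal sketch: store the fit for later likelihood-ratio comparisons
estimates store modelA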
* Model B: A + GED as fixed and random effect
mixed lnw exper hgc_9 experBYblack ue_7 ged || id: exper ged, cov(un)
Performing EM optimization:
Performing gradient-based optimization:
Iteration 0: log likelihood = -2406.136
Iteration 1: log likelihood = -2403.4121
Iteration 2: log likelihood = -2402.7948
Iteration 3: log likelihood = -2402.7589
Iteration 4: log likelihood = -2402.7588
Computing standard errors:
Mixed-effects ML regression Number of obs = 6402
Group variable: id Number of groups = 888
Obs per group: min = 1
avg = 7.2
max = 13
Wald chi2(5) = 504.47
Log likelihood = -2402.7588 Prob > chi2 = 0.0000
------------------------------------------------------------------------------
lnw | Coef. Std. Err. z P>|z| [95% Conf. Interval]
-------------+----------------------------------------------------------------
exper | .0432238 .002621 16.49 0.000 .0380867 .0483609
hgc_9 | .0383335 .0062651 6.12 0.000 .026054 .0506129
experBYblack | -.0181999 .0044704 -4.07 0.000 -.0269616 -.0094381
ue_7 | -.0116087 .0017875 -6.49 0.000 -.0151122 -.0081052
ged | .0613145 .0184483 3.32 0.001 .0251565 .0974726
_cons | 1.734215 .0117994 146.97 0.000 1.711088 1.757341
------------------------------------------------------------------------------
------------------------------------------------------------------------------
Random-effects Parameters | Estimate Std. Err. [95% Conf. Interval]
-----------------------------+------------------------------------------------
id: Unstructured |
var(exper) | .0016605 .000219 .0012823 .0021503
var(ged) | .0282355 .0160348 .0092769 .0859387
var(_cons) | .0436049 .004872 .0350293 .05428
cov(exper,ged) | -.0021802 .0012771 -.0046832 .0003228
cov(exper,_cons) | -.0026171 .0008502 -.0042835 -.0009507
cov(ged,_cons) | .0023415 .0080749 -.0134851 .018168
-----------------------------+------------------------------------------------
var(Residual) | .0941633 .0019354 .0904453 .0980341
------------------------------------------------------------------------------
LR test vs. linear regression: chi2(6) = 1410.01 Prob > chi2 = 0.0000
Note: LR test is conservative and provided only for reference
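Model B nests Model A: it adds the GED fixed effect plus one variance and two covariances for the GED random effect. With both fits stored, the comparison can be run directly; this sketch assumes Model A was stored as modelA after the earlier estat ic step, and modelB is likewise a hypothetical name.
* a minimal sketch: likelihood-ratio comparison of Models A and B
estimates store modelB
lrtest modelA modelB
Using the log likelihoods shown above, the deviance drops by 2*(2415.2595 - 2402.7588), roughly 25.0, on 4 degrees of freedom; the same store-and-compare pattern applies to the other nested pairs in Table 6.2.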
* Model C: Model B without random effect of GED
mixed lnw exper hgc_9 experBYblack ue_7 ged || id: exper, cov(un)
Performing EM optimization ...
Performing gradient-based optimization:
Iteration 0: Log likelihood = -2409.217
Iteration 1: Log likelihood = -2409.1621
Iteration 2: Log likelihood = -2409.1621
Computing standard errors ...
Mixed-effects ML regression Number of obs = 6,402
Group variable: id Number of groups = 888
Obs per group:
min = 1
avg = 7.2
max = 13
Wald chi2(5) = 502.92
Log likelihood = -2409.1621 Prob > chi2 = 0.0000
----------------------------------------------------------------------------------
lnw | Coefficient Std. err. z P>|z| [95% conf. interval]
-----------------+----------------------------------------------------------------
exper | .0433271 .0026083 16.61 0.000 .0382149 .0484393
hgc_9 | .0390425 .006334 6.16 0.000 .0266282 .0514568
experBYblack | -.0185228 .0044603 -4.15 0.000 -.0272647 -.0097808
ue_7 | -.0115933 .0017926 -6.47 0.000 -.0151068 -.0080798
ged | .059123 .016867 3.51 0.000 .0260642 .0921818
_cons | 1.734305 .012134 142.93 0.000 1.710523 1.758087
----------------------------------------------------------------------------------
------------------------------------------------------------------------------
Random-effects parameters | Estimate Std. err. [95% conf. interval]
-----------------------------+------------------------------------------------
id: Unstructured |
var(exper) | .0016349 .0002121 .0012677 .0021083
var(_cons) | .0505774 .0048068 .0419816 .0609332
cov(exper,_cons) | -.0030369 .0008407 -.0046847 -.0013892
-----------------------------+------------------------------------------------
var(Residual) | .0947379 .0019368 .0910169 .0986109
------------------------------------------------------------------------------
LR test vs. linear model: chi2(3) = 1397.20 Prob > chi2 = 0.0000
Note: LR test is conservative and provided only for reference.
* Model D: A + POSTEXP as fixed and random effect
mixed lnw exper hgc_9 experBYblack ue_7 postexp || id: exper postexp , cov(un)
Performing EM optimization ...
Performing gradient-based optimization:
Iteration 0: Log likelihood = -2413.5415
Iteration 1: Log likelihood = -2409.1633
Iteration 2: Log likelihood = -2408.7298
Iteration 3: Log likelihood = -2408.6897
Iteration 4: Log likelihood = -2408.6887
Iteration 5: Log likelihood = -2408.6887
Computing standard errors ...
Mixed-effects ML regression Number of obs = 6,402
Group variable: id Number of groups = 888
Obs per group:
min = 1
avg = 7.2
max = 13
Wald chi2(5) = 503.11
Log likelihood = -2408.6887 Prob > chi2 = 0.0000
------------------------------------------------------------------------------------
lnw | Coefficient Std. err. z P>|z| [95% conf. interval]
-------------------+----------------------------------------------------------------
exper | .0406518 .0027773 14.64 0.000 .0352084 .0460953
hgc_9 | .0398777 .0063539 6.28 0.000 .0274243 .0523311
experBYblack | -.0194934 .0044745 -4.36 0.000 -.0282632 -.0107235
ue_7 | -.0118397 .0017906 -6.61 0.000 -.0153492 -.0083302
postexp | .0145948 .0045644 3.20 0.001 .0056487 .0235409
_cons | 1.749368 .011399 153.47 0.000 1.727027 1.77171
------------------------------------------------------------------------------------
------------------------------------------------------------------------------
Random-effects parameters | Estimate Std. err. [95% conf. interval]
-----------------------------+------------------------------------------------
id: Unstructured |
var(exper) | .0014484 .0002288 .0010627 .001974
var(postexp) | .0008801 .0014554 .0000344 .0224989
var(_cons) | .0505578 .0048115 .0419546 .060925
cov(exper,postexp) | -.0000499 .0007411 -.0015024 .0014026
cov(exper,_cons) | -.0024537 .000891 -.0042001 -.0007074
cov(postexp,_cons) | -.0020079 .0014201 -.0047911 .0007754
-----------------------------+------------------------------------------------
var(Residual) | .0946398 .0019373 .0909179 .0985141
------------------------------------------------------------------------------
LR test vs. linear model: chi2(6) = 1390.91 Prob > chi2 = 0.0000
Note: LR test is conservative and provided only for reference.
* Model E: Model D without random effect of POSTEXP
mixed lnw exper hgc_9 experBYblack ue_7 postexp || id: exper, cov(un)
Performing EM optimization ...
Performing gradient-based optimization:
Iteration 0: Log likelihood = -2410.4112
Iteration 1: Log likelihood = -2410.3532
Iteration 2: Log likelihood = -2410.3532
Computing standard errors ...
Mixed-effects ML regression Number of obs = 6,402
Group variable: id Number of groups = 888
Obs per group:
min = 1
avg = 7.2
max = 13
Wald chi2(5) = 503.58
Log likelihood = -2410.3532 Prob > chi2 = 0.0000
----------------------------------------------------------------------------------
lnw | Coefficient Std. err. z P>|z| [95% conf. interval]
-----------------+----------------------------------------------------------------
exper | .0405052 .0028287 14.32 0.000 .0349611 .0460493
hgc_9 | .0395349 .0063336 6.24 0.000 .0271213 .0519484
experBYblack | -.0191777 .0044529 -4.31 0.000 -.0279051 -.0104502
ue_7 | -.0118476 .0017908 -6.62 0.000 -.0153576 -.0083376
postexp | .0139616 .0044229 3.16 0.002 .0052928 .0226304
_cons | 1.749888 .0114112 153.35 0.000 1.727522 1.772253
----------------------------------------------------------------------------------
------------------------------------------------------------------------------
Random-effects parameters | Estimate Std. err. [95% conf. interval]
-----------------------------+------------------------------------------------
id: Unstructured |
var(exper) | .0016125 .0002108 .0012481 .0020833
var(_cons) | .0508545 .0048173 .0422375 .0612296
cov(exper,_cons) | -.0030438 .0008392 -.0046886 -.0013989
-----------------------------+------------------------------------------------
var(Residual) | .0948344 .0019386 .0911099 .0987113
------------------------------------------------------------------------------
LR test vs. linear model: chi2(3) = 1387.58 Prob > chi2 = 0.0000
Note: LR test is conservative and provided only for reference.
* Model F: Model A with fixed and random effects of GED and POSTEXP
mixed lnw exper hgc_9 experBYblack ue_7 ged postexp || id: exper ged postexp , cov(un)
Performing EM optimization ...
Performing gradient-based optimization:
Iteration 0: Log likelihood = -2403.6331
Iteration 1: Log likelihood = -2396.0811
Iteration 2: Log likelihood = -2395.4619
Iteration 3: Log likelihood = -2394.8356
Iteration 4: Log likelihood = -2394.7314
Iteration 5: Log likelihood = -2394.6959
Iteration 6: Log likelihood = -2394.682
Iteration 7: Log likelihood = -2394.6781
Iteration 8: Log likelihood = -2394.6772
Iteration 9: Log likelihood = -2394.677
Iteration 10: Log likelihood = -2394.677
Computing standard errors ...
Mixed-effects ML regression Number of obs = 6,402
Group variable: id Number of groups = 888
Obs per group:
min = 1
avg = 7.2
max = 13
Wald chi2(6) = 512.64
Log likelihood = -2394.677 Prob > chi2 = 0.0000
------------------------------------------------------------------------------------
lnw | Coefficient Std. err. z P>|z| [95% conf. interval]
-------------------+----------------------------------------------------------------
exper | .0414715 .0027969 14.83 0.000 .0359896 .0469534
hgc_9 | .0390293 .0062428 6.25 0.000 .0267935 .051265
experBYblack | -.0196198 .0044702 -4.39 0.000 -.0283812 -.0108583
ue_7 | -.011724 .0017828 -6.58 0.000 -.0152183 -.0082298
ged | .0408746 .0219893 1.86 0.063 -.0022237 .0839728
postexp | .0094226 .005545 1.70 0.089 -.0014454 .0202905
_cons | 1.738574 .0119418 145.59 0.000 1.715168 1.761979
------------------------------------------------------------------------------------
------------------------------------------------------------------------------
Random-effects parameters | Estimate Std. err. [95% conf. interval]
-----------------------------+------------------------------------------------
id: Unstructured |
var(exper) | .0013602 .0002172 .0009947 .00186
var(ged) | .0163064 .0176288 .0019594 .1357028
var(postexp) | .0033547 .0024014 .0008248 .0136448
var(_cons) | .0413233 .0047467 .0329929 .0517571
cov(exper,ged) | .0029296 .0040987 -.0051037 .010963
cov(exper,postexp) | -.0009119 .0012055 -.0032746 .0014509
cov(exper,_cons) | -.0017026 .0008261 -.0033218 -.0000834
cov(ged,postexp) | -.0039077 .0048779 -.0134682 .0056528
cov(ged,_cons) | .011967 .0096504 -.0069474 .0308813
cov(postexp,_cons) | -.0060473 .0028754 -.011683 -.0004116
-----------------------------+------------------------------------------------
var(Residual) | .0938735 .0019334 .0901596 .0977404
------------------------------------------------------------------------------
LR test vs. linear model: chi2(10) = 1416.42 Prob > chi2 = 0.0000
Note: LR test is conservative and provided only for reference.
* Model G: Model F without random effect of POSTEXP
mixed lnw exper hgc_9 experBYblack ue_7 ged postexp || id: exper ged, cov(un)
Performing EM optimization ...
Performing gradient-based optimization:
Iteration 0: Log likelihood = -2404.754
Iteration 1: Log likelihood = -2401.513
Iteration 2: Log likelihood = -2401.3452
Iteration 3: Log likelihood = -2401.3442
Iteration 4: Log likelihood = -2401.3442
Computing standard errors ...
Mixed-effects ML regression Number of obs = 6,402
Group variable: id Number of groups = 888
Obs per group:
min = 1
avg = 7.2
max = 13
Wald chi2(6) = 510.45
Log likelihood = -2401.3442 Prob > chi2 = 0.0000
----------------------------------------------------------------------------------
lnw | Coefficient Std. err. z P>|z| [95% conf. interval]
-----------------+----------------------------------------------------------------
exper | .041169 .002884 14.27 0.000 .0355164 .0468216
hgc_9 | .0383089 .0062634 6.12 0.000 .0260329 .0505849
experBYblack | -.018706 .0044699 -4.18 0.000 -.0274668 -.0099451
ue_7 | -.011635 .0017874 -6.51 0.000 -.0151383 -.0081318
ged | .0430652 .0213603 2.02 0.044 .0011998 .0849307
postexp | .0086629 .0051199 1.69 0.091 -.0013719 .0186976
_cons | 1.738931 .0121126 143.56 0.000 1.715191 1.762671
----------------------------------------------------------------------------------
------------------------------------------------------------------------------
Random-effects parameters | Estimate Std. err. [95% conf. interval]
-----------------------------+------------------------------------------------
id: Unstructured |
var(exper) | .0016507 .0002181 .0012741 .0021387
var(ged) | .028497 .0159503 .009514 .0853559
var(_cons) | .0434916 .0048584 .0349398 .0541367
cov(exper,ged) | -.0023478 .0012747 -.0048461 .0001505
cov(exper,_cons) | -.0025785 .0008468 -.0042382 -.0009187
cov(ged,_cons) | .0025346 .0080415 -.0132264 .0182955
-----------------------------+------------------------------------------------
var(Residual) | .0941735 .0019355 .0904553 .0980445
------------------------------------------------------------------------------
LR test vs. linear model: chi2(6) = 1403.08 Prob > chi2 = 0.0000
Note: LR test is conservative and provided only for reference.
* Model H: Model F without random effect of GED
mixed lnw exper hgc_9 experBYblack ue_7 ged postexp || id: exper postexp, cov(un)
Performing EM optimization ...
Performing gradient-based optimization:
Iteration 0: Log likelihood = -2411.2521
Iteration 1: Log likelihood = -2406.789
Iteration 2: Log likelihood = -2406.367
Iteration 3: Log likelihood = -2406.3211
Iteration 4: Log likelihood = -2406.3196
Iteration 5: Log likelihood = -2406.3196
Computing standard errors ...
Mixed-effects ML regression Number of obs = 6,402
Group variable: id Number of groups = 888
Obs per group:
min = 1
avg = 7.2
max = 13
Wald chi2(6) = 506.95
Log likelihood = -2406.3196 Prob > chi2 = 0.0000
------------------------------------------------------------------------------------
lnw | Coefficient Std. err. z P>|z| [95% conf. interval]
-------------------+----------------------------------------------------------------
exper | .0414689 .0028036 14.79 0.000 .0359739 .0469638
hgc_9 | .0393508 .0063509 6.20 0.000 .0269033 .0517984
experBYblack | -.0193504 .0044769 -4.32 0.000 -.028125 -.0105757
ue_7 | -.0116235 .0017922 -6.49 0.000 -.0151362 -.0081108
ged | .0425147 .0194866 2.18 0.029 .0043217 .0807077
postexp | .0085538 .0053292 1.61 0.108 -.0018913 .0189988
_cons | 1.738572 .0124205 139.98 0.000 1.714229 1.762916
------------------------------------------------------------------------------------
------------------------------------------------------------------------------
Random-effects parameters | Estimate Std. err. [95% conf. interval]
-----------------------------+------------------------------------------------
id: Unstructured |
var(exper) | .0014519 .000229 .0010658 .0019779
var(postexp) | .0007572 .0014844 .0000162 .0353116
var(_cons) | .0503678 .0048 .0417863 .0607116
cov(exper,postexp) | 5.04e-06 .0007562 -.001477 .0014871
cov(exper,_cons) | -.0024686 .0008901 -.0042132 -.0007241
cov(postexp,_cons) | -.0019191 .0014173 -.0046969 .0008587
-----------------------------+------------------------------------------------
var(Residual) | .0945791 .0019359 .0908599 .0984506
------------------------------------------------------------------------------
LR test vs. linear model: chi2(6) = 1393.13 Prob > chi2 = 0.0000
Note: LR test is conservative and provided only for reference.
* Model I: Model A with GED and GED*EXPER as fixed and random effects
* Note the error message Stata gives: the Hessian is not negative semidefinite,
* so estimation stops with error r(430) before standard errors can be computed
mixed lnw exper hgc_9 experBYblack ue_7 ged gedBYexper || id: exper ged gedBYexper, cov(un)
Performing EM optimization ...
Performing gradient-based optimization:
Iteration 0: Log likelihood = -2406.5791
Iteration 1: Log likelihood = -2399.8962
Iteration 2: Log likelihood = -2397.1101
Iteration 3: Log likelihood = -2396.8418
Iteration 4: Log likelihood = -2396.7558
Iteration 5: Log likelihood = -2396.754
Hessian is not negative semidefinite
r(430);
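The r(430) exit means Stata stopped because the Hessian was not negative semidefinite at the final iterate, so the fit could not be completed with all ten variance-covariance parameters free. One option (an assumption, not part of the original page) is to pass maximize options such as difficult with a larger iteration limit and trap the error so a do-file keeps running; the substantive fix used next is simply to drop the GED*EXPER random effect, which is Model J.
* a sketch: retry with harder maximization settings, trapping any error
capture noisily mixed lnw exper hgc_9 experBYblack ue_7 ged gedBYexper || id: exper ged gedBYexper, cov(un) difficult iterate(50)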
* Model J: Model I without random effect of GED*EXPER
mixed lnw exper hgc_9 experBYblack ue_7 ged gedBYexper || id: exper ged, cov(un)
Performing EM optimization ...
Performing gradient-based optimization:
Iteration 0: Log likelihood = -2405.5264
Iteration 1: Log likelihood = -2402.4512
Iteration 2: Log likelihood = -2402.3015
Iteration 3: Log likelihood = -2402.3007
Iteration 4: Log likelihood = -2402.3007
Computing standard errors ...
Mixed-effects ML regression Number of obs = 6,402
Group variable: id Number of groups = 888
Obs per group:
min = 1
avg = 7.2
max = 13
Wald chi2(6) = 506.33
Log likelihood = -2402.3007 Prob > chi2 = 0.0000
----------------------------------------------------------------------------------
lnw | Coefficient Std. err. z P>|z| [95% conf. interval]
-----------------+----------------------------------------------------------------
exper | .041881 .0029716 14.09 0.000 .0360568 .0477053
hgc_9 | .0382744 .0062638 6.11 0.000 .0259975 .0505513
experBYblack | -.0183285 .0044672 -4.10 0.000 -.027084 -.009573
ue_7 | -.0116265 .0017876 -6.50 0.000 -.0151301 -.0081228
ged | .0457071 .0246971 1.85 0.064 -.0026982 .0941124
gedBYexper | .0048722 .0050637 0.96 0.336 -.0050524 .0147968
_cons | 1.737771 .012387 140.29 0.000 1.713493 1.762049
----------------------------------------------------------------------------------
------------------------------------------------------------------------------
Random-effects parameters | Estimate Std. err. [95% conf. interval]
-----------------------------+------------------------------------------------
id: Unstructured |
var(exper) | .001655 .0002186 .0012775 .0021439
var(ged) | .0295583 .0162064 .010092 .086573
var(_cons) | .0436256 .0048717 .03505 .0542995
cov(exper,ged) | -.0022167 .0012751 -.0047159 .0002826
cov(exper,_cons) | -.0026077 .0008494 -.0042724 -.0009429
cov(ged,_cons) | .0016765 .0081424 -.0142822 .0176353
-----------------------------+------------------------------------------------
var(Residual) | .0941587 .0019351 .0904412 .0980289
------------------------------------------------------------------------------
LR test vs. linear model: chi2(6) = 1407.21 Prob > chi2 = 0.0000
Note: LR test is conservative and provided only for reference.
Table 6.3 on page 205
* Table 6.3: Model F (with discontinuities in elevation and slope)
mixed lnw exper hgc_9 experBYblack ue_7 ged postexp || id: exper ged postexp , cov(un)
Performing EM optimization:
Performing gradient-based optimization:
Iteration 0: log likelihood = -2403.6331
Iteration 1: log likelihood = -2396.7774
Iteration 2: log likelihood = -2395.373
Iteration 3: log likelihood = -2394.942
Iteration 4: log likelihood = -2394.7786
Iteration 5: log likelihood = -2394.7328
Iteration 6: log likelihood = -2394.6983
Iteration 7: log likelihood = -2394.6863
Iteration 8: log likelihood = -2394.6809
Iteration 9: log likelihood = -2394.6782
Iteration 10: log likelihood = -2394.6778
Iteration 11: log likelihood = -2394.6771
Iteration 12: log likelihood = -2394.677
Iteration 13: log likelihood = -2394.677
Computing standard errors:
Mixed-effects ML regression Number of obs = 6402
Group variable: id Number of groups = 888
Obs per group: min = 1
avg = 7.2
max = 13
Wald chi2(6) = 512.64
Log likelihood = -2394.677 Prob > chi2 = 0.0000
------------------------------------------------------------------------------
lnw | Coef. Std. Err. z P>|z| [95% Conf. Interval]
-------------+----------------------------------------------------------------
exper | .0414715 .0027969 14.83 0.000 .0359896 .0469534
hgc_9 | .0390293 .0062428 6.25 0.000 .0267936 .0512649
experBYblack | -.0196198 .0044702 -4.39 0.000 -.0283812 -.0108584
ue_7 | -.011724 .0017828 -6.58 0.000 -.0152183 -.0082298
ged | .0408748 .0219893 1.86 0.063 -.0022234 .0839731
postexp | .0094225 .005545 1.70 0.089 -.0014454 .0202904
_cons | 1.738574 .0119418 145.59 0.000 1.715168 1.761979
------------------------------------------------------------------------------
------------------------------------------------------------------------------
Random-effects Parameters | Estimate Std. Err. [95% Conf. Interval]
-----------------------------+------------------------------------------------
id: Unstructured |
var(exper) | .0013602 .0002172 .0009947 .00186
var(ged) | .0163072 .0176204 .0019617 .1355584
var(postexp) | .0033547 .0024021 .0008245 .0136502
var(_cons) | .0413234 .0047467 .0329929 .0517573
cov(exper,ged) | .0029301 .0041001 -.0051059 .0109661
cov(exper,postexp) | -.0009119 .0012058 -.0032753 .0014515
cov(exper,_cons) | -.0017028 .0008261 -.003322 -.0000836
cov(ged,postexp) | -.0039081 .0048797 -.0134722 .005656
cov(ged,_cons) | .0119658 .0096486 -.0069451 .0308766
cov(postexp,_cons) | -.006047 .0028756 -.011683 -.000411
-----------------------------+------------------------------------------------
var(Residual) | .0938736 .0019334 .0901597 .0977406
------------------------------------------------------------------------------
LR test vs. linear regression: chi2(10) = 1416.42 Prob > chi2 = 0.0000
Note: LR test is conservative and provided only for reference
Table 6.5 on page 221
* Use data file and generate variables used in models
use https://stats.idre.ucla.edu/stat/stata/examples/alda/data/external_pp, clear
generate time2 = time^2
generate time3 = time^3
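The raw powers of TIME are strongly collinear, which can contribute to the convergence trouble the cubic specification below runs into. A sketch of an alternative (not used in the book's tables) is to generate orthogonal polynomial codings with orthpoly and use those in place of time, time2, and time3; otime1-otime3 are hypothetical names.
* a sketch: orthogonal polynomial codings of time, less collinear than raw powers
orthpoly time, generate(otime1-otime3) degree(3)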
* Model A: no change
mixed external || id:
Performing EM optimization:
Performing gradient-based optimization:
Iteration 0: log likelihood = -1005.1265
Iteration 1: log likelihood = -1005.1265
Computing standard errors:
Mixed-effects ML regression Number of obs = 270
Group variable: id Number of groups = 45
Obs per group: min = 6
avg = 6.0
max = 6
Wald chi2(0) = .
Log likelihood = -1005.1265 Prob > chi2 = .
------------------------------------------------------------------------------
external | Coef. Std. Err. z P>|z| [95% Conf. Interval]
-------------+----------------------------------------------------------------
_cons | 12.96296 1.484126 8.73 0.000 10.05413 15.8718
------------------------------------------------------------------------------
------------------------------------------------------------------------------
Random-effects Parameters | Estimate Std. Err. [95% Conf. Interval]
-----------------------------+------------------------------------------------
id: Identity |
var(_cons) | 87.4179 20.92509 54.68265 139.7498
-----------------------------+------------------------------------------------
var(Residual) | 70.20296 6.618798 58.3584 84.45152
------------------------------------------------------------------------------
LR test vs. linear regression: chibar2(01) = 122.23 Prob >= chibar2 = 0.0000
* Model B: linear change
mixed external time || id: time, cov(un)
Performing EM optimization:
Performing gradient-based optimization:
Iteration 0: log likelihood = -995.87223
Iteration 1: log likelihood = -995.87223
Computing standard errors:
Mixed-effects ML regression Number of obs = 270
Group variable: id Number of groups = 45
Obs per group: min = 6
avg = 6.0
max = 6
Wald chi2(1) = 0.10
Log likelihood = -995.87223 Prob > chi2 = 0.7528
------------------------------------------------------------------------------
external | Coef. Std. Err. z P>|z| [95% Conf. Interval]
-------------+----------------------------------------------------------------
time | -.1307937 .4153307 -0.31 0.753 -.9448268 .6832395
_cons | 13.28995 1.835831 7.24 0.000 9.691785 16.88811
------------------------------------------------------------------------------
------------------------------------------------------------------------------
Random-effects Parameters | Estimate Std. Err. [95% Conf. Interval]
-----------------------------+------------------------------------------------
id: Unstructured |
var(time) | 4.692881 1.668154 2.338124 9.41915
var(_cons) | 123.5244 32.11053 74.21304 205.6009
cov(time,_cons) | -12.5379 5.991178 -24.28039 -.7954051
-----------------------------+------------------------------------------------
var(Residual) | 53.718 5.662374 43.69133 66.04567
------------------------------------------------------------------------------
LR test vs. linear regression: chi2(3) = 140.65 Prob > chi2 = 0.0000
Note: LR test is conservative and provided only for reference
* Model C: Quadratic change
mixed external time time2 || id: time time2, cov(un)
Performing EM optimization:
Performing gradient-based optimization:
Iteration 0: log likelihood = -988.87346 (not concave)
Iteration 1: log likelihood = -988.49724
Iteration 2: log likelihood = -987.97681
Iteration 3: log likelihood = -987.91833
Iteration 4: log likelihood = -987.91822
Iteration 5: log likelihood = -987.91822
Computing standard errors:
Mixed-effects ML regression Number of obs = 270
Group variable: id Number of groups = 45
Obs per group: min = 6
avg = 6.0
max = 6
Wald chi2(2) = 1.12
Log likelihood = -987.91822 Prob > chi2 = 0.5703
------------------------------------------------------------------------------
external | Coef. Std. Err. z P>|z| [95% Conf. Interval]
-------------+----------------------------------------------------------------
time | -1.150635 1.106775 -1.04 0.299 -3.319874 1.018604
time2 | .2039683 .2280452 0.89 0.371 -.2429922 .6509287
_cons | 13.96984 1.773708 7.88 0.000 10.49344 17.44625
------------------------------------------------------------------------------
------------------------------------------------------------------------------
Random-effects Parameters | Estimate Std. Err. [95% Conf. Interval]
-----------------------------+------------------------------------------------
id: Unstructured |
var(time) | 24.60966 12.20024 9.313704 65.02627
var(time2) | 1.215646 .5119956 .5324825 2.775294
var(_cons) | 107.0853 30.14047 61.68064 185.9135
cov(time,time2) | -4.96374 2.413652 -9.694411 -.2330688
cov(time,_cons) | -3.690436 14.16042 -31.44434 24.06347
cov(time2,_cons) | -1.361766 2.774842 -6.800357 4.076825
-----------------------------+------------------------------------------------
var(Residual) | 41.98364 5.110119 33.07307 53.29491
------------------------------------------------------------------------------
LR test vs. linear regression: chi2(6) = 156.11 Prob > chi2 = 0.0000
Note: LR test is conservative and provided only for reference
* Model D: Cubic change
* NOTE the error message saying that the standard error calculation failed
mixed external time time2 time3 || id: time time2 time3, cov(un)
Performing EM optimization:
Performing gradient-based optimization:
Iteration 0: log likelihood = -988.77213 (not concave)
Iteration 1: log likelihood = -988.36 (not concave)
Iteration 2: log likelihood = -988.23513 (not concave)
Iteration 3: log likelihood = -988.14158 (not concave)
Iteration 4: log likelihood = -987.76892 (not concave)
Iteration 5: log likelihood = -987.46259 (not concave)
Iteration 6: log likelihood = -987.2561 (not concave)
Iteration 7: log likelihood = -987.06939 (not concave)
Iteration 8: log likelihood = -986.87288 (not concave)
Iteration 9: log likelihood = -986.69027 (not concave)
Iteration 10: log likelihood = -986.50381 (not concave)
Iteration 11: log likelihood = -986.30598 (not concave)
Iteration 12: log likelihood = -986.10993 (not concave)
Iteration 13: log likelihood = -985.91443 (not concave)
Iteration 14: log likelihood = -985.85769 (not concave)
Iteration 15: log likelihood = -985.62638 (not concave)
Iteration 16: log likelihood = -985.4271 (not concave)
Iteration 17: log likelihood = -985.24116 (not concave)
Iteration 18: log likelihood = -985.0688 (not concave)
Iteration 19: log likelihood = -984.89875
Iteration 20: log likelihood = -983.94456
Iteration 21: log likelihood = -983.87592
Iteration 22: log likelihood = -983.71285
Iteration 23: log likelihood = -983.6908
Iteration 24: log likelihood = -983.68213
Iteration 25: log likelihood = -983.67943
Iteration 26: log likelihood = -983.67866
Iteration 27: log likelihood = -983.67841
Iteration 28: log likelihood = -983.67837
Iteration 29: log likelihood = -983.67836 (not concave)
Iteration 30: log likelihood = -983.67836
Computing standard errors:
standard error calculation failed
Mixed-effects ML regression Number of obs = 270
Group variable: id Number of groups = 45
Obs per group: min = 6
avg = 6.0
max = 6
Wald chi2(3) = 1.64
Log likelihood = -983.67836 Prob > chi2 = 0.6515
------------------------------------------------------------------------------
external | Coef. Std. Err. z P>|z| [95% Conf. Interval]
-------------+----------------------------------------------------------------
time | -.3500588 2.327935 -0.15 0.880 -4.912727 4.212609
time2 | -.2343034 1.059327 -0.22 0.825 -2.310547 1.84194
time3 | .0584362 .1300309 0.45 0.653 -.1964196 .313292
_cons | 13.79453 1.915966 7.20 0.000 10.03931 17.54976
------------------------------------------------------------------------------
------------------------------------------------------------------------------
Random-effects Parameters | Estimate Std. Err. [95% Conf. Interval]
-----------------------------+------------------------------------------------
id: Unstructured |
var(time) | 106.824 . . .
var(time2) | 16.65179 . . .
var(time3) | .1771647 . . .
var(_cons) | 128.869 . . .
cov(time,time2) | -41.13055 . . .
cov(time,time3) | 4.084077 . . .
cov(time,_cons) | -56.23649 . . .
cov(time2,time3) | -1.68473 . . .
cov(time2,_cons) | 24.61511 . . .
cov(time3,_cons) | -3.260058 . . .
-----------------------------+------------------------------------------------
var(Residual) | 37.82353 . . .
------------------------------------------------------------------------------
LR test vs. linear regression: chi2(10) = 164.53 Prob > chi2 = 0.0000
Note: LR test is conservative and provided only for reference
Figure 6.8 on page 227
use https://stats.idre.ucla.edu/stat/stata/examples/alda/data/foxngeese_pp, clear
graph twoway (scatter nmoves game) if inlist(id,1,4,6,7,8,11,12,15), by(id, cols(4))
Table 6.6 on page 231
* Skipped for now.

