Example
A researcher is trying to develop a new, less expensive test to detect a particular chemical in soil samples. The old test correlates with the criterion (i.e., the gold-standard measurement) at r = 0.89. The new test correlates with the criterion at r = 0.76. Two research assistants are asked to collect soil samples to compare the old test with the new test. The first research assistant collects 42 independent soil samples; the second research assistant collects 47 samples. Each runs a power analysis to determine the observed power.
Next, the research assistants are asked to calculate the number of independent samples necessary to detect a difference between 0.89 and 0.76 for power values of .7, .8 and .9.
Prelude to the Power Analysis
There are two different aspects of power analysis. One is to calculate the observed power for a specified sample size, as in the first part of the example. The other is to calculate the sample size necessary to achieve a specified power, as in the second part of the example. Technically, power is the probability of rejecting the null hypothesis when the specific alternative hypothesis is true. For all examples, we will assume alpha = 0.05.
Power Analysis
In SPSS, it is fairly straightforward to perform a power analysis for testing a correlation against a hypothesized value. We can use SPSS's power procedure for our calculation, as shown below. We use the pearson keyword and indicate that we have a single sample with the onesample keyword. Next, we use the parameters subcommand with several options. We specify the test = nondirectional option to request a two-tailed test and the adjust_bias = true option to request the bias adjustment to Fisher's z-transformation. We supply the null and alternative correlations (0.89 and 0.76, respectively) on the null and alternative options. The first research assistant collected 42 soil samples, so we specify 42 on the n option; we will then run the analysis a second time with 47 on the n option, the number of samples the second research assistant collected.
power pearson onesample
  /parameters test = nondirectional adjust_bias = true n = 42 null = 0.89 alternative = 0.76.

Power Analysis Table
Test                     Power(b)   N    Null   Alternative   Sig.
Pearson Correlation(a)   0.761      42   0.89   0.76          0.05
a. Two-sided test.
b. Based on Fisher's z-transformation and normal approximation with bias adjustment.

power pearson onesample
  /parameters test = nondirectional adjust_bias = true n = 47 null = 0.89 alternative = 0.76.

Power Analysis Table
Test                     Power(b)   N    Null   Alternative   Sig.
Pearson Correlation(a)   0.809      47   0.89   0.76          0.05
a. Two-sided test.
b. Based on Fisher's z-transformation and normal approximation with bias adjustment.
We can see that the first research assistant observed a power of approximately 0.76, while the second research assistant observed a power of approximately 0.81.
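The footnotes in the output note that these power values are based on Fisher's z-transformation and a normal approximation with bias adjustment. As a rough check on the SPSS results, the Python sketch below works through that approximation directly. It is only an illustration of the method under those assumptions, not SPSS's exact internal routine, and the helper name pearson_onesample_power is our own.

import numpy as np
from scipy.stats import norm

def pearson_onesample_power(n, null, alt, alpha=0.05):
    # Fisher's z-transformation of each correlation, with the usual
    # small-sample bias adjustment of r / (2 * (n - 1)).
    z_null = np.arctanh(null) + null / (2 * (n - 1))
    z_alt  = np.arctanh(alt)  + alt  / (2 * (n - 1))
    se = 1 / np.sqrt(n - 3)               # standard error of Fisher's z
    delta = abs(z_alt - z_null) / se      # standardized distance between hypotheses
    z_crit = norm.ppf(1 - alpha / 2)      # two-sided critical value
    # Probability of landing in either rejection region when the alternative is true.
    return norm.cdf(delta - z_crit) + norm.cdf(-delta - z_crit)

print(round(pearson_onesample_power(42, 0.89, 0.76), 3))   # approximately 0.761
print(round(pearson_onesample_power(47, 0.89, 0.76), 3))   # approximately 0.809

These values agree with the SPSS output above to about three decimal places.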
Now the research assistants need to calculate the necessary sample sizes for power values of .7, .8 and .9. In SPSS, we could run three separate power analyses, or we could run one analysis and get the results for all three levels of power in a single table. Let’s do that.
power pearson onesample
  /parameters test = nondirectional adjust_bias = true null = 0.89 alternative = 0.76 power = .7 to .9 by .1.

Power Analysis Table
Test                          N    Actual Power(b)   Power   Null   Alternative   Sig.
Pearson Correlation(a)   1    37   0.703             0.700   0.89   0.76          0.05
                         2    46   0.800             0.800   0.89   0.76          0.05
                         3    61   0.902             0.900   0.89   0.76          0.05
a. Two-sided test.
b. Based on Fisher's z-transformation and normal approximation with bias adjustment.
The required sample size for a power of .7 is 37. The required sample size for a power of .8 is 46, and the required sample size for a power of .9 is 61. This makes sense, because as power increases, the sample size must increase, assuming that alpha and the effect size are held constant.
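Continuing with the pearson_onesample_power helper sketched above (again, our own approximation rather than SPSS's internal computation), we can plug the reported sample sizes back in and recover values close to the Actual Power column:

for n in (37, 46, 61):
    print(n, round(pearson_onesample_power(n, 0.89, 0.76), 3))
# approximately 0.703, 0.800, and 0.902, in line with the Actual Power column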