## Examples

Example 1. A company that manufactures light bulbs claims that a particular type of light bulb will last 850 hours on average, with a standard deviation of 50 hours. A consumer protection group thinks that the manufacturer has overestimated the lifespan of its light bulbs by about 40 hours. How many light bulbs does the consumer protection group have to test in order to prove its point with reasonable confidence?

Example 2. It has been estimated that the average height of American white male adults is 70 inches. It has also been postulated that there is a positive correlation between height and intelligence. If this is true, then the average height of white male graduate students on campus should be greater than the average height of American white male adults in general. You want to test this theory by randomly sampling a small group of white male graduate students. But you need to know how small the group can be, that is, how few people you need to measure, such that you can still prove your point.

## Prelude to the Power Analysis

For the power analysis below, we are going to focus on Example 1, testing the average lifespan of a light bulb. Our first goal is to figure out the number of light bulbs that need to be tested. That is, we will determine the sample size for a given significance level and power. Next, we will reverse the process and determine the power, given the sample size and the significance level.

We know so far that the manufacturer claims that the average lifespan of the
light bulb is 850 hours with a standard deviation of 50, and the consumer protection
group believes that the manufacturer has overestimated it by about 40 hours. So
in terms of hypotheses, our null hypothesis is H_{0}: m = 850
and our alternative hypothesis is H_{a}: m = 810.

The significance level is the probability of a Type I error, that is, the
probability of rejecting H_{0} when it is actually true. We will set it
at the .05 level. The power of the test against H_{a} is the probability
that the test rejects H_{0} when H_{a} is true. We will set it at the .90 level.

We are almost ready for our power analysis, but let's talk about the standard deviation a little bit. Intuitively, the number of light bulbs we need to test depends on the variability of their lifespans. Take an extreme case where all the light bulbs have exactly the same lifespan: then we would only need to check a single light bulb to prove our point. Of course, this will never happen. On the other hand, suppose that some light bulbs last 1000 hours and some only last 500 hours. We would have to select quite a few light bulbs to cover all the ground. Therefore, the standard deviation of the distribution of light bulb lifespans plays an important role in determining the sample size.

## Power Analysis

In Stata, it is fairly straightforward to perform a power analysis for
comparing means. For example, we can use Stata's **sampsi** command for our
calculation as shown below. First, we specify the two means: the mean for the
null hypothesis and the mean for the alternative hypothesis. Then we specify the
standard deviation for the population. The default significance level (alpha
level) is .05, so we do not need to specify it. The power is set to .9.
Last, we tell Stata that we are performing a one-sample test.

```
sampsi 850 810, sd(50) power(.9) onesamp

Estimated sample size for one-sample comparison of mean
  to hypothesized value

Test Ho: m = 850, where m is the mean in the population

Assumptions:

         alpha =   0.0500  (two-sided)
         power =   0.9000
 alternative m =      810
            sd =       50

Estimated required sample size:

             n =       17
```

The result tells us that we need a sample size of at least 17
light bulbs to achieve a power of 0.9 for rejecting H_{0} when the
alternative hypothesis H_{a} is true.
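This figure can be checked by hand: **sampsi**'s default calculation here is the normal-approximation formula n = ((z_{1-alpha/2} + z_{power}) * sd / delta)^2, rounded up, where delta is the difference between the two means. Here is a minimal Python sketch of that formula (the function name is ours, for illustration only):

```python
import math
from statistics import NormalDist

def required_n(mu0, mu1, sd, alpha=0.05, power=0.9):
    """Sample size for a two-sided one-sample test, normal approximation."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # about 1.96 for alpha = .05
    z_power = NormalDist().inv_cdf(power)          # about 1.28 for power = .90
    effect = abs(mu0 - mu1) / sd                   # standardized effect size
    return math.ceil(((z_alpha + z_power) / effect) ** 2)

print(required_n(850, 810, 50))  # 17, matching the sampsi output
```

Changing the `power` or `sd` arguments reproduces the later sampsi runs as well (for example, `power=0.8` gives 13).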

Next, suppose we have a sample of size 10. How much power do we have, keeping all of the other
numbers the same? We can use the same command, **sampsi**, to calculate it.

```
sampsi 850 810, sd(50) n(10) onesamp

Estimated power for one-sample comparison of mean
  to hypothesized value

Test Ho: m = 850, where m is the mean in the population

Assumptions:

         alpha =   0.0500  (two-sided)
 alternative m =      810
            sd =       50
   sample size =       10

Estimated power:

         power =   0.7156
```

You can see that the power is about .72 for a sample size of 10. What, then, is the power for a sample size of 15?

```
sampsi 850 810, sd(50) n(15) onesamp

Estimated power for one-sample comparison of mean
  to hypothesized value

Test Ho: m = 850, where m is the mean in the population

Assumptions:

         alpha =   0.0500  (two-sided)
 alternative m =      810
            sd =       50
   sample size =       15

Estimated power:

         power =   0.8725
```

So now the power is about .87. You can run the command again to find the power for a sample size of 20; as you would expect, the power will be greater.

```
sampsi 850 810, sd(50) n(20) onesamp

Estimated power for one-sample comparison of mean
  to hypothesized value

Test Ho: m = 850, where m is the mean in the population

Assumptions:

         alpha =   0.0500  (two-sided)
 alternative m =      810
            sd =       50
   sample size =       20

Estimated power:

         power =   0.9471
```
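The power figures from the three runs above can likewise be reproduced by hand. Under the same normal approximation, power = Phi(sqrt(n) * delta/sd - z_{1-alpha/2}), where Phi is the standard normal CDF. A small Python sketch (again, the function name is just for illustration):

```python
import math
from statistics import NormalDist

def power_for_n(mu0, mu1, sd, n, alpha=0.05):
    """Power of a two-sided one-sample test, normal approximation."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    effect = abs(mu0 - mu1) / sd
    return NormalDist().cdf(math.sqrt(n) * effect - z_alpha)

for n in (10, 15, 20):
    print(n, round(power_for_n(850, 810, 50, n), 4))
    # approximately 0.7156, 0.8725, 0.9471 -- matching sampsi
```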

We can also expect that if we specify a lower power, or if the standard deviation is smaller, then the required sample size will be smaller. We can experiment with different values of power and standard deviation as shown below.

```
sampsi 850 810, sd(50) power(.8) onesamp

Estimated sample size for one-sample comparison of mean
  to hypothesized value

Test Ho: m = 850, where m is the mean in the population

Assumptions:

         alpha =   0.0500  (two-sided)
         power =   0.8000
 alternative m =      810
            sd =       50

Estimated required sample size:

             n =       13
```

If the standard deviation is lower, then the sample size should also go down, as we discussed before.

```
sampsi 850 810, sd(30) power(.8) onesamp

Estimated sample size for one-sample comparison of mean
  to hypothesized value

Test Ho: m = 850, where m is the mean in the population

Assumptions:

         alpha =   0.0500  (two-sided)
         power =   0.8000
 alternative m =      810
            sd =       30

Estimated required sample size:

             n =        5
```

## Discussion

There is another technical assumption, the normality assumption. If the variable is not normally distributed, a small sample size usually will not have the power indicated in the results, because those results are calculated using the common method based on the normality assumption. It might not even be a good idea to do a t-test on such a small sample to begin with if the normality assumption is in question.

Here is another technical point. What we really need to know is the difference between the two means, not their individual values. In fact, what really matters is the difference between the means divided by the standard deviation. We call this ratio the effect size. For example, we would get the same power if we subtracted 800 from each mean, changing 850 to 50 and 810 to 10.

```
sampsi 50 10, sd(50) power(.9) onesamp

Estimated sample size for one-sample comparison of mean
  to hypothesized value

Test Ho: m = 50, where m is the mean in the population

Assumptions:

         alpha =   0.0500  (two-sided)
         power =   0.9000
 alternative m =       10
            sd =       50

Estimated required sample size:

             n =       17
```

If we standardize our variable, we can express the difference between the means in units of standard deviations.

```
sampsi 1 .2, sd(1) power(.9) onesamp

Estimated sample size for one-sample comparison of mean
  to hypothesized value

Test Ho: m = 1, where m is the mean in the population

Assumptions:

         alpha =   0.0500  (two-sided)
         power =   0.9000
 alternative m =       .2
            sd =        1

Estimated required sample size:

             n =       17
```
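As a quick check of this point, a short Python sketch (the helper name is hypothetical) shows that all three parameterizations above reduce to the same effect size, and hence the same required sample size, under the normal-approximation formula:

```python
import math
from statistics import NormalDist

def required_n_from_effect(effect, alpha=0.05, power=0.9):
    """Required n as a function of the standardized effect size alone."""
    z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
    return math.ceil((z / effect) ** 2)

# The three sampsi runs -- (850, 810, 50), (50, 10, 50), (1, .2, 1) --
# all share the same effect size, so they need the same sample size.
for mu0, mu1, sd in [(850, 810, 50), (50, 10, 50), (1, 0.2, 1)]:
    effect = abs(mu0 - mu1) / sd
    print(effect, required_n_from_effect(effect))  # 0.8 and n = 17 each time
```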

It is usually not an easy task to determine the “true” effect size. We make our best guess based upon the existing literature or a pilot study. A good estimate of the effect size is the key to a successful power analysis.

## See Also

- Related Stata Commands
  - **sampsi** — Sample size and power determination.
- References
  - D. Moore and G. McCabe, Introduction to the Practice of Statistics, Third Edition, Section 6.4.