Examples
Example 1. A company that manufactures light bulbs claims that a particular type of light bulb will last 850 hours on average, with a standard deviation of 50 hours. A consumer protection group thinks that the manufacturer has overestimated the lifespan of their light bulbs by about 40 hours. How many light bulbs does the consumer protection group have to test in order to prove their point with reasonable confidence?
Example 2. It has been estimated that the average height of American white male adults is 70 inches. It has also been postulated that there is a positive correlation between height and intelligence. If this is true, then the average height of white male graduate students on campus should be greater than the average height of American white male adults in general. You want to test this theory by randomly sampling a small group of white male graduate students. But you need to know how small the group can be, that is, how few people you need to measure, such that you can still prove your point.
Prelude to the Power Analysis
For the power analysis below, we are going to focus on Example 1, testing the average lifespan of a light bulb. Our first goal is to figure out the number of light bulbs that need to be tested. That is, we will determine the sample size for a given significance level and power. Next, we will reverse the process and determine the power, given the sample size and the significance level.
We know so far that the manufacturer claims that the average lifespan of the light bulb is 850 hours with a standard deviation of 50 hours, and the consumer protection group believes that the manufacturer has overestimated by about 40 hours. So in terms of hypotheses, our null hypothesis is H_{0}: m = 850 and our alternative hypothesis is H_{a}: m = 810.
The significance level is the probability of a Type I error, that is, the probability of rejecting H_{0} when it is actually true. We will set it at the .05 level. The power of the test against H_{a} is the probability that the test rejects H_{0} when H_{a} is true. We will set it at the .90 level.
We are almost ready for our power analysis. But let's talk about the standard deviation a little bit first. Intuitively, the number of light bulbs we need to test depends on the variability of the lifespan of these light bulbs. Take an extreme case where all the light bulbs have exactly the same lifespan. Then we would just need to check a single light bulb to prove our point. Of course, this will never happen. On the other hand, suppose that some light bulbs last for 1000 hours and some only last 500 hours. We would have to select quite a few light bulbs to cover all the possibilities. Therefore, the standard deviation of the distribution of light bulb lifespans plays an important role in determining the sample size.
Power Analysis
In Stata, it is fairly straightforward to perform a power analysis for comparing means. For example, we can use Stata's power command for our calculation as shown below. First, we use the onemean method to indicate that we are testing a single mean. Next, we specify the two means: the mean for the null hypothesis and the mean for the alternative hypothesis. Then we specify the desired power (.9) and the standard deviation for the population. The default significance level (alpha level) is .05, so we do not need to specify it.
power onemean 850 810, power(.9) sd(50)

Performing iteration ...

Estimated sample size for a one-sample mean test
t test
H0: m = m0  versus  Ha: m != m0

Study parameters:

        alpha =    0.0500
        power =    0.9000
        delta =    0.8000
           m0 =  850.0000
           ma =  810.0000
           sd =   50.0000

Estimated sample size:

            N =        19
The result tells us that we need a sample size of at least 19 light bulbs to reject H_{0} with a power of 0.9 under the alternative hypothesis H_{a}.
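As a cross-check outside Stata, the same sample-size search can be sketched in Python. This is an illustration, not part of the original analysis: it assumes the standard two-sided one-sample t-test power formula based on the noncentral t distribution (computed here with scipy), which is the common method behind calculations like this one.

```python
from math import sqrt
from scipy import stats

def onesample_t_power(effect_size, n, alpha=0.05):
    """Power of a two-sided one-sample t-test, via the noncentral t distribution."""
    df = n - 1
    ncp = effect_size * sqrt(n)             # noncentrality parameter
    crit = stats.t.ppf(1 - alpha / 2, df)   # two-sided critical value
    return (1 - stats.nct.cdf(crit, df, ncp)) + stats.nct.cdf(-crit, df, ncp)

# Effect size for H0: m = 850 vs Ha: m = 810 with sd = 50
d = abs(850 - 810) / 50   # 0.8

# Search for the smallest sample size that achieves power 0.9
n = 2
while onesample_t_power(d, n) < 0.9:
    n += 1
print(n)  # smallest n with power >= 0.9; matches Stata's N = 19
```

Power is not achieved exactly at the estimated sample size; the search returns the smallest whole n whose power meets or exceeds the target, which is why Stata's N = 19 actually gives slightly more than 0.9 power.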
Next, suppose we have a sample of size 10. How much power do we have, keeping all of the other numbers the same? We can use the same command, power, to calculate it.
power onemean 850 810, n(10) sd(50)

Estimated power for a one-sample mean test
t test
H0: m = m0  versus  Ha: m != m0

Study parameters:

        alpha =    0.0500
            N =        10
        delta =    0.8000
           m0 =  850.0000
           ma =  810.0000
           sd =   50.0000

Estimated power:

        power =    0.6162
You can see that the power is about .62 for a sample size of 10. What, then, is the power for a sample size of 15?
power onemean 850 810, n(15) sd(50)

Estimated power for a one-sample mean test
t test
H0: m = m0  versus  Ha: m != m0

Study parameters:

        alpha =    0.0500
            N =        15
        delta =    0.8000
           m0 =  850.0000
           ma =  810.0000
           sd =   50.0000

Estimated power:

        power =    0.8213
So now the power is about .82. You could run the command again to find the power for a sample size of 20. You would probably expect the power to be greater.
power onemean 850 810, n(20) sd(50)

Estimated power for a one-sample mean test
t test
H0: m = m0  versus  Ha: m != m0

Study parameters:

        alpha =    0.0500
            N =        20
        delta =    0.8000
           m0 =  850.0000
           ma =  810.0000
           sd =   50.0000

Estimated power:

        power =    0.9239
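The whole pattern of power rising with sample size can be reproduced in a few lines of Python. As before, this sketch is only an illustration under the assumption that the standard noncentral-t formula (via scipy) matches what power onemean computes; it recovers values close to Stata's .6162, .8213, and .9239.

```python
from math import sqrt
from scipy import stats

def onesample_t_power(effect_size, n, alpha=0.05):
    """Power of a two-sided one-sample t-test, via the noncentral t distribution."""
    df = n - 1
    ncp = effect_size * sqrt(n)
    crit = stats.t.ppf(1 - alpha / 2, df)
    return (1 - stats.nct.cdf(crit, df, ncp)) + stats.nct.cdf(-crit, df, ncp)

d = abs(850 - 810) / 50   # effect size 0.8
for n in (10, 15, 20):
    print(n, round(onesample_t_power(d, n), 4))
# power rises with n: roughly .62, .82, and .92
```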
We can also expect that if we specify a lower power, or if the standard deviation is smaller, then the required sample size will be smaller as well. We can experiment with different values of power and standard deviation as shown below.
power onemean 850 810, power(.8) sd(50)

Performing iteration ...

Estimated sample size for a one-sample mean test
t test
H0: m = m0  versus  Ha: m != m0

Study parameters:

        alpha =    0.0500
        power =    0.8000
        delta =    0.8000
           m0 =  850.0000
           ma =  810.0000
           sd =   50.0000

Estimated sample size:

            N =        15
If the standard deviation is smaller, then the sample size should also go down, as we discussed before.
power onemean 850 810, power(.8) sd(30)

Performing iteration ...

Estimated sample size for a one-sample mean test
t test
H0: m = m0  versus  Ha: m != m0

Study parameters:

        alpha =    0.0500
        power =    0.8000
        delta =    1.3333
           m0 =  850.0000
           ma =  810.0000
           sd =   30.0000

Estimated sample size:

            N =         7
Discussion
There is another technical assumption, the normality assumption. If the variable is not normally distributed, a small sample usually will not have the power indicated in the results, because those results are calculated using the common method based on the normality assumption. It might not even be a good idea to run a t-test on such a small sample to begin with if the normality assumption is in question.
Here is another technical point. What we really need to know is the difference between the two means, not their individual values. In fact, what really matters is the difference of the means divided by the standard deviation. We call this ratio the effect size. For example, we would get the same sample size if we subtracted 800 from each mean, changing 850 to 50 and 810 to 10.
power onemean 50 10, power(.9) sd(50)

Performing iteration ...

Estimated sample size for a one-sample mean test
t test
H0: m = m0  versus  Ha: m != m0

Study parameters:

        alpha =    0.0500
        power =    0.9000
        delta =    0.8000
           m0 =   50.0000
           ma =   10.0000
           sd =   50.0000

Estimated sample size:

            N =        19
If we standardize our variable, we can express the means in units of standard deviations.
power onemean 1 .2, power(.9) sd(1)

Performing iteration ...

Estimated sample size for a one-sample mean test
t test
H0: m = m0  versus  Ha: m != m0

Study parameters:

        alpha =    0.0500
        power =    0.9000
        delta =    0.8000
           m0 =    1.0000
           ma =    0.2000
           sd =    1.0000

Estimated sample size:

            N =        19
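The invariance above is just arithmetic: each of the three parameterizations in the text has the same effect size, so the power analysis is identical. A quick Python check (the scenario list simply restates the examples from the text):

```python
# (m0, ma, sd) triples taken from the three power onemean calls above
scenarios = [(850, 810, 50), (50, 10, 50), (1, 0.2, 1)]
for m0, ma, sd in scenarios:
    d = abs(m0 - ma) / sd   # effect size: difference of means over sd
    print((m0, ma, sd), "-> effect size", d)
# every scenario gives d = 0.8, hence the same N = 19
```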
It is usually not an easy task to determine the "true" effect size. We make our best guess based upon the existing literature, a pilot study, or the smallest effect size of interest. A good estimate of the effect size is the key to a successful power analysis.
See Also

 Related Stata Commands
 power oneway — Sample size and power determination.
 References
 D. Moore and G. McCabe, Introduction to the Practice of Statistics, Third Edition, Section 6.4.