9.3 Probability Distribution Needed for Hypothesis Testing

Earlier in the course, we discussed sampling distributions. Particular distributions are associated with various types of hypothesis testing.

The following table summarizes various hypothesis tests and corresponding probability distributions that will be used to conduct the test (based on the assumptions shown below):

Type of Hypothesis Test | Population Parameter | Estimated Value (Point Estimate) | Probability Distribution Used
Hypothesis test for the mean, when the population standard deviation is known | Population mean μ | Sample mean x̄ | Normal distribution, X̄ ~ N(μ, σ/√n)
Hypothesis test for the mean, when the population standard deviation is unknown and the distribution of the sample mean is approximately normal | Population mean μ | Sample mean x̄ | Student's t-distribution, t(df)
Hypothesis test for proportions | Population proportion p | Sample proportion p′ | Normal distribution, P′ ~ N(p, √(pq/n))

Assumptions

When you perform a hypothesis test of a single population mean μ using a normal distribution (often called a z-test), you take a simple random sample from the population. The population you are testing is normally distributed, or your sample size is sufficiently large. You know the value of the population standard deviation, which, in reality, is rarely known.

When you perform a hypothesis test of a single population mean μ using a Student's t-distribution (often called a t -test), there are fundamental assumptions that need to be met in order for the test to work properly. Your data should be a simple random sample that comes from a population that is approximately normally distributed. You use the sample standard deviation to approximate the population standard deviation. (Note that if the sample size is sufficiently large, a t -test will work even if the population is not approximately normally distributed).

When you perform a hypothesis test of a single population proportion p, you take a simple random sample from the population. You must meet the conditions for a binomial distribution: there are a certain number n of independent trials, the outcomes of any trial are success or failure, and each trial has the same probability of a success p. The shape of the binomial distribution needs to be similar to the shape of the normal distribution. To ensure this, the quantities np and nq must both be greater than five (np > 5 and nq > 5). Then the binomial distribution of a sample (estimated) proportion can be approximated by the normal distribution with μ = p and σ = √(pq/n). Remember that q = 1 − p.
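As a quick sketch of these conditions, the rule of thumb above can be checked in code before running a proportion test. The function name and example numbers here are illustrative, not from the text:

```python
import math

def proportion_test_conditions(n, p):
    """Check the text's rule of thumb for the normal approximation:
    np > 5 and nq > 5, with q = 1 - p. If the conditions hold, return
    the mean and standard deviation of the approximating normal
    distribution for the sample proportion."""
    q = 1 - p
    if not (n * p > 5 and n * q > 5):
        raise ValueError("np and nq must both exceed 5")
    mu = p                        # mean of the sampling distribution of p-hat
    sigma = math.sqrt(p * q / n)  # standard deviation sqrt(pq/n)
    return mu, sigma

# Example: n = 100 trials with hypothesized p = 0.5
mu, sigma = proportion_test_conditions(100, 0.5)
print(mu, round(sigma, 3))  # 0.5 0.05
```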

Hypothesis Test for the Mean

Going back to the standardizing formula, Z = (X̄ − μ)/(σ/√n), we can derive the test statistic for testing hypotheses concerning means.

The standardizing formula cannot be solved as it is because we do not have μ, the population mean. However, if we substitute the hypothesized value of the mean, μ0, into the formula, we can compute a Z value:

Zc = (X̄ − μ0)/(σ/√n)

This is the test statistic for a test of hypothesis for a mean and is presented in Figure 9.3. We interpret this Z value in terms of the probability that a sample with a sample mean of X̄ could have come from a distribution with a population mean of μ0, and we call this Z value Zc for "calculated". Figure 9.3 and Figure 9.4 show this process.

In Figure 9.3 two of the three possible outcomes are presented. X̄1 and X̄3 are in the tails of the hypothesized distribution of H0. Notice that the horizontal axis in the top panel is labeled X̄'s. This is the same theoretical distribution of X̄'s, the sampling distribution, that the Central Limit Theorem tells us is normally distributed. This is why we can draw it with this shape. The horizontal axis of the bottom panel is labeled Z and is the standard normal distribution. Zα/2 and −Zα/2, called the critical values, are marked on the bottom panel as the Z values associated with the probability the analyst has set as the level of significance in the test, α. The probabilities in the tails of both panels are, therefore, the same.

Notice that for each X̄ there is an associated Zc, called the calculated Z, that comes from solving the equation above. This calculated Z is nothing more than the number of standard deviations that the sample mean is from the hypothesized mean. If the sample mean falls "too many" standard deviations from the hypothesized mean, we conclude that the sample mean could not have come from the distribution with the hypothesized mean, given our pre-set required level of significance. It could have come from H0, but it is deemed just too unlikely. In Figure 9.3 both X̄1 and X̄3 are in the tails of the distribution. They are deemed "too far" from the hypothesized value of the mean given the chosen level of alpha. If in fact this sample mean did come from the hypothesized distribution, but from out in the tail, we have made a Type I error: we have rejected a good null. Our only real comfort is that we know the probability of making such an error, α, and we can control the size of α.

Figure 9.4 shows the third possibility for the location of the sample mean, X̄. Here the sample mean is within the two critical values, that is, within the central (1 − α) region of probability, and we cannot reject the null hypothesis.

This gives us the decision rule for testing a hypothesis for a two-tailed test:

Decision rule: two-tail test
If |Zc| < Zα/2: then do not REJECT H0
If |Zc| ≥ Zα/2: then REJECT H0

This rule will always be the same no matter what hypothesis we are testing or what formulas we are using to make the test. The only change will be to change the Z c to the appropriate symbol for the test statistic for the parameter being tested. Stating the decision rule another way: if the sample mean is unlikely to have come from the distribution with the hypothesized mean we cannot accept the null hypothesis. Here we define "unlikely" as having a probability less than alpha of occurring.
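The two-tailed decision rule can be sketched with Python's standard library (the function name and sample numbers are invented for illustration; `NormalDist` supplies the critical value that a printed Z table would):

```python
from statistics import NormalDist

def two_tailed_z_decision(x_bar, mu_0, sigma, n, alpha=0.05):
    """Reject H0 if |Zc| >= Z_{alpha/2}, otherwise do not reject."""
    z_c = (x_bar - mu_0) / (sigma / n ** 0.5)     # calculated test statistic
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)  # critical value Z_{alpha/2}
    decision = "reject H0" if abs(z_c) >= z_crit else "do not reject H0"
    return decision, z_c

# Hypothetical sample: x-bar = 102 from n = 64 observations, sigma = 8
decision, z_c = two_tailed_z_decision(x_bar=102, mu_0=100, sigma=8, n=64)
print(decision, round(z_c, 2))  # reject H0 2.0
```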

P-Value Approach

An alternative decision rule can be developed by calculating the probability of finding a sample mean that would give a test statistic more extreme than the one found from the current sample data, assuming that the null hypothesis is true. Here the notions of "likely" and "unlikely" are defined by the probability of drawing a sample, from a population with the hypothesized mean, whose mean is farther from the hypothesized value than that found in the sample data. Simply stated, the p-value approach compares the desired significance level, α, to the p-value, which is the probability of drawing a sample mean further from the hypothesized value than the actual sample mean. A large p-value calculated from the data indicates that we should not reject the null hypothesis. The smaller the p-value, the more unlikely the outcome, and the stronger the evidence against the null hypothesis. We would reject the null hypothesis if the evidence is strongly against it. The relationship between the decision rule comparing the calculated test statistic, Zc, with the critical value, Zα, and the decision rule using the p-value can be seen in Figure 9.5.

The calculated value of the test statistic, Zc, is marked on the bottom graph of the standard normal distribution because it is a Z value. In this case the calculated value is in the tail, and thus we cannot accept the null hypothesis: the associated X̄ is just too unusually large to believe that it came from the distribution with a mean of µ0 at a significance level of α.

If we use the p-value decision rule we need one more step: find in the standard normal table the probability associated with the calculated test statistic, Zc, and compare it to the α associated with our selected level of confidence. In Figure 9.5 we see that the p-value is less than α; we know this because the area in the tail beyond Zc is smaller than α/2. It is important to note that two researchers drawing randomly from the same population may find two different p-values from their samples. This occurs because the p-value is calculated as the probability in the tail beyond the sample mean, assuming that the null hypothesis is correct. Because the sample means will in all likelihood be different, the two p-values will differ as well. Nevertheless, the two researchers' conclusions about the null hypothesis will differ only with a probability governed by α.

Here is a systematic way to make a decision of whether you cannot accept or cannot reject a null hypothesis if using the p -value and a preset or preconceived α (the " significance level "). A preset α is the probability of a Type I error (rejecting the null hypothesis when the null hypothesis is true). It may or may not be given to you at the beginning of the problem. In any case, the value of α is the decision of the analyst. When you make a decision to reject or not reject H 0 , do as follows:

  • If α > p -value, cannot accept H 0 . The results of the sample data are significant. There is sufficient evidence to conclude that H 0 is an incorrect belief and that the alternative hypothesis , H a , may be correct.
  • If α ≤ p -value, cannot reject H 0 . The results of the sample data are not significant. There is not sufficient evidence to conclude that the alternative hypothesis, H a , may be correct. In this case the status quo stands.
  • When you "cannot reject H 0 ", it does not mean that you should believe that H 0 is true. It simply means that the sample data have failed to provide sufficient evidence to cast serious doubt about the truthfulness of H 0 . Remember that the null is the status quo and it takes high probability to overthrow the status quo. This bias in favor of the null hypothesis is what gives rise to the statement "tyranny of the status quo" when discussing hypothesis testing and the scientific method.
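The p-value rule above can be sketched for a two-tailed z-test; the helper name is invented, and `NormalDist` stands in for the standard normal table:

```python
from statistics import NormalDist

def two_tailed_p_value(z_c):
    """Probability of a sample mean at least as far from the hypothesized
    value as the one observed, in either tail."""
    return 2 * (1 - NormalDist().cdf(abs(z_c)))

alpha = 0.05
p_value = two_tailed_p_value(2.0)   # Zc = 2.0 from a hypothetical sample
print(round(p_value, 4))            # about 0.0455
print("cannot accept H0" if alpha > p_value else "cannot reject H0")
```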

Both decision rules will result in the same decision and it is a matter of preference which one is used.

One and Two-tailed Tests

The discussion of Figure 9.3 - Figure 9.5 was based on the null and alternative hypothesis presented in Figure 9.3 . This was called a two-tailed test because the alternative hypothesis allowed that the mean could have come from a population which was either larger or smaller than the hypothesized mean in the null hypothesis. This could be seen by the statement of the alternative hypothesis as μ ≠ 100, in this example.

It may be that the analyst is concerned only about the value being "too high," or only about it being "too low," relative to the hypothesized value. If this is the case, it becomes a one-tailed test and all of the alpha probability is placed in just one tail and not split into α/2 as in the above case of a two-tailed test. Any test of a claim will be a one-tailed test. For example, a car manufacturer claims that their Model 17B provides gas mileage of greater than 25 miles per gallon. The null and alternative hypothesis would be:

  • H 0 : µ ≤ 25
  • H a : µ > 25

The claim would be in the alternative hypothesis. The burden of proof in hypothesis testing is carried in the alternative. This is because the null hypothesis, the status quo, can be overthrown only with 90 or 95 percent confidence that it cannot be maintained. Said another way, we want to have only a 5 or 10 percent probability of making a Type I error, rejecting a good null; overthrowing the status quo.

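The mileage claim can be sketched as a one-tailed z-test; the sample numbers below are hypothetical, invented purely for illustration:

```python
from statistics import NormalDist

# Hypothetical sample for H0: mu <= 25 vs Ha: mu > 25
mu_0, x_bar, s, n, alpha = 25, 25.6, 2.0, 50, 0.05

z_c = (x_bar - mu_0) / (s / n ** 0.5)     # test statistic (s for sigma, n >= 30)
z_crit = NormalDist().inv_cdf(1 - alpha)  # all of alpha in the upper tail
print(round(z_c, 2), round(z_crit, 3))    # 2.12 1.645
print("reject H0" if z_c > z_crit else "do not reject H0")
```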

Figure 9.6 shows the two possible cases and the form of the null and alternative hypothesis that give rise to them.

where μ0 is the hypothesized value of the population mean.

Sample size | Test statistic
n < 30 (σ unknown) | tc = (x̄ − μ0) / (s/√n)
n < 30 (σ known) | Zc = (x̄ − μ0) / (σ/√n)
n ≥ 30 (σ unknown) | Zc = (x̄ − μ0) / (s/√n)
n ≥ 30 (σ known) | Zc = (x̄ − μ0) / (σ/√n)

Effects of Sample Size on Test Statistic

In developing the confidence intervals for the mean from a sample, we found that most often we would not have the population standard deviation, σ. If the sample size were less than 30, we could simply substitute the point estimate for σ, the sample standard deviation, s, and use the student's t -distribution to correct for this lack of information.

When testing hypotheses we are faced with this same problem and the solution is exactly the same. Namely: If the population standard deviation is unknown, and the sample size is less than 30, substitute s, the point estimate for the population standard deviation, σ, in the formula for the test statistic and use the student's t -distribution. All the formulas and figures above are unchanged except for this substitution and changing the Z distribution to the student's t -distribution on the graph. Remember that the student's t -distribution can only be computed knowing the proper degrees of freedom for the problem. In this case, the degrees of freedom is computed as before with confidence intervals: df = (n-1). The calculated t-value is compared to the t-value associated with the pre-set level of confidence required in the test, t α , df found in the student's t tables. If we do not know σ, but the sample size is 30 or more, we simply substitute s for σ and use the normal distribution.
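These rules can be summarized in a small helper (a sketch; the function name is invented):

```python
def choose_test(n, sigma_known):
    """Return the statistic/distribution prescribed by the rules above."""
    if sigma_known:
        return "Z (normal distribution)"               # sigma known: always Z
    if n < 30:
        return f"t with df = {n - 1}"                  # sigma unknown, small sample
    return "Z (normal distribution, substituting s)"   # sigma unknown, n >= 30

print(choose_test(20, sigma_known=False))   # t with df = 19
print(choose_test(20, sigma_known=True))    # Z (normal distribution)
print(choose_test(100, sigma_known=False))  # Z (normal distribution, substituting s)
```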

Table 9.5 summarizes these rules.

A Systematic Approach for Testing a Hypothesis

A systematic approach to hypothesis testing proceeds through the following steps, in this order. This template will work for all hypotheses that you will ever test.

  • Set up the null and alternative hypotheses. This is typically the hardest part of the process. Here the question being asked is reviewed. What parameter is being tested: a mean, a proportion, differences in means, etc.? Is this a one-tailed or a two-tailed test?
  • Decide the level of significance required for this particular case and determine the critical value, which can be found in the appropriate statistical table. The levels of confidence typical for business are 80, 90, 95, 98, and 99 percent. However, the level of significance is a policy decision and should be based upon the risk of making a Type I error, rejecting a good null. Consider the consequences of making a Type I error. Next, on the basis of the hypotheses and sample size, select the appropriate test statistic and find the relevant critical value: Zα, tα, etc. Drawing the relevant probability distribution and marking the critical value is always a big help. Be sure to match the graph with the hypothesis, especially if it is a one-tailed test.
  • Take a sample (or samples) and calculate the relevant statistics: sample mean, standard deviation, or proportion. Using the formula for the test statistic from step 2, calculate the test statistic for this particular case using the statistics you have just calculated.
  • Compare the calculated test statistic with the critical value. If the test statistic is in the tail: cannot accept the null; the probability that this sample mean (proportion) came from the hypothesized distribution is too small to believe that it is the real home of these sample data. If the test statistic is not in the tail: cannot reject the null; the sample data are compatible with the hypothesized population parameter.
  • Reach a conclusion. It is best to articulate the conclusion two different ways. First, a formal statistical conclusion such as "With a 5% level of significance we cannot accept the null hypothesis that the population mean is equal to XX (units of measurement)". The second statement of the conclusion is less formal and states the action, or lack of action, required. If the formal conclusion was that above, then the informal one might be, "The machine is broken and we need to shut it down and call for repairs".
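The whole template can be sketched end to end for a two-tailed z-test of a mean (all names and data below are hypothetical; `NormalDist` replaces the printed Z table):

```python
from statistics import NormalDist, mean

def z_test_mean(sample, mu_0, sigma, alpha=0.05):
    """Walk the template for a two-tailed z-test of a mean:
    compute the statistic, compare with the critical value, decide."""
    n = len(sample)
    x_bar = mean(sample)
    z_c = (x_bar - mu_0) / (sigma / n ** 0.5)
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    verdict = "cannot accept H0" if abs(z_c) >= z_crit else "cannot reject H0"
    return x_bar, z_c, verdict

sample = [101, 99, 104, 102, 98, 103, 100, 105, 97, 106]
x_bar, z_c, verdict = z_test_mean(sample, mu_0=100, sigma=3)
print(x_bar, round(z_c, 2), verdict)  # 101.5 1.58 cannot reject H0
```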

All hypotheses tested will go through this same process. The only changes are the relevant formulas and those are determined by the hypothesis required to answer the original question.


Want to cite, share, or modify this book? This book uses the Creative Commons Attribution License and you must attribute OpenStax.

Access for free at https://openstax.org/books/introductory-business-statistics-2e/pages/1-introduction
  • Authors: Alexander Holmes, Barbara Illowsky, Susan Dean
  • Publisher/website: OpenStax
  • Book title: Introductory Business Statistics 2e
  • Publication date: Dec 13, 2023
  • Location: Houston, Texas
  • Book URL: https://openstax.org/books/introductory-business-statistics-2e/pages/1-introduction
  • Section URL: https://openstax.org/books/introductory-business-statistics-2e/pages/9-3-probability-distribution-needed-for-hypothesis-testing

© Jul 18, 2024 OpenStax. Textbook content produced by OpenStax is licensed under a Creative Commons Attribution License . The OpenStax name, OpenStax logo, OpenStax book covers, OpenStax CNX name, and OpenStax CNX logo are not subject to the Creative Commons license and may not be reproduced without the prior and express written consent of Rice University.

Module 9: Hypothesis Testing With One Sample

Distribution Needed for Hypothesis Testing: Learning Outcomes

  • Conduct and interpret hypothesis tests for a single population mean, population standard deviation known
  • Conduct and interpret hypothesis tests for a single population mean, population standard deviation unknown

Earlier in the course, we discussed sampling distributions. Particular distributions are associated with hypothesis testing. Perform tests of a population mean using a normal distribution or a Student's t-distribution. (Remember, use a Student's t-distribution when the population standard deviation is unknown and the distribution of the sample mean is approximately normal.) We perform tests of a population proportion using a normal distribution (usually when the sample size n is large).

If you are testing a  single population mean , the distribution for the test is for means :

[latex]\displaystyle\overline{{X}}\text{~}{N}{\left(\mu_{{X}}\text{ , }\frac{{\sigma_{{X}}}}{\sqrt{{n}}}\right)}{\quad\text{or}\quad}{t}_{{{d}{f}}}[/latex]

The population parameter is [latex]\mu[/latex]. The estimated value (point estimate) for [latex]\mu[/latex] is [latex]\displaystyle\overline{{x}}[/latex], the sample mean.

If you are testing a  single population proportion , the distribution for the test is for proportions or percentages:

[latex]\displaystyle{P}^{\prime}\text{~}{N}{\left({p}\text{ , }\sqrt{{\frac{{{p}{q}}}{{n}}}}\right)}[/latex]

The population parameter is [latex]p[/latex]. The estimated value (point estimate) for [latex]p[/latex] is p′ . [latex]\displaystyle{p}\prime=\frac{{x}}{{n}}[/latex] where [latex]x[/latex] is the number of successes and [latex]n[/latex] is the sample size.


Concept Review

In order for a hypothesis test’s results to be generalized to a population, certain requirements must be satisfied.

When testing for a single population mean:

  • A Student’s t -test should be used if the data come from a simple, random sample and the population is approximately normally distributed, or the sample size is large, with an unknown standard deviation.
  • The normal test will work if the data come from a simple, random sample and the population is approximately normally distributed, or the sample size is large, with a known standard deviation.

When testing a single population proportion, use a normal test if the data come from a simple random sample, fulfill the requirements for a binomial distribution, and the mean number of successes and the mean number of failures satisfy the conditions np > 5 and nq > 5, where n is the sample size, p is the probability of a success, and q is the probability of a failure.

Formula Review

If there is no given preconceived  α , then use α = 0.05.

Types of Hypothesis Tests

  • Single population mean, known population variance (or standard deviation): Normal test .
  • Single population mean, unknown population variance (or standard deviation): Student’s t -test .
  • Single population proportion: Normal test .
  • For a single population mean , we may use a normal distribution with the following mean and standard deviation. Means: [latex]\displaystyle\mu=\mu_{{\overline{{x}}}}{\quad\text{and}\quad}\sigma_{{\overline{{x}}}}=\frac{{\sigma_{{x}}}}{\sqrt{{n}}}[/latex]
  • For a single population proportion , we may use a normal distribution with the following mean and standard deviation. Proportions: [latex]\displaystyle\mu={p}{\quad\text{and}\quad}\sigma=\sqrt{{\frac{{{p}{q}}}{{n}}}}[/latex].
  • Distribution Needed for Hypothesis Testing. Provided by: OpenStax. License: CC BY: Attribution
  • Introductory Statistics. Authored by: Barbara Illowsky, Susan Dean. Provided by: OpenStax. Located at: http://cnx.org/contents/[email protected]. License: CC BY: Attribution. License Terms: Download for free at http://cnx.org/contents/[email protected]

Section 2: Hypothesis Testing


In the previous section, we developed statistical methods, primarily in the form of confidence intervals, for answering the question "what is the value of the parameter \(\theta\)?" In this section, we'll learn how to answer a slightly different question, namely "is the value of the parameter \(\theta\) such and such?" For example, rather than attempting to estimate \(\mu\), the mean body temperature of adults, we might be interested in testing whether \(\mu\), the mean body temperature of adults, is really 37 degrees Celsius. We'll attempt to answer such questions using a statistical method known as hypothesis testing .

We'll derive good hypothesis tests for the usual population parameters, including:

  • a population mean \(\mu\)
  • the difference in two population means, \(\mu_1-\mu_2\), say
  • a population variance \(\sigma^2\)
  • the ratio of two population variances, \(\dfrac{\sigma^2_1}{\sigma^2_2}\), say
  • a population proportion \(p\)
  • the difference in two population proportions, \(p_1-p_2\), say
  • three (or more!) means, \(\mu_1, \mu_2\), and \(\mu_3\), say

We'll also work on deriving good hypothesis tests for the slope parameter \(\beta\) of a least-squares regression line through a set of \((x,y)\) data points, as well as the corresponding population correlation coefficient \(\rho\).

Lesson 9: Tests About Proportions

We'll start our exploration of hypothesis tests by focusing on population proportions. Specifically, we'll derive the methods used for testing:

  • whether a single population proportion \(p\) equals a particular value, \(p_0\)
  • whether the difference in two population proportions \(p_1-p_2\) equals a particular value \(p_0\), say, with the most common value being 0

This allows us to test whether two populations' proportions are equal. Along the way, we'll learn two different approaches to hypothesis testing, one being the critical value approach and one being the \(p\)-value approach.

9.1 - The Basic Idea

Every time we perform a hypothesis test, this is the basic procedure that we will follow:

  • We'll make an initial assumption about the population parameter.
  • We'll collect evidence or else use somebody else's evidence (in either case, our evidence will come in the form of data).
  • Based on the available evidence (data), we'll decide whether to " reject " or " not reject " our initial assumption.

Let's try to make this outlined procedure more concrete by taking a look at the following example.

Example 9-1


A four-sided (tetrahedral) die is tossed 1000 times, and 290 fours are observed. Is there evidence to conclude that the die is biased, that is, say, that more fours than expected are observed?

As the basic hypothesis testing procedure outlined above indicates, the first step involves stating an initial assumption. It is:

Assume the die is unbiased. If the die is unbiased, then each side (1, 2, 3, and 4) is equally likely. So, we'll assume that p, the probability of getting a 4, is 0.25.

In general, the initial assumption is called the null hypothesis , and is denoted \(H_0\). (That's a zero in the subscript for "null"). In statistical notation, we write the initial assumption as:

\(H_0 \colon p=0.25\)

That is, the initial assumption involves making a statement about a population proportion.

Now, the second step tells us that we need to collect evidence (data) for or against our initial assumption. In this case, that's already been done for us. We were told that the die was tossed \(n=1000\) times, and \(y=290\) fours were observed. Using statistical notation again, we write the collected evidence as a sample proportion:

\(\hat{p}=\dfrac{y}{n}=\dfrac{290}{1000}=0.29\)

Now we just need to complete the third step of making the decision about whether or not to reject our initial assumption that the population proportion is 0.25. Recall that the Central Limit Theorem tells us that the sample proportion:

\(\hat{p}=\dfrac{Y}{n}\)

is approximately normally distributed with (assumed) mean:

\(p_0=0.25\)

and (assumed) standard deviation:

\(\sqrt{\dfrac{p_0(1-p_0)}{n}}=\sqrt{\dfrac{0.25(0.75)}{1000}}=0.01369\)

That means that:

\(Z=\dfrac{\hat{p}-p_0}{\sqrt{\dfrac{p_0(1-p_0)}{n}}}\)

follows a standard normal \(N(0,1)\) distribution. So, we can "translate" our observed sample proportion of 0.290 onto the \(Z\) scale. Here's a picture that summarizes the situation:

So, we are assuming that the population proportion is 0.25 (in blue ), but we've observed a sample proportion of 0.290 (in red ) that falls way out in the right tail of the normal distribution. It certainly doesn't appear impossible to obtain a sample proportion of 0.29. But that's what we're left with deciding. That is, we have to decide if a sample proportion of 0.290 is more extreme than we'd expect if the population proportion \(p\) does indeed equal 0.25.

There are two approaches to making the decision:

  • one is called the " critical value " (or " critical region " or " rejection region ") approach
  • and the other is called the "\(p\) -value " approach

Until we get to the page in this lesson titled The \(p\)-value Approach, we'll use the critical value approach.

Example (continued)

Okay, so now let's think about it. We probably wouldn't reject our initial assumption that the population proportion \(p=0.25\) if our observed sample proportion were 0.255. And, we might still not be inclined to reject our initial assumption that the population proportion \(p=0.25\) if our observed sample proportion were 0.27. On the other hand, we would almost certainly want to reject our initial assumption that the population proportion \(p=0.25\) if our observed sample proportion were 0.35. That suggests, then, that there is some "threshold" value such that, once we cross it, we are inclined to reject our initial assumption. That is the critical value approach in a nutshell: define a threshold value, called a " critical value ," so that if our " test statistic " is more extreme than the critical value, then we reject the null hypothesis.

Let's suppose that we decide to reject the null hypothesis \(H_0:p=0.25\) in favor of the " alternative hypothesis " \(H_A \colon p>0.25\) if:

\(\hat{p}>0.273\) or equivalently if \(Z>1.645\)

Here's a picture of such a " critical region " (or " rejection region "):

Note, by the way, that the "size" of the critical region is 0.05. This will become apparent in a bit when we talk below about the possible errors that we can make whenever we conduct a hypothesis test.

At any rate, let's get back to deciding whether our particular sample proportion appears to be too extreme. Well, it looks like we should reject the null hypothesis (our initial assumption \(p=0.25\)) because:

\(\hat{p}=0.29>0.273\)

or equivalently since our test statistic:

\(Z=\dfrac{\hat{p}-p_0}{\sqrt{\dfrac{p_0(1-p_0)}{n}}}=\dfrac{0.29-0.25}{\sqrt{\dfrac{0.25(0.75)}{1000}}}=2.92\)

is greater than 1.645.

Our conclusion: we say there is sufficient evidence to conclude \(H_A:p>0.25\), that is, that the die is biased.
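The die calculation above can be reproduced in a few lines (a sketch using only the standard library):

```python
import math

# Example 9-1: n = 1000 tosses, y = 290 fours; H0: p = 0.25, Ha: p > 0.25
n, y, p0, z_crit = 1000, 290, 0.25, 1.645

p_hat = y / n
z = (p_hat - p0) / math.sqrt(p0 * (1 - p0) / n)  # test statistic
print(round(p_hat, 2), round(z, 2))  # 0.29 2.92
print("reject H0" if z > z_crit else "do not reject H0")
```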

By the way, this example involves what is called a one-tailed test , or more specifically, a right-tailed test , because the critical region falls in only one of the two tails of the normal distribution, namely the right tail.

Before we continue on the next page at looking at two more examples, let's revisit the basic hypothesis testing procedure that we outlined above. This time, though, let's state the procedure in terms of performing a hypothesis test for a population proportion using the critical value approach . The basic procedure is:

  • State the null hypothesis \(H_0\) and the alternative hypothesis \(H_A\). (By the way, some textbooks, including ours, use the notation \(H_1\) instead of \(H_A\) to denote the alternative hypothesis.)

  • Calculate the test statistic :

\(Z=\dfrac{\hat{p}-p_0}{\sqrt{\dfrac{p_0(1-p_0)}{n}}}\)

  • Determine the critical region .

  • Make a decision . Determine if the test statistic falls in the critical region. If it does, reject the null hypothesis. If it does not, do not reject the null hypothesis.

Now, back to those possible errors we can make when conducting such a hypothesis test.

Possible Errors

So, argh! Every time we conduct a hypothesis test, we have a chance of making an error. (Oh dear, why couldn't I have chosen a different profession?!)

If we reject the null hypothesis \(H_0\) (in favor of the alternative hypothesis \(H_A\)) when the null hypothesis is in fact true, we say we've committed a Type I error . For our example above, we set P (Type I error) equal to 0.05:

Aha! That's why the 0.05! We wanted to minimize our chance of making a Type I error! In general, we denote \(\alpha=P(\text{Type I error})=\) the " significance level of the test ." Obviously, we want to minimize \(\alpha\). Therefore, typical \(\alpha\) values are 0.01, 0.05, and 0.10.

If we fail to reject the null hypothesis when the null hypothesis is false, we say we've committed a Type II error . For our example, suppose (unknown to us) that the population proportion \(p\) is actually 0.27. Then, the probability of a Type II error, in this case, is:

\(P(\text{Type II Error})=P(\hat{p}<0.273 \text{ if } p=0.27)=P\left(Z<\dfrac{0.273-0.27}{\sqrt{\dfrac{0.27(0.73)}{1000}}}\right)=P(Z<0.214)=0.5847\)

In general, we denote \(\beta=P(\text{Type II error})\). Just as we want to minimize \(\alpha=P(\text{Type I error})\), we want to minimize \(\beta=P(\text{Type II error})\). Typical \(\beta\) values are 0.05, 0.10, and 0.20.
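As a quick numeric check (not part of the original lesson), the Type II error calculation above can be reproduced with Python's standard library; the 0.273 critical-region cutoff, the assumed true proportion 0.27, and n = 1000 are the example's numbers:

```python
from math import sqrt
from statistics import NormalDist

# Cutoff of the critical region from the example: reject H0 if p-hat >= 0.273
cutoff, p_true, n = 0.273, 0.27, 1000

# P(Type II error) = P(p-hat < cutoff) when the true proportion is p_true
se = sqrt(p_true * (1 - p_true) / n)   # standard error under p_true
z = (cutoff - p_true) / se             # standardize the cutoff, ≈ 0.214
beta = NormalDist().cdf(z)             # P(Z < z), ≈ 0.585 (the text's 0.5847 up to rounding)

print(round(z, 3), round(beta, 4))
```

Carrying a couple more digits than the text does, the answer differs only in the fourth decimal place.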

9.2 - More Examples

Let's take a look at two more examples of a hypothesis test for a single proportion while recalling the hypothesis testing procedure we outlined on the previous page:

  • State the null hypothesis \(H_0\) and the alternative hypothesis \(H_A\).

  • Calculate the test statistic .

  • Determine the critical region .

  • Make a decision .

The first example involves a hypothesis test for the proportion in which the alternative hypothesis is a "greater than hypothesis," that is, the alternative hypothesis is of the form \(H_A \colon p > p_0\). And, the second example involves a hypothesis test for the proportion in which the alternative hypothesis is a "less than hypothesis," that is, the alternative hypothesis is of the form \(H_A \colon p < p_0\).

Example 9-2

Let p equal the proportion of drivers who use a seat belt in a state that does not have a mandatory seat belt law. It was claimed that \(p = 0.14\). An advertising campaign was conducted to increase this proportion. Two months after the campaign, \(y = 104\) out of a random sample of \(n = 590\) drivers were wearing seat belts. Was the campaign successful?

The observed sample proportion is:

\(\hat{p}=\dfrac{104}{590}=0.176\)

Because it is claimed that \(p = 0.14\), the null hypothesis is:

\(H_0 \colon p = 0.14\)

Because we're interested in seeing if the advertising campaign was successful, that is, that a greater proportion of people wear seat belts, the alternative hypothesis is:

\(H_A \colon p > 0.14\)

The test statistic is therefore:

\(Z=\dfrac{\hat{p}-p_0}{\sqrt{\dfrac{p_0(1-p_0)}{n}}}=\dfrac{0.176-0.14}{\sqrt{\dfrac{0.14(0.86)}{590}}}=2.52\)

If we use a significance level of \(\alpha = 0.01\), then the critical region is:

That is, we reject the null hypothesis if the test statistic \(Z > 2.326\). Because the test statistic falls in the critical region, that is, because \(Z = 2.52 > 2.326\), we can reject the null hypothesis in favor of the alternative hypothesis. There is sufficient evidence at the \(\alpha = 0.01\) level to conclude the campaign was successful (\(p > 0.14\)).

Again, note that this is an example of a right-tailed hypothesis test because the action falls in the right tail of the normal distribution.
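If you'd like to verify the arithmetic, here is a minimal Python sketch of Example 9-2's right-tailed test; `NormalDist` from the standard library supplies the critical value:

```python
from math import sqrt
from statistics import NormalDist

# Example 9-2: y = 104 of n = 590 drivers wore seat belts
# H0: p = 0.14 versus HA: p > 0.14, at alpha = 0.01
y, n, p0, alpha = 104, 590, 0.14, 0.01

p_hat = y / n                               # observed sample proportion, ≈ 0.176
z = (p_hat - p0) / sqrt(p0 * (1 - p0) / n)  # test statistic (≈ 2.52 when p-hat is rounded first, as in the text)
z_crit = NormalDist().inv_cdf(1 - alpha)    # right-tail critical value, ≈ 2.326

print(z > z_crit)   # True: reject H0, the campaign appears successful
```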

Example 9-3

A Gallup poll released on October 13, 2000, found that 47% of the 1052 U.S. adults surveyed classified themselves as "very happy" when given the choices of:

  • "very happy"
  • "fairly happy"
  • "not too happy"

Suppose that a journalist who is a pessimist took advantage of this poll to write a headline titled "Poll finds that U.S. adults who are very happy are in the minority." Is the pessimistic journalist's headline warranted?

The sample proportion is:

\(\hat{p}=0.47\)

Because we're interested in the majority/minority boundary line, the null hypothesis is:

\(H_0 \colon p = 0.50\)

Because the journalist claims that the proportion of very happy U.S. adults is a minority, that is, less than 0.50, the alternative hypothesis is:

\(H_A \colon p < 0.50\)

The test statistic is therefore:

\(Z=\dfrac{\hat{p}-p_0}{\sqrt{\dfrac{p_0(1-p_0)}{n}}}=\dfrac{0.47-0.50}{\sqrt{\dfrac{0.50(0.50)}{1052}}}=-1.946\)

Now, this time, we need to put our critical region in the left tail of the normal distribution. If we use a significance level of \(\alpha = 0.05\), then the critical region is:

That is, we reject the null hypothesis if the test statistic \(Z < −1.645\). Because the test statistic falls in the critical region, that is, because \(Z = −1.946 < −1.645\), we can reject the null hypothesis in favor of the alternative hypothesis. There is sufficient evidence at the \(\alpha = 0.05\) level to conclude that \(p < 0.50\), that is, U.S. adults who are very happy are in the minority. The journalist's pessimism appears to be indeed warranted.

Note that this is an example of a left-tailed hypothesis test because the action falls in the left tail of the normal distribution.

9.3 - The P-Value Approach

Example 9-4

Up until now, we have used the critical region approach in conducting our hypothesis tests. Now, let's take a look at an example in which we use what is called the P -value approach .

Among patients with lung cancer, usually, 90% or more die within three years. As a result of new forms of treatment, it is felt that this rate has been reduced. In a recent study of n = 150 lung cancer patients, y = 128 died within three years. Is there sufficient evidence at the \(\alpha = 0.05\) level, say, to conclude that the death rate due to lung cancer has been reduced?

The observed sample proportion is:

\(\hat{p}=\dfrac{128}{150}=0.853\)

The null and alternative hypotheses are:

\(H_0 \colon p = 0.90\) and \(H_A \colon p < 0.90\)

The test statistic is, therefore:

\(Z=\dfrac{\hat{p}-p_0}{\sqrt{\dfrac{p_0(1-p_0)}{n}}}=\dfrac{0.853-0.90}{\sqrt{\dfrac{0.90(0.10)}{150}}}=-1.92\)

And, the rejection region is:

Since the test statistic Z = −1.92 < −1.645, we reject the null hypothesis. There is sufficient evidence at the \(\alpha = 0.05\) level to conclude that the rate has been reduced.

Example 9-4 (continued)

What if we set the significance level \(\alpha\) = P (Type I Error) to 0.01? Is there still sufficient evidence to conclude that the death rate due to lung cancer has been reduced?

In this case, with \(\alpha = 0.01\), the rejection region is Z ≤ −2.33. That is, we reject if the test statistic falls in the rejection region defined by Z ≤ −2.33:

Because the test statistic Z = −1.92 > −2.33, we do not reject the null hypothesis. There is insufficient evidence at the \(\alpha = 0.01\) level to conclude that the rate has been reduced.

In the first part of this example, we rejected the null hypothesis when \(\alpha = 0.05\). And, in the second part of this example, we failed to reject the null hypothesis when \(\alpha = 0.01\). There must be some level of \(\alpha\), then, in which we cross the threshold from rejecting to not rejecting the null hypothesis. What is the smallest \(\alpha \text{ -level}\) that would still cause us to reject the null hypothesis?

We would, of course, reject any time the test statistic −1.92 fell at or below the critical value:

That is, we would reject if the critical value were −1.645, −1.83, or −1.92. But, we wouldn't reject if the critical value were −1.93. The \(\alpha\text{-level}\) associated with the test statistic −1.92 is called the P -value . It is the smallest \(\alpha\text{-level}\) that would lead to rejection. In this case, the P -value is:

P ( Z < −1.92) = 0.0274
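This tail-area calculation is easy to reproduce with Python's standard library (a sketch using the example's test statistic of −1.92):

```python
from statistics import NormalDist

z = -1.92
p_value = NormalDist().cdf(z)  # P(Z < -1.92), the left-tail area
print(round(p_value, 4))       # 0.0274

# The decision at each significance level follows immediately:
print(p_value <= 0.05)   # True: reject at alpha = 0.05
print(p_value <= 0.01)   # False: do not reject at alpha = 0.01
```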

So far, all of the examples we've considered have involved a one-tailed hypothesis test in which the alternative hypothesis involved either a less than (<) or a greater than (>) sign. What happens if we weren't sure of the direction in which the proportion could deviate from the hypothesized null value? That is, what if the alternative hypothesis involved a not-equal sign (≠)? Let's take a look at an example.

What if we wanted to perform a " two-tailed " test? That is, what if we wanted to test:

\(H_0 \colon p = 0.90\) versus \(H_A \colon p \ne 0.90\)

at the \(\alpha = 0.05\) level?

Let's first consider the critical value approach . If we allow for the possibility that the sample proportion could either prove to be too large or too small, then we need to specify a threshold value, that is, a critical value, in each tail of the distribution. In this case, we divide the " significance level " \(\alpha\) by 2 to get \(\alpha/2\):

That is, our rejection rule is that we should reject the null hypothesis \(H_0 \text{ if } Z ≥ 1.96\) or we should reject the null hypothesis \(H_0 \text{ if } Z ≤ −1.96\). Alternatively, we can write that we should reject the null hypothesis \(H_0 \text{ if } |Z| ≥ 1.96\). Because our test statistic is −1.92, we just barely fail to reject the null hypothesis, because 1.92 < 1.96. In this case, we would say that there is insufficient evidence at the \(\alpha = 0.05\) level to conclude that the sample proportion differs significantly from 0.90.

Now for the P -value approach . Again, needing to allow for the possibility that the sample proportion is either too large or too small, we multiply the P -value we obtain for the one-tailed test by 2:

That is, the P -value is:

\(P=P(|Z|\geq 1.92)=P(Z>1.92 \text{ or } Z<-1.92)=2 \times 0.0274=0.055\)

Because the P -value 0.055 is (just barely) greater than the significance level \(\alpha = 0.05\), we barely fail to reject the null hypothesis. Again, we would say that there is insufficient evidence at the \(\alpha = 0.05\) level to conclude that the sample proportion differs significantly from 0.90.

Let's close this example by formalizing the definition of a P -value, as well as summarizing the P -value approach to conducting a hypothesis test.

The P -value is the smallest significance level \(\alpha\) that leads us to reject the null hypothesis.

Alternatively (and the way I prefer to think of P -values), the P -value is the probability that we'd observe a statistic as extreme as, or more extreme than, the one we did observe if the null hypothesis were true.

If the P -value is small, that is, if \(P ≤ \alpha\), then we reject the null hypothesis \(H_0\).

By the way, to test \(H_0 \colon p = p_0\), some statisticians will use the test statistic:

\(Z=\dfrac{\hat{p}-p_0}{\sqrt{\dfrac{\hat{p}(1-\hat{p})}{n}}}\)

rather than the one we've been using:

\(Z=\dfrac{\hat{p}-p_0}{\sqrt{\dfrac{p_0(1-p_0)}{n}}}\)

One advantage of doing so is that the interpretation of the confidence interval — does it contain \(p_0\)? — is always consistent with the hypothesis test decision, as illustrated here:

For the sake of ease, let:

\(se(\hat{p})=\sqrt{\dfrac{\hat{p}(1-\hat{p})}{n}}\)

Two-tailed test. In this case, the critical region approach tells us to reject the null hypothesis \(H_0 \colon p = p_0\) against the alternative hypothesis \(H_A \colon p \ne p_0\):

if \(Z=\dfrac{\hat{p}-p_0}{se(\hat{p})} \geq z_{\alpha/2}\) or if \(Z=\dfrac{\hat{p}-p_0}{se(\hat{p})} \leq -z_{\alpha/2}\)

which is equivalent to rejecting the null hypothesis:

if \(\hat{p}-p_0 \geq z_{\alpha/2}se(\hat{p})\) or if \(\hat{p}-p_0 \leq -z_{\alpha/2}se(\hat{p})\)

if \(p_0 \geq \hat{p}+z_{\alpha/2}se(\hat{p})\) or if \(p_0 \leq \hat{p}-z_{\alpha/2}se(\hat{p})\)

That's the same as saying that we should reject the null hypothesis \(H_0 \text{ if } p_0\) is not in the \(\left(1-\alpha\right)100\%\) confidence interval!

Left-tailed test. In this case, the critical region approach tells us to reject the null hypothesis \(H_0 \colon p = p_0\) against the alternative hypothesis \(H_A \colon p < p_0\):

if \(Z=\dfrac{\hat{p}-p_0}{se(\hat{p})} \leq -z_{\alpha}\)

if \(\hat{p}-p_0 \leq -z_{\alpha}se(\hat{p})\)

if \(p_0 \geq \hat{p}+z_{\alpha}se(\hat{p})\)

That's the same as saying that we should reject the null hypothesis \(H_0 \text{ if } p_0\) is not in the upper \(\left(1-\alpha\right)100\%\) confidence interval:

\((0,\hat{p}+z_{\alpha}se(\hat{p}))\)
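A short Python sketch can confirm the claimed equivalence numerically. It reuses Example 9-4's numbers (p-hat = 128/150, p₀ = 0.90, two-tailed test at α = 0.05) with the p-hat-based standard error:

```python
from math import sqrt
from statistics import NormalDist

# Two-tailed test of H0: p = p0 using the p-hat-based standard error,
# checked against the (1 - alpha)100% confidence interval.
p_hat, n, p0, alpha = 128 / 150, 150, 0.90, 0.05

se = sqrt(p_hat * (1 - p_hat) / n)
z = (p_hat - p0) / se
z_crit = NormalDist().inv_cdf(1 - alpha / 2)   # ≈ 1.96

reject = abs(z) >= z_crit
lo, hi = p_hat - z_crit * se, p_hat + z_crit * se
outside_ci = not (lo < p0 < hi)

# The two decisions always agree when the same (p-hat-based) se is used.
# Here both are False: we fail to reject, and p0 = 0.90 lies inside the CI.
print(reject, outside_ci)
```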

9.4 - Comparing Two Proportions

So far, all of our examples involved testing whether a single population proportion p equals some value \(p_0\). Now, let's turn our attention for a bit towards testing whether one population proportion \(p_1\) equals a second population proportion \(p_2\). Additionally, most of our examples thus far have involved left-tailed tests in which the alternative hypothesis involved \(H_A \colon p < p_0\) or right-tailed tests in which the alternative hypothesis involved \(H_A \colon p > p_0\). Here, let's consider an example that tests the equality of two proportions against the alternative that they are not equal. Using statistical notation, we'll test:

\(H_0 \colon p_1 = p_2\) versus \(H_A \colon p_1 \ne p_2\)

Example 9-5

Time magazine reported the result of a telephone poll of 800 adult Americans. The question posed of the Americans who were surveyed was: "Should the federal tax on cigarettes be raised to pay for health care reform?" The results of the survey were:

Non-Smokers

\(n_1 = 605\)
\(y_1 = 351 \text { said "yes"}\)
\(\hat{p}_1 = \dfrac{351}{605} = 0.58\)

Smokers

\(n_2 = 195\)
\(y_2 = 41 \text { said "yes"}\)
\(\hat{p}_2 = \dfrac{41}{195} = 0.21\)

Is there sufficient evidence at the \(\alpha = 0.05\) level, say, to conclude that the two populations — smokers and non-smokers — differ significantly with respect to their opinions?

If \(p_1\) = the proportion of the non-smoker population who reply "yes" and \(p_2\) = the proportion of the smoker population who reply "yes," then we are interested in testing the null hypothesis:

\(H_0 \colon p_1 = p_2\)

against the alternative hypothesis:

\(H_A \colon p_1 \ne p_2\)

Before we can actually conduct the hypothesis test, we'll have to derive the appropriate test statistic.

The test statistic for testing the difference in two population proportions, that is, for testing the null hypothesis \(H_0:p_1-p_2=0\) is:

\(Z=\dfrac{(\hat{p}_1-\hat{p}_2)-0}{\sqrt{\hat{p}(1-\hat{p})\left(\dfrac{1}{n_1}+\dfrac{1}{n_2}\right)}}\)

where:

\(\hat{p}=\dfrac{Y_1+Y_2}{n_1+n_2}\)

is the proportion of "successes" in the two samples combined.

Recall that:

\(\hat{p}_1-\hat{p}_2\)

is approximately normally distributed with mean:

\(p_1-p_2\)

and variance:

\(\dfrac{p_1(1-p_1)}{n_1}+\dfrac{p_2(1-p_2)}{n_2}\)

But, if we assume that the null hypothesis is true, then the population proportions equal some common value p , say, that is, \(p_1 = p_2 = p\). In that case, the variance becomes:

\(p(1-p)\left(\dfrac{1}{n_1}+\dfrac{1}{n_2}\right)\)

So, under the assumption that the null hypothesis is true, we have that:

\( {\displaystyle Z=\frac{\left(\hat{p}_{1}-\hat{p}_{2}\right)- \color{blue}\overbrace{\color{black}\left(p_{1}-p_{2}\right)}^0}{\sqrt{p(1-p)\left(\frac{1}{n_{1}}+\frac{1}{n_{2}}\right)}} } \)

follows (at least approximately) the standard normal N (0,1) distribution. Since we don't know the (assumed) common population proportion p any more than we know the proportions \(p_1\) and \(p_2\) of each population, we can estimate p using:

\(\hat{p}=\dfrac{Y_1+Y_2}{n_1+n_2}\)

that is, the proportion of "successes" in the two samples combined. And, hence, our test statistic becomes:

\(Z=\dfrac{(\hat{p}_1-\hat{p}_2)-0}{\sqrt{\hat{p}(1-\hat{p})\left(\dfrac{1}{n_1}+\dfrac{1}{n_2}\right)}}\)

as was to be proved.

Example 9-5 (continued)

Non-Smokers

\(n_1 = 605\)
\(y_1 = 351 \text { said "yes"}\)
\(\hat{p}_1 = \dfrac{351}{605} = 0.58\)

Smokers

\(n_2 = 195\)
\(y_2 = 41 \text { said "yes"}\)
\(\hat{p}_2 = \dfrac{41}{195} = 0.21\)

The overall sample proportion is:

\(\hat{p}=\dfrac{41+351}{195+605}=\dfrac{392}{800}=0.49\)

That implies then that the test statistic for testing:

\(H_0:p_1=p_2\) versus \(H_A:p_1 \neq p_2\)

is:

\(Z=\dfrac{(0.58-0.21)-0}{\sqrt{0.49(0.51)\left(\dfrac{1}{195}+\dfrac{1}{605}\right)}}=8.99\)

Errr.... that Z -value is off the charts, so to speak. Let's go through the formalities anyway, making the decision first using the rejection region approach, and then using the P -value approach. Putting half of the rejection region in each tail, we have:

That is, we reject the null hypothesis \(H_0\) if \(Z ≥ 1.96\) or if \(Z ≤ −1.96\). We clearly reject \(H_0\), since 8.99 falls in the "red zone," that is, 8.99 is (much) greater than 1.96. There is sufficient evidence at the 0.05 level to conclude that the two populations differ with respect to their opinions concerning imposing a federal tax to help pay for health care reform.

Now for the P -value approach:

That is, the P -value is less than 0.0001. Because \(P < 0.0001 ≤ \alpha = 0.05\), we reject the null hypothesis. Again, there is sufficient evidence at the 0.05 level to conclude that the two populations differ with respect to their opinions concerning imposing a federal tax to help pay for health care reform.

Thankfully, as should always be the case, the two approaches... the critical value approach and the P -value approach... lead to the same conclusion.
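For readers who want to reproduce Example 9-5, here is a minimal Python sketch of the pooled two-proportion Z-test (the numbers are the survey counts from the example):

```python
from math import sqrt
from statistics import NormalDist

# Example 9-5: non-smokers y1/n1 = 351/605, smokers y2/n2 = 41/195
y1, n1, y2, n2 = 351, 605, 41, 195

p1, p2 = y1 / n1, y2 / n2
p_pool = (y1 + y2) / (n1 + n2)                        # pooled estimate of the common p, = 0.49
se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))  # pooled standard error under H0
z = (p1 - p2) / se

p_value = 2 * (1 - NormalDist().cdf(abs(z)))          # two-tailed p-value
print(round(z, 2))       # 8.99
print(p_value < 0.0001)  # True: "off the charts"
```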

For testing \(H_0 \colon p_1 = p_2\), some statisticians use the test statistic:

\(Z=\dfrac{(\hat{p}_1-\hat{p}_2)-0}{\sqrt{\dfrac{\hat{p}_1(1-\hat{p}_1)}{n_1}+\dfrac{\hat{p}_2(1-\hat{p}_2)}{n_2}}}\)

instead of the one we used:

\(Z=\dfrac{(\hat{p}_1-\hat{p}_2)-0}{\sqrt{\hat{p}(1-\hat{p})\left(\dfrac{1}{n_1}+\dfrac{1}{n_2}\right)}}\)

An advantage of doing so is again that the interpretation of the confidence interval — does it contain 0? — is always consistent with the hypothesis test decision.

9.5 - Using Minitab

Hypothesis test for a single proportion.

To illustrate how to tell Minitab to perform a Z -test for a single proportion, let's refer to the lung cancer example that appeared on the page called The P -Value Approach.

Under the Stat menu, select Basic Statistics , and then 1 Proportion... :

In the pop-up window that appears, click on the radio button labeled Summarized data . In the box labeled Number of events , type in the number of successes or events of interest, and in the box labeled Number of trials , type in the sample size n. Click on the box labeled Perform hypothesis test , and in the box labeled Hypothesized proportion , type in the value of the proportion assumed in the null hypothesis:

Click on the button labeled Options... In the pop-up window that appears, for the box labeled Alternative , select either less than , greater than , or not equal depending on the direction of the alternative hypothesis. Click on the box labeled Use test and interval based on normal distribution :

Then, click OK to return to the main pop-up window.

Then, upon clicking OK on the main pop-up window, the output should appear in the Session window:

Test of P = 0.9 vs p < 0.9
Sample X N Sample P 95% Upper Bound Z-Value P-Value
1 128 150 0.853333   0.900846 -1.91 0.028

Using the normal approximation.

As you can see, Minitab reports not only the value of the test statistic ( Z = −1.91) but also the P -value (0.028) and the 95% confidence interval (one-sided in this case, because of the one-sided hypothesis).

Hypothesis Test for Comparing Two Proportions

To illustrate how to tell Minitab to perform a Z -test for comparing two population proportions, let's refer to the smoker survey example that appeared on the page called Comparing Two Proportions.

Under the Stat menu, select Basic Statistics , and then 2 Proportions... :

In the pop-up window that appears, click on the radio button labeled Summarized data . In the boxes labeled Events , type in the number of successes or events of interest for both the First and Second samples. And in the boxes labeled Trials , type in the size \(n_1\) of the First sample and the size \(n_2\) of the Second sample:

Click on the button labeled Options... In the pop-up window that appears, in the box labeled Test difference , type in the assumed value of the difference in the proportions that appears in the null hypothesis. The default value is 0.0, the value most commonly assumed, as it means that we are interested in testing for the equality of the population proportions. For the box labeled Alternative , select either less than , greater than , or not equal depending on the direction of the alternative hypothesis. Click on the box labeled Use pooled estimate of p for test :

Sample X N Sample P
1 351 605 0.580165
2 41 195 0.210256

Difference = p (1) - p (2)
Estimate for difference:  0.369909
95% CI for difference:  (0.300499, 0.439319)
Test for difference = 0 (vs not = 0):  Z = 8.99  P-Value = 0.000
Fisher's exact test:  P-Value = 0.000

Again, as you can see, Minitab reports not only the value of the test statistic ( Z = 8.99) but other useful things as well, including the P -value, which in this case is so small as to be deemed to be 0.000 to three digits. For scientific reporting purposes, we would typically write that as P < 0.0001.

Lesson 10: Tests About One Mean

In this lesson, we'll continue our investigation of hypothesis testing. In this case, we'll focus our attention on a hypothesis test for a population mean \(\mu\) for three situations:

  • a hypothesis test based on the normal distribution for the mean \(\mu\) for the completely unrealistic situation that the population variance \(\sigma^2\) is known
  • a hypothesis test based on the \(t\)-distribution for the mean \(\mu\) for the (much more) realistic situation that the population variance \(\sigma^2\) is unknown
  • a hypothesis test based on the \(t\)-distribution for \(\mu_D\), the mean difference in the responses of two dependent populations

10.1 - Z-Test: When Population Variance is Known

Let's start by acknowledging that it is completely unrealistic to think that we'd find ourselves in the situation of knowing the population variance, but not the population mean. Therefore, the hypothesis testing method that we learn on this page has limited practical use. We study it only because we'll use it later to learn about the "power" of a hypothesis test (by learning how to calculate Type II error rates). As usual, let's start with an example.

Example 10-1

Boys of a certain age are known to have a mean weight of \(\mu=85\) pounds. A complaint is made that the boys living in a municipal children's home are underfed. As one bit of evidence, \(n=25\) boys (of the same age) are weighed and found to have a mean weight of \(\bar{x}\) = 80.94 pounds. It is known that the population standard deviation \(\sigma\) is 11.6 pounds (the unrealistic part of this example!). Based on the available data, what should be concluded concerning the complaint?

The null hypothesis is \(H_0:\mu=85\), and the alternative hypothesis is \(H_A:\mu<85\). In general, we know that if the weights are normally distributed, then:

\(Z=\dfrac{\bar{X}-\mu}{\sigma/\sqrt{n}}\)

follows the standard normal \(N(0,1)\) distribution. It is actually a bit irrelevant here whether or not the weights are normally distributed, because the sample size \(n=25\) is large enough for the Central Limit Theorem to apply. In that case, we know that \(Z\), as defined above, follows at least approximately the standard normal distribution. At any rate, it seems reasonable to use the test statistic:

\(Z=\dfrac{\bar{X}-\mu_0}{\sigma/\sqrt{n}}\)

for testing the null hypothesis

\(H_0:\mu=\mu_0\)

against any of the possible alternative hypotheses \(H_A:\mu \neq \mu_0\), \(H_A:\mu<\mu_0\), and \(H_A:\mu>\mu_0\).

For the example in hand, the value of the test statistic is:

\(Z=\dfrac{80.94-85}{11.6/\sqrt{25}}=-1.75\)

The critical region approach tells us to reject the null hypothesis at the \(\alpha=0.05\) level if \(Z<-1.645\). Therefore, we reject the null hypothesis because \(Z=-1.75<-1.645\), and so the test statistic falls in the rejection region:

As always, we draw the same conclusion by using the \(p\)-value approach. Recall that the \(p\)-value approach tells us to reject the null hypothesis at the \(\alpha=0.05\) level if the \(p\)-value \(\le \alpha=0.05\). In this case, the \(p\)-value is \(P(Z<-1.75)=0.0401\):

As expected, we reject the null hypothesis because the \(p\)-value \(=0.0401<\alpha=0.05\).
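Here is a minimal Python sketch of Example 10-1's Z-test, using only the standard library:

```python
from math import sqrt
from statistics import NormalDist

# Example 10-1: n = 25 boys, x-bar = 80.94, known sigma = 11.6
# H0: mu = 85 versus HA: mu < 85
xbar, mu0, sigma, n, alpha = 80.94, 85, 11.6, 25, 0.05

z = (xbar - mu0) / (sigma / sqrt(n))  # test statistic
z_crit = NormalDist().inv_cdf(alpha)  # left-tail critical value, ≈ -1.645
p_value = NormalDist().cdf(z)         # P(Z < z)

print(round(z, 2))        # -1.75
print(z < z_crit)         # True: reject H0
print(round(p_value, 4))  # 0.0401
```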

By the way, we'll learn how to ask Minitab to conduct the \(Z\)-test for a mean \(\mu\) in a bit, but this is what the Minitab output for this example looks like:

Test of mu = 85 vs  < 85
The assumed standard deviation = 11.6
N Mean SE Mean 95% Upper Bound Z P
25 80.9400 2.3200 84.7561 -1.75 0.040

10.2 - T-Test: When Population Variance is Unknown

Now that, for purely pedagogical reasons, we have the unrealistic situation (of a known population variance) behind us, let's turn our attention to the realistic situation in which both the population mean and population variance are unknown.

Example 10-2

waikiki

It is assumed that the mean systolic blood pressure is \(\mu\) = 120 mm Hg. In the Honolulu Heart Study, a sample of \(n=100\) people had an average systolic blood pressure of 130.1 mm Hg with a standard deviation of 21.21 mm Hg. Is the group significantly different (with respect to systolic blood pressure!) from the regular population?

The null hypothesis is \(H_0:\mu=120\), and because there is no specific direction implied, the alternative hypothesis is \(H_A:\mu\ne 120\). In general, we know that if the data are normally distributed, then:

\(T=\dfrac{\bar{X}-\mu}{S/\sqrt{n}}\)

follows a \(t\)-distribution with \(n-1\) degrees of freedom. Therefore, it seems reasonable to use the test statistic:

\(T=\dfrac{\bar{X}-\mu_0}{S/\sqrt{n}}\)

for testing the null hypothesis \(H_0:\mu=\mu_0\) against any of the possible alternative hypotheses \(H_A:\mu \neq \mu_0\), \(H_A:\mu<\mu_0\), and \(H_A:\mu>\mu_0\). For the example in hand, the value of the test statistic is:

\(t=\dfrac{130.1-120}{21.21/\sqrt{100}}=4.762\)

The critical region approach tells us to reject the null hypothesis at the \(\alpha=0.05\) level if \(t\ge t_{0.025, 99}=1.9842\) or if \(t\le -t_{0.025, 99}=-1.9842\). Therefore, we reject the null hypothesis because \(t=4.762>1.9842\), and so the test statistic falls in the rejection region:

Again, as always, we draw the same conclusion by using the \(p\)-value approach. The \(p\)-value approach tells us to reject the null hypothesis at the \(\alpha=0.05\) level if the \(p\)-value \(\le \alpha=0.05\). In this case, the \(p\)-value is \(2 \times P(T_{99}>4.762)<2\times P(T_{99}>1.9842)=2(0.025)=0.05\):

As expected, we reject the null hypothesis because the \(p\)-value \(<\alpha=0.05\).

Again, we'll learn how to ask Minitab to conduct the t -test for a mean \(\mu\) in a bit, but this is what the Minitab output for this example looks like:

Test of mu = 120 vs not = 120
N Mean StDev SE Mean 95% CI T P
100 130.100 21.210 2.121 (125.891, 134.309) 4.76 0.000

By the way, the decision to reject the null hypothesis is consistent with the one you would make using a 95% confidence interval. Using the data, a 95% confidence interval for the mean \(\mu\) is:

\(\bar{x}\pm t_{0.025,99}\left(\dfrac{s}{\sqrt{n}}\right)=130.1 \pm 1.9842\left(\dfrac{21.21}{\sqrt{100}}\right)\)

which simplifies to \(130.1\pm 4.21\). That is, we can be 95% confident that the mean systolic blood pressure of the Honolulu population is between 125.89 and 134.31 mm Hg. How can a population living in a climate with consistently sunny 80 degree days have elevated blood pressure?!

Anyway, the critical region approach for the \(\alpha=0.05\) hypothesis test tells us to reject the null hypothesis that \(\mu=120\):

if \(t=\dfrac{\bar{x}-\mu_0}{s/\sqrt{n}}\geq 1.9842\) or if \(t=\dfrac{\bar{x}-\mu_0}{s/\sqrt{n}}\leq -1.9842\)

which is equivalent to rejecting:

if \(\bar{x}-\mu_0 \geq 1.9842\left(\dfrac{s}{\sqrt{n}}\right)\) or if \(\bar{x}-\mu_0 \leq -1.9842\left(\dfrac{s}{\sqrt{n}}\right)\)

if \(\mu_0 \leq \bar{x}-1.9842\left(\dfrac{s}{\sqrt{n}}\right)\) or if \(\mu_0 \geq \bar{x}+1.9842\left(\dfrac{s}{\sqrt{n}}\right)\)

which, upon inserting the data for this particular example, is equivalent to rejecting:

if \(\mu_0 \leq 125.89\) or if \(\mu_0 \geq 134.31\)

which just happen to be (!) the endpoints of the 95% confidence interval for the mean. Indeed, the results are consistent!
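A short Python sketch can verify both the confidence interval and the test/interval equivalence for Example 10-2. Since the standard library has no t-distribution, the critical value \(t_{0.025,99}=1.9842\) is taken from the text:

```python
from math import sqrt

# Example 10-2: x-bar = 130.1, s = 21.21, n = 100; H0: mu = 120 vs HA: mu != 120
xbar, s, n, mu0 = 130.1, 21.21, 100, 120
t_crit = 1.9842   # t_{0.025,99}, from the text's t-table

margin = t_crit * s / sqrt(n)
lo, hi = xbar - margin, xbar + margin
print(round(lo, 2), round(hi, 2))   # 125.89 134.31

# The alpha = 0.05 test rejects H0 exactly when mu0 lies outside the 95% CI
t = (xbar - mu0) / (s / sqrt(n))
print(abs(t) >= t_crit, not (lo < mu0 < hi))   # both True: reject, and mu0 is outside the CI
```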

10.3 - Paired T-Test

In the next lesson, we'll learn how to compare the means of two independent populations, but there may be occasions in which we are interested in comparing the means of two dependent populations. For example, suppose a researcher is interested in determining whether the mean IQ of the population of first-born twins differs from the mean IQ of the population of second-born twins. She identifies a random sample of \(n\) pairs of twins, and measures \(X\), the IQ of the first-born twin, and \(Y\), the IQ of the second-born twin. In that case, she's interested in determining whether:

\(\mu_X=\mu_Y\)

or equivalently if:

\(\mu_X-\mu_Y=0\)

Now, the population of first-born twins is not independent of the population of second-born twins. Since all of our distributional theory requires the independence of measurements, we're rather stuck. There's a way out though... we can "remove" the dependence between \(X\) and \(Y\) by subtracting the two measurements \(X_i\) and \(Y_i\) for each pair of twins \(i\), that is, by considering the independent measurements

\(D_i=X_i-Y_i\)

Then, our null hypothesis involves just a single mean, which we'll denote \(\mu_D\), the mean of the differences:

\(H_0:\mu_D=\mu_X-\mu_Y=0\)

and then our hard work is done! We can just use the \(t\)-test for a mean for conducting the hypothesis test... it's just that, in this situation, our measurements are differences \(d_i\) whose mean is \(\bar{d}\) and standard deviation is \(s_D\). That is, when testing the null hypothesis \(H_0:\mu_D=\mu_0\) against any of the alternative hypotheses \(H_A:\mu_D \neq \mu_0\), \(H_A:\mu_D<\mu_0\), and \(H_A:\mu_D>\mu_0\), we compare the test statistic:

\(t=\dfrac{\bar{d}-\mu_0}{s_D/\sqrt{n}}\)

to a \(t\)-distribution with \(n-1\) degrees of freedom. Let's take a look at an example!

Example 10-3

Blood samples from \(n=10\) people were sent to each of two laboratories (Lab 1 and Lab 2) for cholesterol determinations. The resulting data are summarized here:

Subject   Lab 1   Lab 2   Diff
1         296     318     -22
2         268     287     -19
...       ...     ...     ...
10        262     285     -23

\(\bar{x}_{1}=260.6\), \(\bar{x}_{2}=275\), \(\bar{d}=-14.4\), \(s_{d}=6.77\)

Is there a statistically significant difference at the \(\alpha=0.01\) level, say, in the (population) mean cholesterol levels reported by Lab 1 and Lab 2?

The null hypothesis is \(H_0:\mu_D=0\), and the alternative hypothesis is \(H_A:\mu_D\ne 0\). The value of the test statistic is:

\(t=\dfrac{-14.4-0}{6.77/\sqrt{10}}=-6.73\)

The critical region approach tells us to reject the null hypothesis at the \(\alpha=0.01\) level if \(t>t_{0.005, 9}=3.25\) or if \(t<-t_{0.005, 9}=-3.25\). Therefore, we reject the null hypothesis because \(t=-6.73<-3.25\), and so the test statistic falls in the rejection region.

Again, we draw the same conclusion when using the \(p\)-value approach. In this case, the \(p\)-value is:

\(p\text{-value}=2\times P(T_9<-6.73)< 2\times 0.005=0.01\)

As expected, we reject the null hypothesis because the \(p\)-value \(<0.01=\alpha\).
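Finally, a minimal Python sketch of Example 10-3's paired t statistic (the critical value \(t_{0.005,9}=3.25\) is taken from the text, since the standard library has no t-distribution):

```python
from math import sqrt

# Example 10-3 summary data: n = 10 paired differences, d-bar = -14.4, s_d = 6.77
dbar, s_d, n = -14.4, 6.77, 10
t_crit = 3.25   # t_{0.005,9}, from the text's t-table

t = (dbar - 0) / (s_d / sqrt(n))   # paired t statistic for H0: mu_D = 0
print(round(t, 2))                 # -6.73
print(abs(t) > t_crit)             # True: reject H0 at alpha = 0.01
```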

And, the Minitab output for this example looks like this:

Test of mu = 0 vs  not = 0
N Mean StDev SE Mean 95% CI T P
10 -14.4000 6.7700 2.1409 (-19.2430,  -9.5570) -6.73 0.000

10.4 - Using Minitab

Z-test for a single mean.

To illustrate how to tell Minitab to perform a Z -test for a single mean, let's refer to the boys' weight example that appeared on the page called The Z -test: When Population Variance is Known.

Under the Stat menu, select Basic Statistics , and then 1-Sample Z... :

In the pop-up window that appears, click on the radio button labeled Summarized data . In the box labeled Sample size , type in the sample size n , and in the box labeled Mean , type in the sample mean. In the box labeled Standard deviation , type in the value of the known (or rather assumed!) population standard deviation. Click on the box labeled Perform hypothesis test , and in the box labeled Hypothesized mean , type in the value of the mean assumed in the null hypothesis:

Click on the button labeled Options... In the pop-up window that appears, for the box labeled Alternative , select either less than , greater than , or not equal depending on the direction of the alternative hypothesis:

Test of mu = 85 vs  < 85
The assumed standard deviation = 11.6
N Mean SE Mean 95% Upper Bound Z P
25 80.94 2.32 84.76 -1.75 0.040

T-test for a Single Mean

To illustrate how to tell Minitab to perform a t -test for a single mean, let's refer to the systolic blood pressure example that appeared on the page called The T -test: When Population Variance is Unknown.

Under the Stat menu, select Basic Statistics , and then 1-Sample t... :

In the pop-up window that appears, click on the radio button labeled Summarized data . In the box labeled Sample size , type in the sample size n; in the box labeled Mean , type in the sample mean; and in the box labeled Standard deviation , type in the sample standard deviation. Click on the box labeled Perform hypothesis test , and in the box labeled Hypothesized mean , type in the value of the mean assumed in the null hypothesis:

Test of mu = 120 vs  not = 120
N Mean StDev SE Mean 95% CI T P
100 130.10 21.21 2.12 (125.89,  134.31) 4.76 0.000
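The same cross-check works for the t-test, again using only the summarized data (a Python/scipy sketch, not part of the course materials):

```python
# One-sample t-test from the blood-pressure summary statistics:
# n = 100, x-bar = 130.10, s = 21.21, testing H0: mu = 120
# against HA: mu != 120.
from math import sqrt
from scipy.stats import t as tdist

n, xbar, s, mu0 = 100, 130.10, 21.21, 120

se = s / sqrt(n)                      # estimated standard error = 2.12
tstat = (xbar - mu0) / se             # t statistic with n - 1 = 99 df
p = 2 * tdist.sf(abs(tstat), n - 1)   # two-sided p-value
```

This reproduces Minitab's T = 4.76, with a two-sided p-value well below 0.001 (reported as 0.000).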

Note that a paired t -test can be performed in the same way. The summarized sample data would simply be the summarized differences. The extra step of calculating the differences would be required, however, if your data are the raw measurements from the two dependent samples. That is, if you have two columns containing, say, Before and After measurements for which you want to analyze Diff, their differences, you can use Minitab's calculator (under the Calc menu, select Calculator ) to calculate the differences:

Upon clicking OK , the differences (Diff) should appear in your worksheet:

When performing the t -test, you'll then need to tell Minitab (in the Samples in columns box) that the differences are contained in the Diff column:

Here's what the paired t -test output would look like for this example:

One Sample T: Diff

Variable N Mean StDev SE Mean 95% CI T P
Diff 7 2.000 1.414 0.535 (0.692,  3.308) 3.74 0.010
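Because the paired t-test is just a one-sample t-test on the differences, it is easy to verify this output from the summarized differences alone (a Python/scipy sketch, offered only as a cross-check):

```python
# Paired t-test from the summarized differences:
# n = 7 differences with mean 2.000 and standard deviation 1.414,
# testing H0: mu_diff = 0 against HA: mu_diff != 0.
from math import sqrt
from scipy.stats import t as tdist

n, dbar, sd = 7, 2.000, 1.414

tstat = dbar / (sd / sqrt(n))         # t statistic with n - 1 = 6 df
p = 2 * tdist.sf(abs(tstat), n - 1)   # two-sided p-value
```

This matches the Minitab output above: T = 3.74 and P = 0.010.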

Lesson 11: Tests of the Equality of Two Means

In this lesson, we'll continue our investigation of hypothesis testing. In this case, we'll focus our attention on a hypothesis test for the difference in two population means \(\mu_1-\mu_2\) for two situations:

  • a hypothesis test based on the \(t\)-distribution, known as the pooled two-sample \(t\)-test , for \(\mu_1-\mu_2\) when the (unknown) population variances \(\sigma^2_X\) and \(\sigma^2_Y\) are equal
  • a hypothesis test based on the \(t\)-distribution, known as Welch's \(t\)-test , for \(\mu_1-\mu_2\) when the (unknown) population variances \(\sigma^2_X\) and \(\sigma^2_Y\) are not equal

Of course, because population variances are generally not known, there is no way of being 100% sure that the population variances are equal or not equal. In order to be able to determine, therefore, which of the two hypothesis tests we should use, we'll need to make some assumptions about the equality of the variances based on our previous knowledge of the populations we're studying.

11.1 - When Population Variances Are Equal

Let's start with the good news, namely that we've already done the dirty theoretical work in developing a hypothesis test for the difference in two population means \(\mu_1-\mu_2\) when we developed a \((1-\alpha)100\%\) confidence interval for the difference in two population means. Recall that if you have two independent samples from two normal distributions with equal variances \(\sigma^2_X=\sigma^2_Y=\sigma^2\), then:

\(T=\dfrac{(\bar{X}-\bar{Y})-(\mu_X-\mu_Y)}{S_p\sqrt{\dfrac{1}{n}+\dfrac{1}{m}}}\)

follows a \(t_{n+m-2}\) distribution where \(S^2_p\), the pooled sample variance:

\(S_p^2=\dfrac{(n-1)S^2_X+(m-1)S^2_Y}{n+m-2}\)

is an unbiased estimator of the common variance \(\sigma^2\). Therefore, if we're interested in testing the null hypothesis:

\(H_0:\mu_X-\mu_Y=0\) (or equivalently \(H_0:\mu_X=\mu_Y\))

against any of the alternative hypotheses:

\(H_A:\mu_X-\mu_Y \neq 0,\quad H_A:\mu_X-\mu_Y < 0,\text{ or }H_A:\mu_X-\mu_Y > 0\)

we can use the test statistic:

\(t=\dfrac{\bar{x}-\bar{y}}{s_p\sqrt{\dfrac{1}{n}+\dfrac{1}{m}}}\)

and follow the standard hypothesis testing procedures. Let's take a look at an example.

Example 11-1

car driving fast

A psychologist was interested in exploring whether or not male and female college students have different driving behaviors. There were several ways that she could quantify driving behaviors. She opted to focus on the fastest speed ever driven by an individual. Therefore, the particular statistical question she framed was as follows:

Is the mean fastest speed driven by male college students different than the mean fastest speed driven by female college students?

She conducted a survey of a random \(n=34\) male college students and a random \(m=29\) female college students. Here is a descriptive summary of the results of her survey:

Males Females

\(n = 34\)
\(\bar{x} = 105.5\)
\(s_x = 20.1\)

\(m = 29\)
\(\bar{y} = 90.9\)
\(s_y = 12.2\)

and here is a graphical summary of the data in the form of a dotplot:

Is there sufficient evidence at the \(\alpha=0.05\) level to conclude that the mean fastest speed driven by male college students differs from the mean fastest speed driven by female college students?

Because the observed standard deviations of the two samples are of similar magnitude, we'll assume that the population variances are equal. Let's also assume that the two populations of fastest speed driven for males and females are normally distributed. (We can confirm, or deny, such an assumption using a normal probability plot, but let's simplify our analysis for now.) The randomness of the two samples allows us to assume independence of the measurements as well.

Okay, assumptions all met, we can test the null hypothesis:

\(H_0:\mu_M-\mu_F=0\)

against the alternative hypothesis:

\(H_A:\mu_M-\mu_F \neq 0\)

using the test statistic:

\(t=\dfrac{(105.5-90.9)-0}{16.9 \sqrt{\dfrac{1}{34}+\dfrac{1}{29}}}=3.42\)

because, among other things, the pooled sample standard deviation is:

\(s_p=\sqrt{\dfrac{33(20.1^2)+28(12.2^2)}{61}}=16.9\)

The critical value approach tells us to reject the null hypothesis in favor of the alternative hypothesis if:

\(|t|\geq t_{\alpha/2,n+m-2}=t_{0.025,61}=1.9996\)

We reject the null hypothesis because the test statistic (\(t=3.42\)) falls in the rejection region:

There is sufficient evidence at the \(\alpha=0.05\) level to conclude that the average fastest speed driven by the population of male college students differs from the average fastest speed driven by the population of female college students.

Not surprisingly, the decision is the same using the \(p\)-value approach. The \(p\)-value is 0.0012:

\(P=2\times P(T_{61}>3.42)=2(0.0006)=0.0012\)

Therefore, because \(p=0.0012\le \alpha=0.05\), we reject the null hypothesis in favor of the alternative hypothesis. Again, we conclude that there is sufficient evidence at the \(\alpha=0.05\) level to conclude that the average fastest speed driven by the population of male college students differs from the average fastest speed driven by the population of female college students.
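The pooled test can also be reproduced directly from the summary statistics with scipy's `ttest_ind_from_stats` (a cross-check in Python, not the course's Minitab workflow):

```python
# Pooled two-sample t-test from summary statistics:
# males:   n = 34, x-bar = 105.5, s = 20.1
# females: m = 29, y-bar =  90.9, s = 12.2
from scipy.stats import ttest_ind_from_stats

tstat, p = ttest_ind_from_stats(105.5, 20.1, 34,
                                90.9, 12.2, 29,
                                equal_var=True)  # pooled variance

# Using the rounded summary statistics gives t = 3.41; Minitab's
# reported 3.42 comes from the unrounded raw data.
```

Either way, the two-sided p-value is about 0.001, so the conclusion is unchanged.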

By the way, we'll see how to tell Minitab to conduct a two-sample t -test in a bit here, but in the meantime, this is what the output would look like:

Two-Sample T:   For Fastest

Gender N Mean StDev SE Mean
1 34 105.5 20.1 3.4
2 29 90.9 12.2 2.3

Difference = mu (1) - mu (2)
Estimate for difference: 14.6085
95% CI for difference: (6.0630, 23.1540)
T-Test of difference = 0 (vs not =): T-Value = 3.42  P-Value = 0.001  DF = 61
Both use Pooled StDev = 16.9066

11.2 - When Population Variances Are Not Equal

Let's again start with the good news that we've already done the dirty theoretical work here. Recall that if you have two independent samples from two normal distributions with unequal variances \(\sigma^2_X \neq \sigma^2_Y\), then:

\(T=\dfrac{(\bar{X}-\bar{Y})-(\mu_X-\mu_Y)}{\sqrt{\dfrac{S^2_X}{n}+\dfrac{S^2_Y}{m}}}\)

follows, at least approximately, a \(t_r\) distribution where \(r\), the adjusted degrees of freedom is determined by the equation:

\(r=\dfrac{\left(\dfrac{s^2_X}{n}+\dfrac{s^2_Y}{m}\right)^2}{\dfrac{(s^2_X/n)^2}{n-1}+\dfrac{(s^2_Y/m)^2}{m-1}}\)

If r doesn't equal an integer, as it usually doesn't, then we take the integer portion of \(r\). That is, we use \(\lfloor r\rfloor\) if necessary.

With that now being recalled, if we're interested in testing the null hypothesis:

\(H_0:\mu_X-\mu_Y=0\) (or equivalently \(H_0:\mu_X=\mu_Y\))

against any of the alternative hypotheses:

\(H_A:\mu_X-\mu_Y \neq 0,\quad H_A:\mu_X-\mu_Y < 0,\text{ or }H_A:\mu_X-\mu_Y > 0\)

we can use the test statistic:

\(t=\dfrac{\bar{x}-\bar{y}}{\sqrt{\dfrac{s^2_X}{n}+\dfrac{s^2_Y}{m}}}\)

and follow the standard hypothesis testing procedures. Let's return to our fastest speed driven example.

Example 11-1 (Continued)

car driving fast around a corner

A psychologist was interested in exploring whether or not male and female college students have different driving behaviors. There were a number of ways that she could quantify driving behaviors. She opted to focus on the fastest speed ever driven by an individual. Therefore, the particular statistical question she framed was as follows:

Is the mean fastest speed driven by male college students different than the mean fastest speed driven by female college students?

This time let's not assume that the population variances are equal. Then, we'll see if we arrive at a different conclusion. Let's still assume though that the two populations of fastest speed driven for males and females are normally distributed. And, we'll again permit the randomness of the two samples to allow us to assume independence of the measurements as well.

That said, then we can test the null hypothesis:

\(H_0:\mu_M-\mu_F=0\)

against the alternative hypothesis:

\(H_A:\mu_M-\mu_F \neq 0\)

by comparing the test statistic:

\(t=\dfrac{(105.5-90.9)-0}{\sqrt{\dfrac{20.1^2}{34}+\dfrac{12.2^2}{29}}}=3.54\)

to a \(T\) distribution with \(r\) degrees of freedom, where:

\(r=\dfrac{\left(\dfrac{12.2^2}{29}+\dfrac{20.1^2}{34} \right)^2}{\left( \dfrac{1}{28}\right)\left(\dfrac{12.2^2}{29} \right)^2+\left(\dfrac{1}{33}\right)\left(\dfrac{20.1^2}{34} \right)^2}=55.5\)

Oops... that's not an integer, so we're going to need to take the greatest integer portion of that \(r\). That is, we take the degrees of freedom to be \(\lfloor r\rfloor = \lfloor 55.5\rfloor=55\).
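The statistic and the approximate degrees of freedom are easy to verify from the summary statistics (a Python/scipy sketch, offered only as a cross-check of the hand calculation):

```python
# Welch's t statistic and approximate degrees of freedom for the
# fastest-speed example:
# males:   n = 34, x-bar = 105.5, s_x = 20.1
# females: m = 29, y-bar =  90.9, s_y = 12.2
from math import floor, sqrt
from scipy.stats import t as tdist

n, xbar, sx = 34, 105.5, 20.1
m, ybar, sy = 29, 90.9, 12.2

vx, vy = sx**2 / n, sy**2 / m
tstat = (xbar - ybar) / sqrt(vx + vy)                   # t = 3.54
r = (vx + vy)**2 / (vx**2 / (n - 1) + vy**2 / (m - 1))  # r = 55.5
df = floor(r)                                           # use 55 df
p = 2 * tdist.sf(abs(tstat), df)                        # two-sided p-value
```

This reproduces t = 3.54 and \(\lfloor r\rfloor = 55\), as calculated above.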

Then, the critical value approach tells us to reject the null hypothesis in favor of the alternative hypothesis if:

\(|t|\geq t_{0.025,55}=2.004\)

We reject the null hypothesis because the test statistic (\(t=3.54\)) falls in the rejection region:

There is (again!) sufficient evidence at the \(\alpha=0.05\) level to conclude that the average fastest speed driven by the population of male college students differs from the average fastest speed driven by the population of female college students.

And again, the decision is the same using the \(p\)-value approach. The \(p\)-value is 0.0008:

\(P=2\times P(T_{55}>3.54)=2(0.0004)=0.0008\)

Therefore, because \(p=0.0008\le \alpha=0.05\), we reject the null hypothesis in favor of the alternative hypothesis. Again, we conclude that there is sufficient evidence at the \(\alpha=0.05\) level to conclude that the average fastest speed driven by the population of male college students differs from the average fastest speed driven by the population of female college students.

At any rate, we see that in this case, our conclusion is the same regardless of whether or not we assume equality of the population variances.

And, just in case you're interested... we'll see how to tell Minitab to conduct a Welch's \(t\)-test very soon, but in the meantime, this is what the output would look like for this example:

Difference = mu (1) - mu (2)
Estimate for difference: 14.6085
95% CI for difference: (6.3575, 22.8596)
T-Test of difference = 0 (vs not =): T-Value = 3.55  P-Value = 0.001  DF = 55

11.3 - Using Minitab

Just as is the case for asking Minitab to calculate pooled t -intervals and Welch's t -intervals for \(\mu_1-\mu_2\), the commands necessary for asking Minitab to perform a two-sample t -test or a Welch's t -test depend on whether the data are entered in two columns, or the data are entered in one column with a grouping variable in a second column.

Let's recall the spider and prey example, in which the feeding habits of two species of net-casting spiders were studied. The two species, deinopis and menneus , coexist in eastern Australia. The following data were obtained on the size, in millimeters, of the prey of random samples of the two species:

Size of Random Prey Samples of the Deinopis Spider in Millimeters
sample 1 sample 2 sample 3 sample 4 sample 5 sample 6 sample 7 sample 8 sample 9 sample 10
12.9 10.2 7.4 7.0 10.5 11.9 7.1 9.9 14.4 11.3
Size of Random Prey Samples of the Menneus Spider in Millimeters
sample 1 sample 2 sample 3 sample 4 sample 5 sample 6 sample 7 sample 8 sample 9 sample 10
10.2 6.9 10.9 11.0 10.1 5.3 7.5 10.3 9.2 8.8

Let's use the data and Minitab to test whether the mean prey size of the populations of the two types of spiders differs.

When the Data are Entered in Two Columns

Enter the data in two columns, such as:

Under the Stat menu, select Basic Statistics , and then select 2-Sample t... :

In the pop-up window that appears, select Samples in different columns . Specify the name of the First variable, and specify the name of the Second variable. For the two-sample (pooled) t -test, click on the box labeled Assume equal variances . (For Welch's t -test, leave the box labeled Assume equal variances unchecked.):

Two-Sample T:   For Deinopis vs Menneus
Variable N Mean StDev SE Mean
Deinopis 10 10.26 2.51 0.79
Menneus 10 9.02 1.90 0.60

Difference = mu (Deinopis) - mu (Menneus)
Estimate for difference: 1.240
95% CI for difference: (-0.852, 3.332)
T-Test of difference = 0 (vs not =): T-Value = 1.25  P-Value = 0.229  DF = 18
Both use Pooled StDev = 2.2266

When the Data are Entered in One Column, and a Grouping Variable in a Second Column

Enter the data in one column (called Prey , say), and the grouping variable in a second column (called Group , say, with 1 denoting a deinopis spider and 2 denoting a menneus spider), such as:

In the pop-up window that appears, select Samples in one column . Specify the name of the Samples variable (Prey, for us) and specify the name of the Subscripts (grouping) variable (Group, for us). For the two-sample (pooled) t -test, click on the box labeled Assume equal variances . (For Welch's t -test, leave the box labeled Assume equal variances unchecked.):

Two-Sample T:   For Prey

Group N Mean StDev SE Mean
1 10 10.26 2.51 0.79
2 10 9.02 1.90 0.60

Difference = mu (1) - mu (2)
Estimate for difference: 1.240
95% CI for difference: (-0.852, 3.332)
T-Test of difference = 0 (vs not =): T-Value = 1.25  P-Value = 0.229  DF = 18
Both use Pooled StDev = 2.2266
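Since the raw prey sizes are given above, the pooled test is also easy to reproduce in Python with `scipy.stats.ttest_ind` (a cross-check, not part of the Minitab workflow; `equal_var=True` gives the pooled test):

```python
# Pooled two-sample t-test on the raw spider prey-size data.
from scipy.stats import ttest_ind

deinopis = [12.9, 10.2, 7.4, 7.0, 10.5, 11.9, 7.1, 9.9, 14.4, 11.3]
menneus  = [10.2, 6.9, 10.9, 11.0, 10.1, 5.3, 7.5, 10.3, 9.2, 8.8]

# equal_var=True pools the two sample variances (df = 10 + 10 - 2 = 18)
tstat, p = ttest_ind(deinopis, menneus, equal_var=True)
```

This matches Minitab's output: T = 1.25 and P = 0.229 on 18 degrees of freedom.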

Lesson 12: Tests for Variances

Continuing our development of hypothesis tests for various population parameters, in this lesson, we'll focus on hypothesis tests for population variances . Specifically, we'll develop:

  • a hypothesis test for testing whether a single population variance \(\sigma^2\) equals a particular value
  • a hypothesis test for testing whether two population variances are equal

12.1 - One Variance

Yeehah again! The theoretical work for developing a hypothesis test for a population variance \(\sigma^2\) is already behind us. Recall that if you have a random sample of size n from a normal population with (unknown) mean \(\mu\) and variance \(\sigma^2\), then:

\(\chi^2=\dfrac{(n-1)S^2}{\sigma^2}\)

follows a chi-square distribution with n −1 degrees of freedom. Therefore, if we're interested in testing the null hypothesis:

\(H_0 \colon \sigma^2=\sigma^2_0\)

against any of the alternative hypotheses:

\(H_A \colon\sigma^2 \neq \sigma^2_0,\quad H_A \colon\sigma^2<\sigma^2_0,\text{ or }H_A \colon\sigma^2>\sigma^2_0\)

we can use the test statistic:

\(\chi^2=\dfrac{(n-1)S^2}{\sigma^2_0}\)

and follow the standard hypothesis testing procedures.

Example 12-1

construction worker wearing a hardhat

A manufacturer of hard safety hats for construction workers is concerned about the mean and the variation of the forces its helmets transmit to wearers when subjected to an external force. The manufacturer has designed the helmets so that the mean force transmitted by the helmets to the workers is 800 pounds (or less) with a standard deviation of less than 40 pounds. Tests were run on a random sample of n = 40 helmets, and the sample mean and sample standard deviation were found to be 825 pounds and 48.5 pounds, respectively.

Do the data provide sufficient evidence, at the \(\alpha = 0.05\) level, to conclude that the population standard deviation exceeds 40 pounds?

We're interested in testing the null hypothesis:

\(H_0 \colon \sigma^2=40^2=1600\)

against the alternative hypothesis:

\(H_A \colon\sigma^2>1600\)

Therefore, the value of the test statistic is:

\(\chi^2=\dfrac{(40-1)48.5^2}{40^2}=57.336\)

Is the test statistic too large for the null hypothesis to be true? Well, the critical value approach would have us finding the threshold value such that the probability of rejecting the null hypothesis if it were true, that is, of committing a Type I error, is small... 0.05, in this case. Using Minitab (or a chi-square probability table), we see that the cutoff value is 54.572:

That is, we reject the null hypothesis in favor of the alternative hypothesis if the test statistic \(\chi^2\) is greater than 54.572. It is. That is, the test statistic falls in the rejection region:

Therefore, we conclude that there is sufficient evidence, at the 0.05 level, to conclude that the population standard deviation exceeds 40.

Of course, the P -value approach yields the same conclusion. In this case, the P -value is the probability that we would observe a chi-square(39) random variable more extreme than 57.336:

As the drawing illustrates, the P -value is 0.029 (as determined using the chi-square probability calculator in Minitab). Because \(P = 0.029 ≤ 0.05\), we reject the null hypothesis in favor of the alternative hypothesis.
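Here is the same calculation done in Python with scipy's chi-square distribution (offered only as a cross-check of Minitab's numbers):

```python
# One-variance chi-square test for the hard-hat example:
# n = 40, s = 48.5, testing H0: sigma^2 = 1600 against HA: sigma^2 > 1600.
from scipy.stats import chi2

n, s, sigma0_sq = 40, 48.5, 40**2

stat = (n - 1) * s**2 / sigma0_sq   # chi-square statistic, df = 39
p = chi2.sf(stat, n - 1)            # upper-tail p-value
```

This reproduces the test statistic 57.336 and a p-value of about 0.029.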

Do the data provide sufficient evidence, at the \(\alpha = 0.05\) level, to conclude that the population standard deviation differs from 40 pounds?

In this case, we're interested in testing the null hypothesis:

\(H_0 \colon \sigma^2=1600\)

against the alternative hypothesis:

\(H_A \colon\sigma^2 \neq 1600\)

The value of the test statistic remains the same. It is again:

\(\chi^2=\dfrac{(40-1)48.5^2}{40^2}=57.336\)

Now, is the test statistic either too large or too small for the null hypothesis to be true? Well, the critical value approach would have us dividing the significance level \(\alpha = 0.05\) into 2, to get 0.025, and putting one of the halves in the left tail, and the other half in the other tail. Doing so (and using Minitab to get the cutoff values), we get that the lower cutoff value is 23.654 and the upper cutoff value is 58.120:

That is, we reject the null hypothesis in favor of the two-sided alternative hypothesis if the test statistic \(\chi^2\) is either smaller than 23.654 or greater than 58.120. It is not. That is, the test statistic does not fall in the rejection region:

Therefore, we fail to reject the null hypothesis. There is insufficient evidence, at the 0.05 level, to conclude that the population standard deviation differs from 40.

Of course, the P -value approach again yields the same conclusion. In this case, we simply double the P -value we obtained for the one-tailed test yielding a P -value of 0.058:

\(P=2\times P\left(\chi^2_{39}>57.336\right)=2\times 0.029=0.058\)

Because \(P = 0.058 > 0.05\), we fail to reject the null hypothesis in favor of the two-sided alternative hypothesis.
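The two-sided cutoffs and doubled p-value can likewise be verified with scipy's chi-square distribution (a Python cross-check of the Minitab values quoted above):

```python
# Two-sided one-variance test, chi-square distribution with 39 df.
from scipy.stats import chi2

df, stat = 39, 57.336

lo = chi2.ppf(0.025, df)   # lower cutoff, about 23.654
hi = chi2.ppf(0.975, df)   # upper cutoff, about 58.120
p = 2 * chi2.sf(stat, df)  # two-sided p-value, about 0.058

# Since 23.654 < 57.336 < 58.120, we fail to reject H0 at the 0.05 level.
```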

The above example illustrates an important fact, namely, that the conclusion for the one-sided test does not always agree with the conclusion for the two-sided test. If you have reason to believe that the parameter will differ from the null value in a particular direction, then you should conduct the one-sided test.

12.2 - Two Variances

Let's now recall the theory necessary for developing a hypothesis test for testing the equality of two population variances. Suppose \(X_1 , X_2 , \dots, X_n\) is a random sample of size n from a normal population with mean \(\mu_X\) and variance \(\sigma^2_X\). And, suppose, independent of the first sample, \(Y_1 , Y_2 , \dots, Y_m\) is another random sample of size m from a normal population with \(\mu_Y\) and variance \(\sigma^2_Y\). Recall then, in this situation, that:

\(\dfrac{(n-1)S^2_X}{\sigma^2_X} \text{ and } \dfrac{(m-1)S^2_Y}{\sigma^2_Y}\)

have independent chi-square distributions with n −1 and m −1 degrees of freedom, respectively. Therefore:

\( {\displaystyle F=\frac{\left[\frac{\color{red}\cancel {\color{black}(n-1)} \color{black}S_{X}^{2}}{\sigma_{x}^{2}} /\color{red}\cancel {\color{black}(n- 1)}\color{black}\right]}{\left[\frac{\color{red}\cancel {\color{black}(m-1)} \color{black}S_{Y}^{2}}{\sigma_{Y}^{2}} /\color{red}\cancel {\color{black}(m-1)}\color{black}\right]}=\frac{S_{X}^{2}}{S_{Y}^{2}} \cdot \frac{\sigma_{Y}^{2}}{\sigma_{X}^{2}}} \)

follows an F distribution with n −1 numerator degrees of freedom and m −1 denominator degrees of freedom. Therefore, if we're interested in testing the null hypothesis:

\(H_0 \colon \sigma^2_X=\sigma^2_Y\) (or equivalently \(H_0 \colon\dfrac{\sigma^2_Y}{\sigma^2_X}=1\))

against any of the alternative hypotheses:

\(H_A \colon \sigma^2_X \neq \sigma^2_Y,\quad H_A \colon \sigma^2_X >\sigma^2_Y,\text{ or }H_A \colon \sigma^2_X <\sigma^2_Y\)

we can use the test statistic:

\(F=\dfrac{S^2_X}{S^2_Y}\)

and follow the standard hypothesis testing procedures. When doing so, we might also want to recall this important fact about the F -distribution:

\(F_{1-(\alpha/2)}(n-1,m-1)=\dfrac{1}{F_{\alpha/2}(m-1,n-1)}\)

so that when we use the critical value approach for a two-sided alternative:

\(H_A \colon\sigma^2_X \neq \sigma^2_Y\)

we reject if the test statistic F is too large:

\(F \geq F_{\alpha/2}(n-1,m-1)\)

or if the test statistic F is too small:

\(F \leq F_{1-(\alpha/2)}(n-1,m-1)=\dfrac{1}{F_{\alpha/2}(m-1,n-1)}\)
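This reciprocal fact is easy to verify numerically (a Python/scipy sketch; the degrees of freedom 28 and 33 are taken from the upcoming example):

```python
# Numerical check of the F-distribution identity
#   F_{1-(alpha/2)}(n-1, m-1) = 1 / F_{alpha/2}(m-1, n-1)
# with alpha = 0.05, n - 1 = 28, and m - 1 = 33.
from scipy.stats import f

lower = f.ppf(0.025, 28, 33)          # lower alpha/2 quantile of F(28, 33)
upper_swapped = f.ppf(0.975, 33, 28)  # upper alpha/2 quantile of F(33, 28)

# the identity says lower == 1 / upper_swapped
```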

Okay, let's take a look at an example. In the last lesson, we performed a two-sample t -test (as well as Welch's test) to test whether the mean fastest speed driven by the population of male college students differs from the mean fastest speed driven by the population of female college students. When we performed the two-sample t -test, we just assumed the population variances were equal. Let's revisit that example again to see if our assumption of equal variances is valid.

Example 12-2

car driving fast

A psychologist was interested in exploring whether or not male and female college students have different driving behaviors. The particular statistical question she framed was as follows:

Is the mean fastest speed driven by male college students different than the mean fastest speed driven by female college students?

The psychologist conducted a survey of a random \(n = 34\) male college students and a random \(m = 29\) female college students. Here is a descriptive summary of the results of her survey:

Is there sufficient evidence at the \(\alpha = 0.05\) level to conclude that the variance of the fastest speed driven by male college students differs from the variance of the fastest speed driven by female college students?

We're interested in testing the null hypothesis:

\(H_0 \colon \sigma^2_X=\sigma^2_Y\)

against the alternative hypothesis:

\(H_A \colon \sigma^2_X \neq \sigma^2_Y\)

The value of the test statistic is:

\(F=\dfrac{12.2^2}{20.1^2}=0.368\)

(Note that I intentionally put the variance of what we're calling the Y sample in the numerator and the variance of what we're calling the X sample in the denominator. I did this only so that my results match the Minitab output we'll obtain on the next page. In doing so, we just need to make sure that we keep track of the correct numerator and denominator degrees of freedom.) Using the critical value approach , we divide the significance level \(\alpha = 0.05\) into 2, to get 0.025, and put one of the halves in the left tail, and the other half in the other tail. Doing so, we get that the lower cutoff value is 0.478 and the upper cutoff value is 2.0441:

Because the test statistic falls in the rejection region, that is, because \(F = 0.368 ≤ 0.478\), we reject the null hypothesis in favor of the alternative hypothesis. There is sufficient evidence at the \(\alpha = 0.05\) level to conclude that the population variances are not equal. Therefore, the assumption of equal variances that we made when performing the two-sample t -test on these data in the previous lesson does not appear to be valid. It would behoove us to use Welch's t -test instead.
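The F-test calculation can be cross-checked in Python with scipy's F distribution, following the same numerator/denominator choice as the text (female variance over male variance):

```python
# Two-variance F-test for the fastest-speed data:
# males:   n = 34, s_x = 20.1
# females: m = 29, s_y = 12.2
from scipy.stats import f

n, sx = 34, 20.1
m, sy = 29, 12.2

F = sy**2 / sx**2                # F = 0.368, with (m-1, n-1) = (28, 33) df
lo = f.ppf(0.025, m - 1, n - 1)  # lower cutoff, about 0.478
hi = f.ppf(0.975, m - 1, n - 1)  # upper cutoff, about 2.04
p = 2 * f.cdf(F, m - 1, n - 1)   # two-sided p-value (statistic in lower tail)
```

Since F = 0.368 falls below the lower cutoff, we reject the null hypothesis, in agreement with the analysis above.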

12.3 - Using Minitab

In each case, we'll illustrate how to perform the hypothesis tests of this lesson using summarized data.

Hypothesis Test for One Variance

Under the Stat menu, select Basic Statistics , and then select 1 Variance... :

In the pop-up window that appears, in the box labeled Data , select Sample standard deviation (or alternatively Sample variance ). In the box labeled Sample size , type in the size n of the sample. In the box labeled Sample standard deviation , type in the sample standard deviation. Click on the box labeled Perform hypothesis test , and in the box labeled Value , type in the Hypothesized standard deviation (or alternatively the Hypothesized variance ):

Then, click on OK to return to the main pop-up window.

Method      CI for StDev  CI for Variance
Chi-Square  (39.7, 62.3)  (1578, 3878)

Method      Test Statistic  DF  P-Value
Chi-Square  57.34           39  0.059

Hypothesis Test for Two Variances

Under the Stat menu, select Basic Statistics , and then select 2 Variances... :

In the pop-up window that appears, in the box labeled Data , select Sample standard deviations (or alternatively Sample variances ). In the box labeled Sample size , type in the size n of the First sample and m of the Second sample. In the box labeled Standard deviation , type in the sample standard deviations for the First and Second samples:

Click on the button labeled Options... In the pop-up window that appears, in the box labeled Value , type in the Hypothesized ratio of the standard deviations (or the Hypothesized ratio of the variances ). For the box labeled Alternative , select either less than , greater than , or not equal depending on the direction of the alternative hypothesis:

Test and CI for Two Variances

Null hypothesis          Sigma(1) / Sigma(2) = 1
Alternative hypothesis   Sigma(1) / Sigma(2) not = 1
Significance level       Alpha = 0.05

Sample N StDev Variance
1 29 12.200 148.840
2 34 20.100 404.010

Ratio of standard deviations = 0.607
Ratio of variances = 0.368

95% Confidence Intervals 

Distribution of Data  CI for StDev Ratio  CI for Variance Ratio
Normal                (0.425, 0.877)      (0.180, 0.770)

Method           DF1  DF2  Test Statistic  P-Value
F Test (normal)  28   33   0.37            0.009

Lesson 13: One-Factor Analysis of Variance

We previously learned how to compare two population means using either the pooled two-sample t -test or Welch's t -test. What happens if we want to compare more than two means? In this lesson, we'll learn how to do just that. More specifically, we'll learn how to use the analysis of variance method to compare the equality of the (unknown) means \(\mu_1 , \mu_2 , \dots, \mu_m\) of m normal distributions with an unknown but common variance \(\sigma^2\). Take specific note about that last part.... "an unknown but common variance \(\sigma^2\)." That is, the analysis of variance method assumes that the population variances are equal. In that regard, the analysis of variance method can be thought of as an extension of the pooled two-sample t -test.

13.1 - The Basic Idea

We could take a top-down approach by first presenting the theory of analysis of variance and then following it up with an example. We're not going to do it that way though. We're going to take a bottom-up approach, in which we first develop the idea behind the analysis of variance on this page, and then present the results on the next page. Only after we've completed those two steps will we take a step back and look at the theory behind analysis of variance. That said, let's start with our first example of the lesson.

Example 13-1

car tire

A researcher for an automobile safety institute was interested in determining whether or not the distance that it takes to stop a car going 60 miles per hour depends on the brand of the tire. The researcher measured the stopping distance (in feet) of ten randomly selected cars for each of five different brands. So that he and his assistants would remain blinded, the researcher arbitrarily labeled the brands of the tires as Brand1 , Brand2 , Brand3 , Brand4 , and Brand5 . Here are the data resulting from his experiment:

Brand1 Brand2 Brand3 Brand4 Brand5
194 189 185 183 195
184 204 183 193 197
189 190 186 184 194
189 190 183 186 202
188 189 179 194 200
186 207 191 199 211
195 203 188 196 203
186 193 196 188 206
183 181 189 193 202
188 206 194 196 195

Do the data provide enough evidence to conclude that at least one of the brands is different from the others with respect to stopping distance?

The first thing we might want to do is to create some sort of summary plot of the data. Here is a box plot of the data:

Hmmm. It appears that the box plots for Brand1 and Brand5 have very little, if any, overlap at all. The same can be said for Brand3 and Brand5. Here are some summary statistics of the data:

Brand N MEAN SD
1 10 188.20 3.88
2 10 195.20 9.02
3 10 187.40 5.27
4 10 191.20 5.55
5 10 200.50 5.44

It appears that the sample means differ quite a bit. For example, the average stopping distance of Brand3 is 187.4 feet (with a standard deviation of 5.27 feet), while the average stopping distance of Brand5 is 200.5 feet (with a standard deviation of 5.44 feet). A difference of 13 feet could mean the difference between getting into an accident or not. But, of course, we can't draw conclusions about the performance of the brands based on one sample. After all, a different random sample of cars could yield different results. Instead, we need to use the sample means to try to draw conclusions about the population means.

More specifically, the researcher needs to test the null hypothesis that the group population means are all the same against the alternative that at least one group population mean differs from the others. That is, the researcher needs to test this null hypothesis :

\(H_0 \colon \mu_1=\mu_2=\mu_3=\mu_4=\mu_5\)

against this alternative hypothesis :

\(H_A \colon \) at least one of the \(\mu_i\) differs from the others

In this lesson, we are going to learn how to use a method called analysis of variance to answer the researcher's question. Jumping right to the punch line, with no development or theoretical justification whatsoever, we'll use an analysis of variance table, such as this one:

Analysis of Variance
for comparing all 5 brands
Source  DF  SS      MS     F     P
Brand    4  1174.8  293.7  7.95  0.000
Error   45  1661.7   36.9
Total   49  2836.5

to draw conclusions about the equality of two or more population means. And, as we always do when performing hypothesis tests, we'll compare the P -value to \(\alpha\), our desired willingness to commit a Type I error. In this case, the researcher's P -value is very small (0.000, to three decimal places), so he should reject his null hypothesis. That is, there is sufficient evidence, at even a 0.01 level, to conclude that the mean stopping distance for at least one brand of tire is different than the mean stopping distances of the others.
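Although the theory comes later in the lesson, the analysis of variance table itself can be reproduced from the raw stopping distances with scipy's `f_oneway` (a Python cross-check, not part of the course's Minitab workflow):

```python
# One-way ANOVA on the tire stopping-distance data (each list is one
# brand's ten measured stopping distances, in feet).
from scipy.stats import f_oneway

brand1 = [194, 184, 189, 189, 188, 186, 195, 186, 183, 188]
brand2 = [189, 204, 190, 190, 189, 207, 203, 193, 181, 206]
brand3 = [185, 183, 186, 183, 179, 191, 188, 196, 189, 194]
brand4 = [183, 193, 184, 186, 194, 199, 196, 188, 193, 196]
brand5 = [195, 197, 194, 202, 200, 211, 203, 206, 202, 195]

F, p = f_oneway(brand1, brand2, brand3, brand4, brand5)
```

This gives F = 7.95 on (4, 45) degrees of freedom with a p-value well below 0.01, matching the table above.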

So far, we have seen a typical null and alternative hypothesis in the analysis of variance framework, as well as an analysis of variance table. Let's take a look at another example with the idea of continuing to work on developing the basic idea behind the analysis of variance method.

Example 13-2

studying via osmosis

Suppose an education researcher is interested in determining whether a learning method affects students' exam scores. Specifically, suppose she considers these three methods:

  • shock therapy

Suppose she convinces 15 students to take part in her study, so she randomly assigns 5 students to each method. Then, after waiting eight weeks, she tests the students to get exam scores.

What would the researcher's data have to look like to be able to conclude that at least one of the methods yields different exam scores than the others?

Suppose a dot plot of the researcher's data looked like this:

What would we want to conclude? Well, there's a lot of separation in the data between the three methods. In this case, there is little variation in the data within each method, but a lot of variation in the data across the three methods. For these data, we would probably be willing to conclude that there is a difference between the three methods.

Now, suppose instead that a dot plot of the researcher's data looked like this:

What would we want to conclude? Well, there's less separation in the data between the three methods. In this case, there is a lot of variation in the data within each method, and still some variation in the data across the three methods, but not as much as in the previous dot plot. For these data, it is not as obvious that we can conclude that there is a difference between the three methods.

Let's consider one more possible dot plot:

What would we want to conclude here? Well, there's even less separation in the data between the three methods. In this case, there is a great deal of variation in the data within each method, and not much variation at all in the data across the three methods. For these data, we would probably want to conclude that there is no difference between the three methods.

If you go back and look at the three possible data sets, you'll see that we drew our conclusions by comparing the variation in the data within a method to the variation in the data across methods. Let's try to formalize that idea a bit more by revisiting the two most extreme examples. First, the example in which we concluded that the methods differ:

Let's quantify (or are we still just qualifying?) the amount of variation within a method by comparing the five data points within a method to the method's mean, as represented in the plot as a color-coded triangle. And, let's quantify (or qualify?) the amount of variation across the methods by comparing the method means, again represented in the plot as a color-coded triangle, to the overall grand mean, that is, the average of all fifteen data points (ignoring the method). In this case, the variation between the group means and the grand mean is larger than the variation within the groups.

Now, let's revisit the example in which we wanted to conclude that there was no difference in the three methods:

In this case, the variation between the group means and the grand mean is smaller than the variation within the groups.

Hmmm... these two examples suggest that our method should compare the variation between the groups to that of the variation within the groups. That's just what an analysis of variance does!

Let's see what conclusion we draw from an analysis of variance of these data. Here's the analysis of variance table for the first study, in which we wanted to conclude that there was a difference in the three methods:

Source DF SS MS F P
Factor 2  2510.5 1255.3 93.44  0.000
Error 12 161.2  13.4    
Total 14  2671.7      

In this case, the P -value is small (0.000, to three decimal places). We can reject the null hypothesis of equal means at the 0.05 level. That is, there is sufficient evidence at the 0.05 level to conclude that the mean exam scores of the three study methods are significantly different.
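If you'd like to verify the entries of this table yourself, here is a minimal plain-Python sketch (Python is my choice here; the lesson itself uses Minitab) that computes the between and within sums of squares from the raw exam scores listed later on this page in the Minitab section. The group labels are mine.

```python
# One-way ANOVA "by hand" for the first learning study
# (scores as listed later in the Minitab section of this lesson).
groups = {
    "standard": [51, 45, 40, 41, 41],
    "osmosis":  [58, 68, 64, 63, 62],
    "shock":    [77, 72, 78, 73, 75],
}

n = sum(len(g) for g in groups.values())          # 15 data points
m = len(groups)                                   # 3 methods
grand_mean = sum(x for g in groups.values() for x in g) / n

# SS(Between): variation of the group means around the grand mean
ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                 for g in groups.values())

# SS(Error): variation of the data around their own group means
ss_error = sum((x - sum(g) / len(g)) ** 2
               for g in groups.values() for x in g)

msb = ss_between / (m - 1)   # 2510.5 / 2  = 1255.3
mse = ss_error / (n - m)     # 161.2 / 12 = 13.4
f_stat = msb / mse           # 93.44
```

Running this reproduces the Factor and Error rows of the table above, and the large F-statistic reflects the visual impression that the between-method variation dwarfs the within-method variation.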

Here's the analysis of variance table for the third study, in which we wanted to conclude that there was no difference in the three methods:

Source DF SS MS F P
Factor 2  80.1 40.1 0.46  0.643
Error 12  1050.8 87.6    
Total 14 1130.9      

In this case, the P -value, 0.643, is large. We fail to reject the null hypothesis of equal means at the 0.05 level. That is, there is insufficient evidence at the 0.05 level to conclude that the mean exam scores of the three study methods are significantly different.

Hmmm. It seems like we're on to something! Let's summarize.

The Basic Idea Behind Analysis of Variance

Analysis of variance involves dividing the overall variability in observed data values so that we can draw conclusions about the equality, or lack thereof, of the means of the populations from which the data came. The overall (or " total ") variability is divided into two components:

  • the variability " between " groups
  • the variability " within " groups

We summarize the division of the variability in an " analysis of variance table ", which is often shortened and called an " ANOVA table ." Without knowing what we were really looking at, we looked at a few examples of ANOVA tables here on this page. Let's now go take an in-depth look at the content of ANOVA tables.

13.2 - The ANOVA Table

For the sake of concreteness here, let's recall one of the analysis of variance tables from the previous page:

One-way Analysis of Variance
Source DF SS MS F P
Factor 2  2510.5 1255.3 93.44  0.000
Error 12 161.2  13.4    
Total 14  2671.7      

In working to digest what is all contained in an ANOVA table, let's start with the column headings:

  • Source means "the source of the variation in the data." As we'll soon see, the possible choices for a one-factor study, such as the learning study, are Factor , Error , and Total . The factor is the characteristic that defines the populations being compared. In the tire study, the factor is the brand of tire. In the learning study, the factor is the learning method.
  • DF means "the degrees of freedom in the source."
  • SS means "the sum of squares due to the source."
  • MS means "the mean sum of squares due to the source."
  • F means "the F -statistic."
  • P means "the P -value."

Now, let's consider the row headings:

  • Factor means "the variability between the groups" or "the variability due to the factor of interest." Sometimes, the factor is a treatment, and therefore the row heading is instead labeled as Treatment . And, sometimes the row heading is labeled as Between to make it clear that the row concerns the variation between the groups.

  • Error means "the variability within the groups" or "unexplained random error." Sometimes, the row heading is labeled as Within to make it clear that the row concerns the variation within the groups.
  • Total means "the total variation in the data from the grand mean" (that is, ignoring the factor of interest).

With the column headings and row headings now defined, let's take a look at the individual entries inside a general one-factor ANOVA table:

Hover over the lightbulb for further explanation.

One-way Analysis of Variance
Source DF SS MS F P
Factor  m−1 SS(Between) MSB  MSB/MSE  P-value
Error  n−m SS(Error) MSE    
Total  n−1 SS(Total)      

Yikes, that looks overwhelming! Let's work our way through it entry by entry to see if we can make it all clear. Let's start with the degrees of freedom ( DF ) column:

  • If there are n total data points collected, then there are n −1 total degrees of freedom.
  • If there are m groups being compared, then there are m −1 degrees of freedom associated with the factor of interest.
  • If there are n total data points collected and m groups being compared, then there are n − m error degrees of freedom.

Now, the sums of squares ( SS ) column:

  • As we'll soon formalize below, SS(Between) is the sum of squares between the group means and the grand mean. As the name suggests, it quantifies the variability between the groups of interest.
  • Again, as we'll formalize below, SS(Error) is the sum of squares between the data and the group means. It quantifies the variability within the groups of interest.
  • SS(Total) is the sum of squares between the n data points and the grand mean. As the name suggests, it quantifies the total variability in the observed data. We'll soon see that the total sum of squares is obtained by adding the between sum of squares to the error sum of squares. That is:

SS(Total) = SS(Between) + SS(Error)

The mean squares ( MS ) column, as the name suggests, contains the "average" sum of squares for the Factor and the Error:

  • The Mean Sum of Squares between the groups, denoted MSB , is calculated by dividing the Sum of Squares between the groups by the between group degrees of freedom. That is, MSB = SS(Between)/( m −1) .
  • The Error Mean Sum of Squares, denoted MSE , is calculated by dividing the Sum of Squares within the groups by the error degrees of freedom. That is, MSE = SS(Error)/( n − m ) .

The F column, not surprisingly, contains the F -statistic. Because we want to compare the "average" variability between the groups to the "average" variability within the groups, we take the ratio of the Between Mean Sum of Squares to the Error Mean Sum of Squares. That is, the F -statistic is calculated as F = MSB/MSE .

When, on the next page, we delve into the theory behind the analysis of variance method, we'll see that the F -statistic follows an F -distribution with m −1 numerator degrees of freedom and n − m denominator degrees of freedom. Therefore, we'll calculate the P -value, as it appears in the column labeled P , by comparing the F -statistic to an F -distribution with m −1 numerator degrees of freedom and n − m denominator degrees of freedom.
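To make that last step concrete, here is a one-line check of the P-value computation, assuming SciPy is available (the lesson itself relies on Minitab for this):

```python
# P-value for the learning study: P(F(2, 12) >= 93.44), computed with
# the F survival function. Assumes SciPy is available.
from scipy.stats import f

f_stat = 93.44
p_value = f.sf(f_stat, 2, 12)   # survival function = P(F >= 93.44)
# p_value is far below 0.001, matching the table's "0.000"
```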

Now, having defined the individual entries of a general ANOVA table, let's revisit and, in the process, dissect the ANOVA table for the first learning study on the previous page, in which n = 15 students were subjected to one of m = 3 methods of learning:

One-way Analysis of Variance
Source DF SS MS F P
Factor  2   2510.5 1255.3 93.44  0.000
Error  12  161.2  13.4    
Total 14  2671.7      
  • Because n = 15, there are n −1 = 15−1 = 14 total degrees of freedom.
  • Because m = 3, there are m −1 = 3−1 = 2 degrees of freedom associated with the factor.
  • The degrees of freedom add up, so we can get the error degrees of freedom by subtracting the degrees of freedom associated with the factor from the total degrees of freedom. That is, the error degrees of freedom is 14−2 = 12. Alternatively, we can calculate the error degrees of freedom directly from n − m = 15−3=12.

  • The sums of squares add up: SS(Total) = SS(Between) + SS(Error). That is, 2671.7 = 2510.5 + 161.2.

  • MSB is SS(Between) divided by the between group degrees of freedom. That is, 1255.3 = 2510.5 ÷ 2.
  • MSE is SS(Error) divided by the error degrees of freedom. That is, 13.4 = 161.2 ÷ 12.
  • The F -statistic is the ratio of MSB to MSE. That is, F = 1255.3 ÷ 13.4 = 93.44.
  • The P -value is P ( F (2,12) ≥ 93.44) < 0.001.

Okay, we slowly, but surely, keep on adding bit by bit to our knowledge of an analysis of variance table. Let's now work a bit on the sums of squares.

The Sums of Squares

In essence, we now know that we want to break down the TOTAL variation in the data into two components:

  • a component that is due to the TREATMENT (or FACTOR), and
  • a component that is due to just RANDOM ERROR.

Let's see what kind of formulas we can come up with for quantifying these components. But first, as always, we need to define some notation. Let's represent our data, the group means, and the grand mean as follows:

Group Data Means
1 \(X_{11}, X_{12}, \ldots, X_{1n_1}\) \(\bar{X}_{1.}\)
2 \(X_{21}, X_{22}, \ldots, X_{2n_2}\) \(\bar{X}_{2.}\)
⋮ ⋮ ⋮
\(m\) \(X_{m1}, X_{m2}, \ldots, X_{mn_m}\) \(\bar{X}_{m.}\)
Grand Mean: \(\bar{X}_{..}\)

That is, we'll let:

  • m denote the number of groups being compared
  • \(X_{ij}\) denote the \(j^{th}\) observation in the \(i^{th}\) group, where \(i = 1, 2, \dots , m\) and \(j = 1, 2, \dots, n_i\). The important thing to note here is that j goes from 1 to \(n_i\), not to \(n\). That is, the number of data points in a group depends on the group i . That means that the number of data points in each group need not be the same. We could have 5 measurements in one group, and 6 measurements in another.
  • \(\bar{X}_{i.}=\dfrac{1}{n_i}\sum\limits_{j=1}^{n_i} X_{ij}\) denote the sample mean of the observed data for group i , where \(i = 1, 2, \dots , m\)
  • \(\bar{X}_{..}=\dfrac{1}{n}\sum\limits_{i=1}^{m}\sum\limits_{j=1}^{n_i} X_{ij}\) denote the grand mean of all n observed data points

Okay, with the notation now defined, let's first consider the total sum of squares , which we'll denote here as SS ( TO ) . Because we want the total sum of squares to quantify the variation in the data regardless of its source, it makes sense that SS ( TO ) would be the sum of the squared distances of the observations \(X_{ij}\) to the grand mean \(\bar{X}_{..}\). That is:

\(SS(TO)=\sum\limits_{i=1}^{m}\sum\limits_{j=1}^{n_i} (X_{ij}-\bar{X}_{..})^2\)

With just a little bit of algebraic work, the total sum of squares can be alternatively calculated as:

\(SS(TO)=\sum\limits_{i=1}^{m}\sum\limits_{j=1}^{n_i} X^2_{ij}-n\bar{X}_{..}^2\)

Can you do the algebra?

Now, let's consider the treatment sum of squares , which we'll denote SS ( T ) . Because we want the treatment sum of squares to quantify the variation between the treatment groups, it makes sense that SS ( T ) would be the sum of the squared distances of the treatment means \(\bar{X}_{i.}\) to the grand mean \(\bar{X}_{..}\). That is:

\(SS(T)=\sum\limits_{i=1}^{m}\sum\limits_{j=1}^{n_i} (\bar{X}_{i.}-\bar{X}_{..})^2\)

Again, with just a little bit of algebraic work, the treatment sum of squares can be alternatively calculated as:

\(SS(T)=\sum\limits_{i=1}^{m}n_i\bar{X}^2_{i.}-n\bar{X}_{..}^2\)

Finally, let's consider the error sum of squares , which we'll denote SS ( E ). Because we want the error sum of squares to quantify the variation in the data, not otherwise explained by the treatment, it makes sense that SS ( E ) would be the sum of the squared distances of the observations \(X_{ij}\) to the treatment means \(\bar{X}_{i.}\). That is:

\(SS(E)=\sum\limits_{i=1}^{m}\sum\limits_{j=1}^{n_i} (X_{ij}-\bar{X}_{i.})^2\)

As we'll see in just one short minute, the easiest way to calculate the error sum of squares is by subtracting the treatment sum of squares from the total sum of squares. That is:

\(SS(E)=SS(TO)-SS(T)\)
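Both shortcut formulas, and the subtraction rule, are easy to check numerically. The sketch below is plain Python using the learning-study scores; none of it is part of the lesson's required Minitab workflow:

```python
# Numerically check SS(TO) = SS(T) + SS(E) and the algebraic shortcut
# formulas, using the learning-study exam scores from this lesson.
data = {1: [51, 45, 40, 41, 41],
        2: [58, 68, 64, 63, 62],
        3: [77, 72, 78, 73, 75]}
n = sum(len(g) for g in data.values())
xbar_dd = sum(x for g in data.values() for x in g) / n    # grand mean

# definitional forms
ss_to = sum((x - xbar_dd) ** 2 for g in data.values() for x in g)
ss_t = sum(len(g) * (sum(g) / len(g) - xbar_dd) ** 2 for g in data.values())
ss_e = sum((x - sum(g) / len(g)) ** 2 for g in data.values() for x in g)

# shortcut forms
ss_to_short = sum(x ** 2 for g in data.values() for x in g) - n * xbar_dd ** 2
ss_t_short = (sum(len(g) * (sum(g) / len(g)) ** 2 for g in data.values())
              - n * xbar_dd ** 2)

assert abs(ss_to - (ss_t + ss_e)) < 1e-6    # the decomposition holds
assert abs(ss_to - ss_to_short) < 1e-6      # SS(TO) shortcut agrees
assert abs(ss_t - ss_t_short) < 1e-6        # SS(T) shortcut agrees
```

For these data, ss_to comes out to 2671.7, matching the Total row of the ANOVA table earlier in the lesson.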

Okay, now, do you remember that part about wanting to break down the total variation SS ( TO ) into a component due to the treatment SS ( T ) and a component due to random error SS ( E )? Well, some simple algebra leads us to this:

\(SS(TO)=SS(T)+SS(E)\)

which is why the error sum of squares can be calculated so simply. At any rate, here's the simple algebra:

Well, okay, so the proof does involve a little trick of adding 0 in a special way to the total sum of squares:

\(SS(TO) = \sum\limits_{i=1}^{m}  \sum\limits_{j=1}^{n_{i}}((X_{ij}-\color{red}\overbrace{\color{black}\bar{X}_{i\cdot})+(\bar{X}_{i\cdot}}^{\text{Add to 0}}\color{black}-\bar{X}_{..}))^{2}\)

Then, squaring the term in parentheses, as well as distributing the summation signs, we get:

\(SS(TO)=\sum\limits_{i=1}^{m}\sum\limits_{j=1}^{n_i} (X_{ij}-\bar{X}_{i.})^2+2\sum\limits_{i=1}^{m}\sum\limits_{j=1}^{n_i} (X_{ij}-\bar{X}_{i.})(\bar{X}_{i.}-\bar{X}_{..})+\sum\limits_{i=1}^{m}\sum\limits_{j=1}^{n_i} (\bar{X}_{i.}-\bar{X}_{..})^2\)

Now, it's just a matter of recognizing each of the terms:

\(SS(TO)= \color{red}\overbrace{\color{black}\sum\limits_{i=1}^{m} \sum\limits_{j=1}^{n_{i}}\left(X_{ij}-\bar{X}_{i \cdot}\right)^{2}}^{\text{SSE}} \color{black}+2 \color{red}\overbrace{\color{black}\sum\limits_{i=1}^{m} \sum\limits_{j=1}^{n_{i}}\left(X_{ij}-\bar{X}_{i \cdot}\right)\left(\bar{X}_{i \cdot}-\bar{X}_{..}\right)}^{\text{0}} \color{black}+ \color{red}\overbrace{\color{black}\sum\limits_{i=1}^{m} \sum\limits_{j=1}^{n_{i}}\left(\bar{X}_{i \cdot}-\bar{X}_{..}\right)^{2}}^{\text{SST}}\)

That is, because the middle (cross-product) term sums to zero, we've shown that:

\(SS(TO)=SS(E)+SS(T)\)

13.3 - Theoretical Results

So far, in an attempt to understand the analysis of variance method conceptually, we've been waving our hands at the theory behind the method. We can't procrastinate any further... we now need to address some of the theory behind the method. Specifically, we need to address the distribution of the error sum of squares ( SSE ), the distribution of the treatment sum of squares ( SST ), and the distribution of the all-important F -statistic.

The Error Sum of Squares (SSE)

Recall that the error sum of squares:

\(SSE=\sum\limits_{i=1}^{m}\sum\limits_{j=1}^{n_i} (X_{ij}-\bar{X}_{i.})^2\)

quantifies the error remaining after explaining some of the variation in the observations \(X_{ij}\) by the treatment means. Let's see what we can say about SSE . Well, the following theorem enlightens us as to the distribution of the error sum of squares.

If:

  • the \(j^{th}\) measurement of the \(i^{th}\) group, that is, \(X_{ij}\), is an independently and normally distributed random variable with mean \(\mu_i\) and variance \(\sigma^2\)
  • and \(W^2_i=\dfrac{1}{n_i-1}\sum\limits_{j=1}^{n_i} (X_{ij}-\bar{X}_{i.})^2\) is the sample variance of the \(i^{th}\) sample

then:

\(\dfrac{SSE}{\sigma^2}\)

follows a chi-square distribution with n−m degrees of freedom.

A theorem we learned (way) back in Stat 414 tells us that if the two conditions stated in the theorem hold, then:

\(\dfrac{(n_i-1)W^2_i}{\sigma^2}\)

follows a chi-square distribution with \(n_{i}−1\) degrees of freedom. Another theorem we learned back in Stat 414 states that if we add up a bunch of independent chi-square random variables, then we get a chi-square random variable with the degrees of freedom added up, too. So, let's add up the above quantity for all n data points, that is, for \(j = 1\) to \(n_i\) and \(i = 1\) to m . Doing so, we get:

\(\sum\limits_{i=1}^{m}\dfrac{(n_i-1)W^2_i}{\sigma^2}=\dfrac{\sum\limits_{i=1}^{m}\sum\limits_{j=1}^{n_i} (X_{ij}-\bar{X}_{i.})^2}{\sigma^2}=\dfrac{SSE}{\sigma^2}\)

Because we assume independence of the observations \(X_{ij}\), we are adding up independent chi-square random variables. (By the way, the assumption of independence is a perfectly fine assumption as long as we take a random sample when we collect the data.) Therefore, the theorem tells us that \(\dfrac{SSE}{\sigma^2}\) follows a chi-square random variable with:

\((n_1-1)+(n_2-1)+\cdots+(n_m-1)=n-m\)

degrees of freedom... as was to be proved.

Now, what can we say about the mean square error MSE ? Well, one thing is... MSE is an unbiased estimator of \(\sigma^2\).

Recall that to show that MSE is an unbiased estimator of \(\sigma^2\), we need to show that \(E(MSE) = \sigma^2\). Also, recall that the expected value of a chi-square random variable is its degrees of freedom. The results of the previous theorem, therefore, suggest that:

\(E\left[ \dfrac{SSE}{\sigma^2}\right]=n-m\)

That said, here's the crux of the proof:

\(E[MSE]=E\left[\dfrac{SSE}{n-m} \right]=E\left[\dfrac{\sigma^2}{n-m} \cdot \dfrac{SSE}{\sigma^2} \right]=\dfrac{\sigma^2}{n-m}(n-m)=\sigma^2\)

The first equality comes from the definition of MSE . The second equality comes from multiplying MSE by 1 in a special way. The third equality comes from taking the expected value of \(\dfrac{SSE}{\sigma^2}\). And, the fourth and final equality comes from simple algebra.

Because \(E(MSE) = \sigma^2\), we have shown that, no matter what, MSE is an unbiased estimator of \(\sigma^2\)... always!
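A quick Monte Carlo experiment illustrates that "always": even with wildly unequal group means, the average MSE over many simulated data sets settles at \(\sigma^2\). The means, sigma, and sample sizes below are hypothetical choices of mine:

```python
# Monte Carlo check that E(MSE) = sigma^2 even when the group means differ.
import random

random.seed(0)
sigma = 2.0                      # so sigma^2 = 4
means = [10.0, 20.0, 30.0]       # deliberately unequal group means
n_i, reps = 5, 5000
m = len(means)
n = n_i * m

running_total = 0.0
for _ in range(reps):
    sse = 0.0
    for mu in means:
        xs = [random.gauss(mu, sigma) for _ in range(n_i)]
        xbar = sum(xs) / n_i
        sse += sum((x - xbar) ** 2 for x in xs)
    running_total += sse / (n - m)    # this data set's MSE

average_mse = running_total / reps    # settles close to sigma^2 = 4
```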

The Treatment Sum of Squares (SST)

Recall that the treatment sum of squares:

\(SS(T)=\sum\limits_{i=1}^{m}\sum\limits_{j=1}^{n_i}(\bar{X}_{i.}-\bar{X}_{..})^2\)

quantifies the distance of the treatment means from the grand mean. We'll just state the distribution of SST without proof.

If the null hypothesis:

\(H_0: \text{all }\mu_i \text{ are equal}\)

is true, then:

\(\dfrac{SST}{\sigma^2}\)

follows a chi-square distribution with m −1 degrees of freedom.

When we investigated the mean square error MSE above, we were able to conclude that MSE was always an unbiased estimator of \(\sigma^2\). Can the same be said for the mean square due to treatment MST = SST/ ( m− 1)? Well...

The mean square due to treatment is an unbiased estimator of \(\sigma^2\) only if the null hypothesis is true, that is, only if the m population means are equal.

Since MST is a function of the sum of squares due to treatment SST , let's start with finding the expected value of SST . We learned, on the previous page, that the definition of SST can be written as:

\(SST=\sum\limits_{i=1}^{m}n_i\bar{X}^2_{i.}-n\bar{X}_{..}^2\)

Therefore, the expected value of SST is:

\(E(SST)=E\left[\sum\limits_{i=1}^{m}n_i\bar{X}^2_{i.}-n\bar{X}_{..}^2\right]=\left[\sum\limits_{i=1}^{m}n_iE(\bar{X}^2_{i.})\right]-nE(\bar{X}_{..}^2)\)

Now, because, in general, \(E(X^2)=Var(X)+\mu^2\), we can do some substituting into that last equation, which simplifies to:

\(E(SST)=\left[\sum\limits_{i=1}^{m}n_i\left(\dfrac{\sigma^2}{n_i}+\mu_i^2\right)\right]-n\left[\dfrac{\sigma^2}{n}+\bar{\mu}^2\right]\)

where:

\(\bar{\mu}=\dfrac{1}{n}\sum\limits_{i=1}^{m}n_i \mu_i\)

is, in fact, the expected value of the grand mean, since:

\(E(\bar{X}_{..})=\dfrac{1}{n}\sum\limits_{i=1}^{m}\sum\limits_{j=1}^{n_i} E(X_{ij})=\dfrac{1}{n}\sum\limits_{i=1}^{m}\sum\limits_{j=1}^{n_i} \mu_i=\dfrac{1}{n}\sum\limits_{i=1}^{m}n_i \mu_i=\bar{\mu}\)

Simplifying our expectation yet more, we get:

\(E(SST)=\left[\sum\limits_{i=1}^{m}\sigma^2\right]+\left[\sum\limits_{i=1}^{m}n_i\mu^2_i\right]-\sigma^2-n\bar{\mu}^2\)

And, simplifying yet again, we get:

\(E(SST)=\sigma^2(m-1)+\left[\sum\limits_{i=1}^{m}n_i(\mu_i-\bar{\mu})^2\right]\)

Okay, so we've simplified E ( SST ) as far as is probably necessary. Let's use it now to find E ( MST ).

Well, if the null hypothesis is true, \(\mu_1=\mu_2=\cdots=\mu_m=\bar{\mu}\), say, the expected value of the mean square due to treatment is:

\(E[M S T]=E\left[\frac{S S T}{m-1}\right]=\sigma^{2}+\frac{1}{m-1} \color{red}\overbrace{\color{black}\sum\limits_{i=1}^{m} n_{i}\left(\mu_{i}-\bar{\mu}\right)^{2}}^0 \color{black}=\sigma^{2}\)

On the other hand, if the null hypothesis is not true, that is, if not all of the \(\mu_i\) are equal, then:

\(E(MST)=E\left[\dfrac{SST}{m-1}\right]=\sigma^2+\dfrac{1}{m-1}\sum\limits_{i=1}^{m} n_i(\mu_i-\bar{\mu})^2>\sigma^2\)

So, in summary, we have shown that MST is an unbiased estimator of \(\sigma^2\) if the null hypothesis is true, that is, if all of the means are equal. On the other hand, we have shown that, if the null hypothesis is not true, that is, if all of the means are not equal, then MST is a biased estimator of \(\sigma^2\) because E ( MST ) is inflated above \(\sigma^2\). Our proof is complete.

Our work on finding the expected values of MST and MSE suggests a reasonable statistic for testing the null hypothesis:

\(H_0: \mu_1=\mu_2=\cdots=\mu_m\)

against the alternative hypothesis:

\(H_A: \text{at least one of the }\mu_i \text{ differs from the others}\)

namely:

\(F=\dfrac{MST}{MSE}\)

Now, why would this F be a reasonable statistic? Well, we showed above that \(E(MSE) = \sigma^2\). We also showed that under the null hypothesis, when the means are assumed to be equal, \(E(MST) = \sigma^2\), and under the alternative hypothesis when the means are not all equal, E ( MST ) is inflated above \(\sigma^2\). That suggests then that:

If the null hypothesis is true, that is, if all of the population means are equal, we'd expect the ratio MST / MSE to be close to 1.

If the alternative hypothesis is true, that is, if at least one of the population means differs from the others, we'd expect the ratio MST / MSE to be inflated above 1.
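A small simulation makes these two expectations tangible. All parameter choices below (means, sigma, group sizes, repetition count) are hypothetical:

```python
# Simulate the average F = MST/MSE ratio under equal means (the null)
# and under unequal means (an alternative).
import random

random.seed(1)

def mean_f_ratio(means, sigma=3.0, n_i=5, reps=3000):
    """Average MST/MSE over many simulated data sets."""
    m = len(means)
    n = m * n_i
    total = 0.0
    for _ in range(reps):
        samples = [[random.gauss(mu, sigma) for _ in range(n_i)]
                   for mu in means]
        grand = sum(x for g in samples for x in g) / n
        sst = sum(n_i * (sum(g) / n_i - grand) ** 2 for g in samples)
        sse = sum((x - sum(g) / n_i) ** 2 for g in samples for x in g)
        total += (sst / (m - 1)) / (sse / (n - m))
    return total / reps

f_null = mean_f_ratio([50.0, 50.0, 50.0])  # close to 1 (the exact mean of
                                           # an F(2,12) is 12/10 = 1.2)
f_alt = mean_f_ratio([45.0, 50.0, 55.0])   # inflated well above 1
```

Under the null the average ratio hovers near 1, while under the alternative it is many times larger, which is exactly the behavior the F-test exploits.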

Now, just two questions remain:

  • Why do you suppose we call MST / MSE an F -statistic?
  • And, how inflated would MST / MSE have to be in order to reject the null hypothesis in favor of the alternative hypothesis?

Both of these questions are answered by knowing the distribution of MST / MSE .

The F-statistic

If \(X_{ij} \sim N(\mu, \sigma^2)\), that is, if all of the group means are equal, then:

\(F=\dfrac{MST}{MSE}\)

follows an F distribution with m −1 numerator degrees of freedom and n − m denominator degrees of freedom.

It can be shown (we won't) that SST and SSE are independent. Then, it's just a matter of recalling that an F random variable is defined to be the ratio of two independent chi-square random variables. That is:

\(F=\dfrac{SST/(m-1)}{SSE/(n-m)}=\dfrac{MST}{MSE} \sim F(m-1,n-m)\)

Now this all suggests that we should reject the null hypothesis of equal population means:

if \(F\geq F_{\alpha}(m-1,n-m)\) or if \(P=P(F(m-1,n-m)\geq F)\leq \alpha\)
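In code, the decision rule for the first learning study looks like this, assuming SciPy is available:

```python
# Decision rule for the learning study at alpha = 0.05: reject H0 when
# F >= the upper critical value of an F(2, 12) distribution.
from scipy.stats import f

alpha = 0.05
crit = f.ppf(1 - alpha, 2, 12)   # upper 5% point, roughly 3.89
reject = 93.44 >= crit           # the observed F is far beyond it
```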

If you go back and look at the assumptions that we made in deriving the analysis of variance F -test, you'll see that the F -test for the equality of means depends on three assumptions about the data:

  • independence
  • normality
  • equal group variances

That means that you'll want to use the F -test only if there is evidence to believe that the assumptions are met. That said, as is the case with the two-sample t -test, the F -test works quite well even if the underlying measurements are not normally distributed unless the data are highly skewed or the variances are markedly different. If the data are highly skewed, or if there is evidence that the variances differ greatly, we have two analysis options at our disposal. We could attempt to transform the observations (take the natural log of each value, for example) to make the data more symmetric with more similar variances. Alternatively, we could use nonparametric methods (that are unfortunately not covered in this course).

13.4 - Another Example

Example 13-3.


A researcher was interested in investigating whether Holocaust survivors have more sleep problems than others. She evaluated \(n = 120\) subjects in total, a subset of them were Holocaust survivors, a subset of them were documented as being depressed, and another subset of them were deemed healthy. (Of course, it's not at all obvious that these are mutually exclusive groups.) At any rate, all n = 120 subjects completed a questionnaire about the quality and duration of their regular sleep patterns. As a result of the questionnaire, each subject was assigned a Pittsburgh Sleep Quality Index (PSQI). Here's a dot plot of the resulting data:

Is there sufficient evidence at the \(\alpha = 0.05\) level to conclude that the mean PSQI differs among the three groups?

We can use Minitab to obtain the analysis of variance table. Doing so, we get:

Source DF SS   MS F P
Factor  2   1723.8 861.9 61.69 0.000
Error  117  1634.8 14.0    
Total 119 3358.6      

Since P < 0.001 ≤ 0.05, we reject the null hypothesis of equal means in favor of the alternative hypothesis of unequal means. There is sufficient evidence at the 0.05 level to conclude that the mean Pittsburgh Sleep Quality Index differs among the three groups.
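As a sanity check, the MS, F, and P entries of this table follow directly from the DF and SS columns; here is a sketch assuming SciPy is available:

```python
# Rebuild the MS, F, and P entries of the sleep-study ANOVA table
# from its DF and SS columns.
from scipy.stats import f

msb = 1723.8 / 2      # 861.9, as in the table
mse = 1634.8 / 117    # about 14.0
f_stat = msb / mse    # about 61.7; Minitab reports 61.69
p_value = f.sf(f_stat, 2, 117)   # far below 0.001
```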

Minitab ®


There is no doubt that you'll want to use Minitab when performing an analysis of variance. The commands necessary to perform a one-factor analysis of variance in Minitab depend on whether the data in your worksheet are "stacked" or "unstacked." Let's illustrate using the learning method study data. Here's what the data would look like unstacked:

std1 osm1 shk1
51 58 77
45 68 72
40 64 78
41 63 73
41 62 75

That is, the data from each group resides in a different column in the worksheet. If your data are entered in this way, then follow these instructions for performing the one-factor analysis of variance:

  • Under the Stat menu, select ANOVA .
  • Select One-Way (Unstacked) .
  • In the box labeled Responses , specify the columns containing the data.
  • If you want dot plots and/or boxplots of the data, select Graphs...
  • Select OK .
  • The output should appear in the Session Window.

Here's what the data would look like stacked:

Method Score
1 51
1 45
1 40
1 41
1 41
2 58
2 68
2 64
2 63
2 62
3 77
3 72
3 78
3 73
3 75

That is, one column contains a grouping variable, and another column contains the responses. If your data are entered in this way, then follow these instructions for performing the one-factor analysis of variance:

  • Under the Stat menu, select ANOVA .
  • Select One-Way .
  • In the box labeled Response , specify the column containing the responses.
  • In the box labeled Factor , specify the column containing the grouping variable.
  • Select OK . The output should appear in the Session Window.
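If you're curious what the stacking operation itself does, here is a minimal plain-Python sketch (not a Minitab command) that converts the unstacked columns shown above into stacked (method, score) pairs:

```python
# Convert the "unstacked" worksheet (one column per method) to the
# "stacked" layout (one grouping value and one response per row).
unstacked = {
    "std1": [51, 45, 40, 41, 41],
    "osm1": [58, 68, 64, 63, 62],
    "shk1": [77, 72, 78, 73, 75],
}

stacked = [(method_number, score)
           for method_number, column in enumerate(unstacked.values(), start=1)
           for score in column]

print(stacked[:3])   # [(1, 51), (1, 45), (1, 40)]
```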

Lesson 14: Two-Factor Analysis of Variance

In the previous lesson, we learned how to conduct an analysis of variance in an attempt to learn whether a (that's one!) factor played a role in the observed responses. For example, we investigated whether the learning method (the factor) influenced a student's exam score (the response). We also investigated whether tire brand (the factor) influenced a car's stopping distance (the response).

What happens if we're not interested in whether one factor is associated with the observed responses, but whether two or three or more factors are associated with the observed responses? For example, we might be interested in learning whether smoking history (one factor) and type of stress test (a second factor) are associated with the time until maximum oxygen uptake (the response). That's the kind of data that we'll learn to analyze in this lesson. Specifically, we'll learn how to conduct a two-factor analysis of variance, so that we can test whether either of the two factors or their interaction are associated with some continuous response.

The reality is this online lesson only contains an example of a two-factor analysis of variance. For the theoretical development, you are asked to refer to the textbook chapter on Two-Factor Analysis of Variance. Pedagogically, it is material that lends itself well to getting practice at learning a new statistical method solely from the formal presentation of a statistical textbook.

14.1 - An Example

Example 14-1.

A physiologist was interested in learning whether smoking history and different types of stress tests influence the timing of a subject's maximum oxygen uptake, as measured in minutes. The researcher classified a subject's smoking history as either heavy smoking, moderate smoking, or non-smoking. He was interested in seeing the effects of three different types of stress tests — a test performed on a bicycle, a test on a treadmill, and a test on steps. The physiologist recruited 9 non-smokers, 9 moderate smokers, and 9 heavy smokers to participate in his experiment, for a total of n = 27 subjects. He then randomly assigned each of his recruited subjects to undergo one of the three types of stress tests. Here is his resulting data:

Smoking History Bicycle (1) Treadmill (2) Step Test (3)
 Nonsmoker (1) 12.8, 13.5, 11.2 16.2, 18.1, 17.8 22.6, 19.3, 18.9
 Moderate (2) 10.9, 11.1, 9.8 15.5, 13.8, 16.2 20.1, 21.0, 15.9
 Heavy (3) 8.7, 9.2, 7.5 14.7, 13.2, 8.1 16.2, 16.1, 17.8

Is there sufficient evidence at the \(\alpha = 0.05\) level to conclude that smoking history has an effect on the time to maximum oxygen uptake? Is there sufficient evidence at the \(\alpha = 0.05\) level to conclude that the type of stress test has an effect on the time to maximum oxygen uptake? And, is there evidence of an interaction between smoking history and the type of stress test?

Let's start by stating our analysis of variance model, as well as any assumptions that we'll make. Let \(X_{ijk}\) denote the time, in minutes, until maximum oxygen uptake for smoking history \(i = 1, 2, 3\), type of test \(j = 1, 2, 3\), and replicate \(k = 1, 2, 3\). So, for example, \(X_{111} = 12.8 , X_{112} = 13.5\), and so on. Let's assume the \(X_{ijk}\) are mutually independent normal random variables with common variance \(\sigma^2\) and mean:

\(\mu_{ij}=\mu+\alpha_i+\beta_j+\gamma_{ij}\)

subject to the following constraints:

\(\sum\limits_{i=1}^a \alpha_i=0\), \(\sum\limits_{j=1}^b \beta_j=0\), \(\sum\limits_{i=1}^a \gamma_{ij}=0\), and \(\sum\limits_{j=1}^b \gamma_{ij}=0\)

In that case, testing whether or not there is an interaction between smoking history and the type of stress test involves testing the null hypothesis:

\(H_0:\gamma_{ij}=0 \text{ for } i=1,2,3 \text{ and } j=1,2,3\)

against all of the possible alternatives. We'll definitely want to engage Minitab in conducting the necessary analysis of variance! To do so, we first enter the data into a Minitab worksheet in a stacked manner , that is, with one column for the response and one column for each factor. We then do the following:

Under the Stat menu, we select ANOVA , and then Balanced ANOVA... (our data are "balanced" because every cell contains the same number of measurements, 3).

In the pop-up window that appears, we specify the Response and the Model :

You might want to take particular note of the way we specify the interaction between smoking status and the type of test in Minitab, namely, as Smoker*Test.

We select OK , and the resulting output appears in the Session Window.

Here's what the output looks like with the row pertaining to the interaction term highlighted in yellow:

ANOVA, Time versus Smoker, Test
Factor Type Levels Values
Smoker fixed 3 1, 2, 3
Test fixed 3 1, 2, 3
Analysis of Variance for Time
Source DF  SS  MS  F  P
Smoker 2  84.899  42.449  12.90  0.000
Test 2  298.072  149.036  45.28  0.000
Smoker*Test 4  2.815  0.704  0.21  0.927
Error 18  59.247  3.291     
Total 26  445.032       

S = 1.81424       R-Sq = 86.69%     R-Sq (adj) = 80.77%

As you can see, the P -value, 0.927, is very large. We do not reject the null hypothesis that the interaction terms are all zero. That is, there is insufficient evidence at the 0.05 level to conclude that there is an interaction between smoking history and the type of stress test.
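For those curious, the sums of squares in the table above can be reproduced from the raw oxygen-uptake data with the standard balanced two-factor formulas. The sketch below is plain Python and is not how Minitab computes them internally:

```python
# Balanced two-factor sums of squares from the raw oxygen-uptake data.
data = {  # data[(smoker, test)] = the cell's three replicates
    (1, 1): [12.8, 13.5, 11.2], (1, 2): [16.2, 18.1, 17.8], (1, 3): [22.6, 19.3, 18.9],
    (2, 1): [10.9, 11.1, 9.8],  (2, 2): [15.5, 13.8, 16.2], (2, 3): [20.1, 21.0, 15.9],
    (3, 1): [8.7, 9.2, 7.5],    (3, 2): [14.7, 13.2, 8.1],  (3, 3): [16.2, 16.1, 17.8],
}
a = b = 3          # levels of each factor
r = 3              # replicates per cell
n = a * b * r      # 27 observations

grand = sum(x for cell in data.values() for x in cell) / n
row_mean = {i: sum(x for (i2, j), c in data.items() if i2 == i for x in c) / (b * r)
            for i in range(1, a + 1)}
col_mean = {j: sum(x for (i, j2), c in data.items() if j2 == j for x in c) / (a * r)
            for j in range(1, b + 1)}
cell_mean = {k: sum(c) / r for k, c in data.items()}

ss_smoker = b * r * sum((row_mean[i] - grand) ** 2 for i in row_mean)
ss_test = a * r * sum((col_mean[j] - grand) ** 2 for j in col_mean)
ss_inter = r * sum((cell_mean[(i, j)] - row_mean[i] - col_mean[j] + grand) ** 2
                   for (i, j) in data)
ss_error = sum((x - cell_mean[k]) ** 2 for k, c in data.items() for x in c)
```

The four quantities match the Smoker, Test, Smoker*Test, and Error rows of the Minitab table to rounding.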

Now, testing whether or not smoking history has an effect on the timing of maximum oxygen uptake involves testing the null hypothesis:

\(H_0:\alpha_1=\alpha_2=\alpha_3=0\)

against all of the possible alternatives. Here's what the output looks like with the row pertaining to the smoking history term highlighted in yellow:

As you can see, the P -value is very small (< 0.001). We reject the null hypothesis that the smoking history parameters are all zero. That is, there is sufficient evidence at the 0.05 level to conclude that smoking history has an effect on the timing of maximum oxygen uptake.

Now, testing whether or not the type of stress test has an effect on the timing of maximum oxygen uptake involves testing the null hypothesis:

\(H_0:\beta_1=\beta_2=\beta_3=0\)

against all of the possible alternatives. The relevant row of the analysis of variance output above is the Test row.

As you can see, again, the P -value is very small (< 0.001). We reject the null hypothesis that the stress test parameters are all zero. That is, there is sufficient evidence at the 0.05 level to conclude that the type of stress test has an effect on the timing of maximum oxygen uptake.

In summary, based on these data, the physiologist can conclude that there appears to be an effect due to smoking history and the type of stress test, but that the data do not suggest that the two factors interact in any way.

We were able to include an interaction term in our model in the previous example because we had multiple observations (three, to be exact) falling in each of the cells. If, on the other hand, there is only one observation in each cell, we cannot include an interaction term in our model.

Lesson 15: Tests Concerning Regression and Correlation

In lessons 35 and 36, we learned how to calculate point and interval estimates of the intercept and slope parameters, \(\alpha\) and \(\beta\), of a simple linear regression model:

\(Y_i=\alpha+\beta(x_i-\bar{x})+\epsilon_i\)

with the random errors \(\epsilon_i\) following a normal distribution with mean 0 and variance \(\sigma^2\). In this lesson, we'll learn how to conduct a hypothesis test for testing the null hypothesis that the slope parameter equals some value, \(\beta_0\), say. Specifically, we'll learn how to test the null hypothesis \(H_0:\beta=\beta_0\) using a \(t\)-statistic.

Now, perhaps it is not a point that has been emphasized yet, but if you take a look at the form of the simple linear regression model, you'll notice that the response \(Y\)'s are denoted using a capital letter, while the predictor \(x\)'s are denoted using a lowercase letter. That's because, in the simple linear regression setting, we view the predictors as fixed values, whereas we view the responses as random variables whose possible values depend on the population from which they came. Suppose instead that we had a situation in which we thought of the pair \((X_i, Y_i)\) as being a random sample, \(i=1, 2, \ldots, n\), from a bivariate normal distribution with parameters \(\mu_X\), \(\mu_Y\), \(\sigma^2_X\), \(\sigma^2_Y\) and \(\rho\). Then, we might be interested in testing the null hypothesis \(H_0:\rho=0\), because we know that if the correlation coefficient is 0, then \(X\) and \(Y\) are independent random variables. For this reason, we'll learn, not one, but three (!) possible hypothesis tests for testing the null hypothesis that the correlation coefficient is 0. Then, because we haven't yet derived an interval estimate for the correlation coefficient, we'll also take the time to derive an approximate confidence interval for \(\rho\).

15.1 - A Test for the Slope

Once again we've already done the bulk of the theoretical work in developing a hypothesis test for the slope parameter \(\beta\) of a simple linear regression model when we developed a \((1-\alpha)100\%\) confidence interval for \(\beta\). We had shown then that:

\(T=\dfrac{\hat{\beta}-\beta}{\sqrt{\frac{MSE}{\sum(x_i-\bar{x})^2}}}\)

follows a \(t_{n-2}\) distribution. Therefore, if we're interested in testing the null hypothesis:

\(H_0:\beta=\beta_0\)

against any of the alternative hypotheses \(H_A:\beta \neq \beta_0\), \(H_A:\beta < \beta_0\), or \(H_A:\beta > \beta_0\), we can compare the test statistic:

\(t=\dfrac{\hat{\beta}-\beta_0}{\sqrt{\frac{MSE}{\sum(x_i-\bar{x})^2}}}\)

to a \(t_{n-2}\) distribution.

Example 15-1

In alligators' natural habitat, it is typically easier to observe the length of an alligator than it is the weight. This data set contains the log weight (\(y\)) and log length (\(x\)) for 15 alligators captured in central Florida. A scatter plot of the data suggests that there is a linear relationship between the response \(y\) and the predictor \(x\). Therefore, a wildlife researcher is interested in fitting the linear model:

\(Y_i=\alpha+\beta x_i+\epsilon_i\)

to the data. She is particularly interested in testing whether there is a relationship between the length and weight of alligators. At the \(\alpha=0.05\) level, perform a test of the null hypothesis \(H_0:\beta=0\) against the alternative hypothesis \(H_A:\beta \neq 0\).

The easiest way to perform the hypothesis test is to let Minitab do the work for us! Under the Stat menu, selecting Regression, and then Regression, and specifying the response logW (for log weight) and the predictor logL (for log length), we get:

The regression equation is logW = -8.48 + 3.43 logL

Predictor       Coef  SE Coef       T      P
Constant     -8.4761   0.5007  -16.93  0.000
logL          3.4311   0.1330   25.80  0.000

Analysis of Variance

Source          DF      SS      MS       F      P
Regression       1  10.064  10.064  665.81  0.000
Residual Error  13   0.196   0.015
Total           14  10.260

Easy as pie! Minitab tells us that the test statistic is \(t=25.80\) with a \(p\)-value (0.000) that is less than 0.001. Because the \(p\)-value is less than 0.05, we reject the null hypothesis at the 0.05 level. There is sufficient evidence to conclude that the slope parameter does not equal 0. That is, there is sufficient evidence, at the 0.05 level, to conclude that there is a linear relationship, among the population of alligators, between the log length and log weight.

Of course, since we are learning this material for just the first time, perhaps we could go through the calculation of the test statistic at least once. Letting Minitab do some of the dirtier calculations for us, such as calculating:

\(\sum(x_i-\bar{x})^2=0.8548\)

as well as determining that \(MSE=0.015\) and that the slope estimate = 3.4311, we get:

\(t=\dfrac{\hat{\beta}-\beta_0}{\sqrt{\frac{MSE}{\sum(x_i-\bar{x})^2}}}=\dfrac{3.4311-0}{\sqrt{0.015/0.8548}}=25.9\)

which is the test statistic that Minitab calculated... well, with just a bit of round-off error.
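The hand calculation above is easy to script. Using the values Minitab reported (slope estimate 3.4311, \(MSE = 0.015\), and \(\sum(x_i-\bar{x})^2=0.8548\)), a short sketch:

```python
import math

# Test H0: beta = 0 for the alligator regression, using Minitab's summary values.
beta_hat = 3.4311   # estimated slope
beta_0 = 0.0        # hypothesized slope under H0
mse = 0.015         # mean squared error
sxx = 0.8548        # sum of squared deviations of x about its mean

t = (beta_hat - beta_0) / math.sqrt(mse / sxx)
print(round(t, 1))
```

This reproduces the 25.9 obtained above, again differing from Minitab's 25.80 only by round-off in the reported MSE.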

15.2 - Three Tests for Rho

The hypothesis test for the slope \(\beta\) that we developed on the previous page was developed under the assumption that a response \(Y\) is a linear function of a nonrandom predictor \(x\). This situation occurs when the researcher has complete control of the values of the variable \(x\). For example, a researcher might be interested in modeling the linear relationship between the temperature \(x\) of an oven and the moistness \(y\) of chocolate chip muffins. In this case, the researcher sets the oven temperatures (in degrees Fahrenheit) to 350, 360, 370, and so on, and then observes the values of the random variable \(Y\), that is, the moistness of the baked muffins. In this case, the linear model:

\(Y_i=\alpha+\beta x_i+\epsilon_i\)

implies that the average moistness:

\(E(Y)=\alpha+\beta x\)

is a linear function of the temperature setting.

There are other situations, however, in which the variable \(x\) is not nonrandom (yes, that's a double negative!), but rather an observed value of a random variable \(X\). For example, a fisheries researcher may want to relate the age \(Y\) of a sardine to its length \(X\). If a linear relationship could be established, then in the future fisheries researchers could predict the age of a sardine simply by measuring its length. In this case, the linear model:

\(Y_i=\alpha+\beta x_i+\epsilon_i\)

implies that the average age of a sardine, given its length is \(X=x\):

\(E(Y|X=x)=\alpha+\beta x\)

is a linear function of the length. That is, the conditional mean of \(Y\) given \(X=x\) is a linear function. Now, in this second situation, in which both \(X\) and \(Y\) are deemed random, we typically assume that the pairs \((X_1, Y_1), (X_2, Y_2), \ldots, (X_n, Y_n)\) are a random sample from a bivariate normal distribution with means \(\mu_X\) and \(\mu_Y\), variances \(\sigma^2_X\) and \(\sigma^2_Y\), and correlation coefficient \(\rho\). If that's the case, it can be shown that the conditional mean \(E(Y|X=x)\) must be of the form:

\(E(Y|X=x)=\left(\mu_Y-\rho \dfrac{\sigma_Y}{\sigma_X} \mu_X\right)+\left(\rho \dfrac{\sigma_Y}{\sigma_X}\right)x\)

That is, the slope of the conditional mean is:

\(\beta=\rho \dfrac{\sigma_Y}{\sigma_X}\)

Now, for the case where \((X_i, Y_i)\) has a bivariate distribution, the researcher may not necessarily be interested in estimating the linear function \(E(Y|X=x)\), but rather simply in knowing whether \(X\) and \(Y\) are independent. In STAT 414, we've learned that if \((X_i, Y_i)\) follows a bivariate normal distribution, then testing for the independence of \(X\) and \(Y\) is equivalent to testing whether the correlation coefficient \(\rho\) equals 0. We'll now work on developing three different hypothesis tests for testing \(H_0:\rho=0\) assuming \((X_i, Y_i)\) follows a bivariate normal distribution.

A T-Test for Rho

Given our wordy prelude above, this test may be the simplest of all of the tests to develop. That's because we argued above that if \((X_i, Y_i)\) follows a bivariate normal distribution, then the conditional mean is a linear function of \(x\) whose slope is \(\beta=\rho \dfrac{\sigma_Y}{\sigma_X}\), so that \(\beta=0\) if and only if \(\rho=0\). That suggests, therefore, that testing \(H_0:\rho=0\) against any of the alternative hypotheses \(H_A:\rho\neq 0\), \(H_A:\rho> 0\) and \(H_A:\rho< 0\) is equivalent to testing \(H_0:\beta=0\) against the corresponding alternative hypothesis \(H_A:\beta\neq 0\), \(H_A:\beta>0\) and \(H_A:\beta<0\). That is, we can simply compare the test statistic:

\(t=\dfrac{\hat{\beta}-0}{\sqrt{MSE/\sum(x_i-\bar{x})^2}}\)

to a \(t\) distribution with \(n-2\) degrees of freedom. It should be noted, though, that the test statistic can be instead written as a function of the sample correlation coefficient:

\(R=\dfrac{\dfrac{1}{n-1} \sum\limits_{i=1}^n (X_i-\bar{X}) (Y_i-\bar{Y})}{\sqrt{\dfrac{1}{n-1} \sum\limits_{i=1}^n (X_i-\bar{X})^2} \sqrt{\dfrac{1}{n-1} \sum\limits_{i=1}^n (Y_i-\bar{Y})^2}}=\dfrac{S_{xy}}{S_x S_y}\)

That is, the test statistic can be alternatively written as:

\(t=\dfrac{r\sqrt{n-2}}{\sqrt{1-r^2}}\)

and because of its algebraic equivalence to the first test statistic, it too follows a \(t\) distribution with \(n-2\) degrees of freedom. Huh? How are the two test statistics algebraically equivalent? Well, if the following two statements are true:

\(\hat{\beta}=\dfrac{\dfrac{1}{n-1} \sum\limits_{i=1}^n (X_i-\bar{X}) (Y_i-\bar{Y})}{\dfrac{1}{n-1} \sum\limits_{i=1}^n (X_i-\bar{X})^2}=\dfrac{S_{xy}}{S_x^2}=R\dfrac{S_y}{S_x}\)

\(MSE=\dfrac{\sum\limits_{i=1}^n(Y_i-\hat{Y}_i)^2}{n-2}=\dfrac{\sum\limits_{i=1}^n\left[Y_i-\left(\bar{Y}+\dfrac{S_{xy}}{S_x^2} (X_i-\bar{X})\right) \right]^2}{n-2}=\dfrac{(n-1)S_Y^2 (1-R^2)}{n-2}\)

then simple algebra illustrates that the two test statistics are indeed algebraically equivalent:

\(\displaystyle{t=\frac{\hat{\beta}}{\sqrt{\frac{MSE}{\sum (x_i-\bar{x})^2}}} =\frac{r\left(\frac{S_y}{S_x}\right)}{\sqrt{\frac{(n-1)S^2_y(1-r^2)}{(n-2)(n-1)S^2_x}}}=\frac{r\sqrt{n-2}}{\sqrt{1-r^2}}} \)

Now, for the veracity of those two statements? Well, they are indeed true. The first one requires just some simple algebra. The second one requires a bit of trickier algebra that you'll soon be asked to work through for homework.
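Rather than wading through the algebra, we can at least check the equivalence numerically. The five \((x, y)\) pairs below are made up purely for illustration:

```python
import math
from statistics import mean

# Hypothetical data, just to check that the two forms of the t statistic agree.
x = [1, 2, 3, 4, 5]
y = [2, 3, 5, 4, 6]
n = len(x)
xbar, ybar = mean(x), mean(y)

sxx = sum((xi - xbar) ** 2 for xi in x)
syy = sum((yi - ybar) ** 2 for yi in y)
sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))

beta_hat = sxy / sxx                # least-squares slope
r = sxy / math.sqrt(sxx * syy)      # sample correlation coefficient
sse = sum((yi - (ybar + beta_hat * (xi - xbar))) ** 2 for xi, yi in zip(x, y))
mse = sse / (n - 2)

t_slope = beta_hat / math.sqrt(mse / sxx)               # first form
t_corr = r * math.sqrt(n - 2) / math.sqrt(1 - r ** 2)   # second form

print(round(t_slope, 4), round(t_corr, 4))
```

Both forms of the statistic agree to floating-point precision, as the algebra promises.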

An R-Test for Rho

It would be nice to use the sample correlation coefficient \(R\) as a test statistic to test more general hypotheses about the population correlation coefficient:

\(H_0:\rho=\rho_0\)

but the probability distribution of \(R\) is difficult to obtain. It turns out though that we can derive a hypothesis test using just \(R\) provided that we are interested in testing the more specific null hypothesis that \(X\) and \(Y\) are independent, that is, for testing \(H_0:\rho=0\).

Provided that \(\rho=0\), the probability density function of the sample correlation coefficient \(R\) is:

\(g(r)=\dfrac{\Gamma[(n-1)/2]}{\Gamma(1/2)\Gamma[(n-2)/2]}(1-r^2)^{(n-4)/2}\)

over the support \(-1<r<1\).

We'll use the distribution function technique, in which we first find the cumulative distribution function \(G(r)\), and then differentiate it to get the desired probability density function \(g(r)\). The cumulative distribution function is:

\(G(r)=P(R \leq r)=P \left(\dfrac{R\sqrt{n-2}}{\sqrt{1-R^2}}\leq \dfrac{r\sqrt{n-2}}{\sqrt{1-r^2}}\right)=P\left(T \leq \dfrac{r\sqrt{n-2}}{\sqrt{1-r^2}}\right)\)

The first equality is just the definition of the cumulative distribution function, while the second and third equalities come from the definition of the \(T\) statistic as a function of the sample correlation coefficient \(R\). Now, using what we know of the p.d.f. \(h(t)\) of a \(T\) random variable with \(n-2\) degrees of freedom, we get:

\(G(r)=\int^{\frac{r\sqrt{n-2}}{\sqrt{1-r^2}}}_{-\infty} h(t)dt=\int^{\frac{r\sqrt{n-2}}{\sqrt{1-r^2}}}_{-\infty} \dfrac{\Gamma[(n-1)/2]}{\Gamma(1/2)\Gamma[(n-2)/2]} \dfrac{1}{\sqrt{n-2}}\left(1+\dfrac{t^2}{n-2}\right)^{-\frac{(n-1)}{2}} dt\)

Now, it's just a matter of taking the derivative of the c.d.f. \(G(r)\) to get the p.d.f. \(g(r)\). Using the Fundamental Theorem of Calculus, in conjunction with the chain rule, we get:

\(g(r)=h\left(\dfrac{r\sqrt{n-2}}{\sqrt{1-r^2}}\right) \dfrac{d}{dr}\left(\dfrac{r\sqrt{n-2}}{\sqrt{1-r^2}}\right)\)

Focusing first on the derivative part of that equation, using the quotient rule, we get:

\(\dfrac{d}{dr}\left[\dfrac{r\sqrt{n-2}}{\sqrt{1-r^2}}\right]=\dfrac{(1-r^2)^{1/2} \cdot \sqrt{n-2}-r\sqrt{n-2}\cdot \frac{1}{2}(1-r^2)^{-1/2} \cdot -2r }{(\sqrt{1-r^2})^2}\)

Simplifying, we get:

\(\dfrac{d}{dr}\left[\dfrac{r\sqrt{n-2}}{\sqrt{1-r^2}}\right]=\sqrt{n-2}\left[ \dfrac{(1-r^2)^{1/2}+r^2 (1-r^2)^{-1/2} }{1-r^2} \right]\)

Now, if we multiply by 1 in a special way, that is, this way:

\(\dfrac{d}{dr}\left[\dfrac{r\sqrt{n-2}}{\sqrt{1-r^2}}\right]=\sqrt{n-2}\left[ \dfrac{(1-r^2)^{1/2}+r^2 (1-r^2)^{-1/2} }{1-r^2} \right]\left(\frac{(1-r^2)^{1/2}}{(1-r^2)^{1/2}}\right) \)

and then simplify, we get:

\(\dfrac{d}{dr}\left[\dfrac{r\sqrt{n-2}}{\sqrt{1-r^2}}\right]=\sqrt{n-2}\left[ \dfrac{1-r^2+r^2 }{(1-r^2)^{3/2}} \right]=\sqrt{n-2}(1-r^2)^{-3/2}\)

Now, looking back at \(g(r)\), let's work on the \(h(\cdot)\) part. Replacing the \(t\) in the p.d.f. of a \(T\) random variable with \(n-2\) degrees of freedom by \(\frac{r\sqrt{n-2}}{\sqrt{1-r^2}}\), we get:

\( h\left(\frac{r\sqrt{n-2}}{\sqrt{1-r^2}}\right)= \frac{\Gamma\left(\frac{n-1}{2}\right)}{\Gamma\left(\frac{1}{2}\right)\Gamma\left(\frac{n-2}{2}\right)}\left(\frac{1}{\sqrt{n-2}}\right)\left[1+\frac{\left(\frac{r\sqrt{n-2}}{\sqrt{1-r^2}}\right)^2}{n-2} \right]^{-\frac{n-1}{2}} \)

Canceling a few things out we get:

\(h\left(\dfrac{r\sqrt{n-2}}{\sqrt{1-r^2}}\right)=\dfrac{\Gamma[(n-1)/2]}{\Gamma(1/2)\Gamma[(n-2)/2]}\cdot \dfrac{1}{\sqrt{n-2}}\left(1+\dfrac{r^2}{1-r^2}\right)^{-\frac{(n-1)}{2}}\)

Now, because:

\(\left(1+\dfrac{r^2}{1-r^2}\right)^{-\frac{(n-1)}{2}}=\left(\dfrac{1-r^2+r^2}{1-r^2}\right)^{-\frac{(n-1)}{2}}=\left(\dfrac{1}{1-r^2}\right)^{-\frac{(n-1)}{2}}=(1-r^2)^{\frac{(n-1)}{2}}\)

we finally get:

\(h\left(\dfrac{r\sqrt{n-2}}{\sqrt{1-r^2}}\right)=\dfrac{\Gamma[(n-1)/2]}{\Gamma(1/2)\Gamma[(n-2)/2]}\cdot \dfrac{1}{\sqrt{n-2}}(1-r^2)^{\frac{(n-1)}{2}}\)

We're almost there! We just need to multiply the two parts together. Doing so, we get:

\(g(r)=\left[\frac{\Gamma\left(\frac{n-1}{2}\right)}{\Gamma\left(\frac{1}{2}\right)\Gamma\left(\frac{n-2}{2}\right)}\left(\frac{1}{\sqrt{n-2}}\right)(1-r^2)^{\frac{n-1}{2}}\right]\left[\sqrt{n-2}(1-r^2)^{-3/2}\right]\)

which simplifies to:

\(g(r)=\dfrac{\Gamma[(n-1)/2]}{\Gamma(1/2)\Gamma[(n-2)/2]}(1-r^2)^{\frac{(n-4)}{2}}\)

over the support \(-1<r<1\), as was to be proved.

Now that we know the p.d.f. of \(R\), testing \(H_0:\rho=0\) against any of the possible alternative hypotheses just involves integrating \(g(r)\) to find the critical value(s) to ensure that \(\alpha\), the probability of a Type I error is small. For example, to test \(H_0:\rho=0\) against the alternative \(H_A:\rho>0\), we find the value \(r_\alpha(n-2)\) such that:

\(P(R \geq r_\alpha(n-2))=\int_{r_\alpha(n-2)}^1 \dfrac{\Gamma[(n-1)/2]}{\Gamma(1/2)\Gamma[(n-2)/2]}(1-r^2)^{\frac{(n-4)}{2}}dr=\alpha\)

Yikes! Do you have any interest in integrating that function? Well, me neither! That's why we'll instead use an \(R\) Table, such as the one we have in Table IX at the back of our textbook.

An Approximate Z-Test for Rho

Okay, the derivation for this hypothesis test is going to be MUCH easier than the derivation for that last one. That's because we aren't going to derive it at all! We are going to simply state, without proof, the following theorem.

The statistic:

\(W=\dfrac{1}{2}\ln\dfrac{1+R}{1-R}\)

follows an approximate normal distribution with mean \(E(W)=\dfrac{1}{2}\ln\dfrac{1+\rho}{1-\rho}\) and variance \(Var(W)=\dfrac{1}{n-3}\).

The theorem, therefore, allows us to test the general null hypothesis \(H_0:\rho=\rho_0\) against any of the possible alternative hypotheses, by comparing the test statistic:

\(Z=\dfrac{\dfrac{1}{2}\ln\dfrac{1+R}{1-R}-\dfrac{1}{2}\ln\dfrac{1+\rho_0}{1-\rho_0}}{\sqrt{\dfrac{1}{n-3}}}\)

to a standard normal \(N(0,1)\) distribution.

What? We've looked at no examples yet on this page? Let's take care of that by closing with an example that utilizes each of the three hypothesis tests we derived above.

Example 15-2

An admissions counselor at a large public university was interested in learning whether freshmen calculus grades are independent of high school math achievement test scores. The sample correlation coefficient between the mathematics achievement test scores and calculus grades for a random sample of \(n=10\) college freshmen was deemed to be 0.84.

Does this observed sample correlation coefficient suggest, at the \(\alpha=0.05\) level, that the population of freshmen calculus grades are independent of the population of high school math achievement test scores?

The admissions counselor is interested in testing:

\(H_0:\rho=0\) against \(H_A:\rho \neq 0\)

Using the \(t\)-statistic we derived, we get:

\(t=\dfrac{r\sqrt{n-2}}{\sqrt{1-r^2}}=\dfrac{0.84\sqrt{8}}{\sqrt{1-0.84^2}}=4.38\)

We reject the null hypothesis if the test statistic is greater than \(t_{0.025}(8)=2.306\) or less than \(-2.306\).

Because \(t=4.38>2.306\), we reject the null hypothesis in favor of the alternative hypothesis. There is sufficient evidence at the 0.05 level to conclude that the population of freshmen calculus grades are not independent of the population of high school math achievement test scores.

Using the R -statistic , with 8 degrees of freedom, Table IX in the back of the book tells us to reject the null hypothesis if the absolute value of \(R\) is greater than 0.6319. Because our observed \(r=0.84>0.6319\), we again reject the null hypothesis in favor of the alternative hypothesis. There is sufficient evidence at the 0.05 level to conclude that freshmen calculus grades are not independent of high school math achievement test scores.

Using the approximate Z -statistic , we get:

\(z=\dfrac{\dfrac{1}{2}\ln\left(\dfrac{1+0.84}{1-0.84}\right)-\dfrac{1}{2}\ln\left(\dfrac{1+0}{1-0}\right)}{\sqrt{1/7}}=3.23\)

In this case, we reject the null hypothesis if the absolute value of \(Z\) were greater than 1.96. It clearly is, and so we again reject the null hypothesis in favor of the alternative hypothesis. There is sufficient evidence at the 0.05 level to conclude that freshmen calculus grades are not independent of high school math achievement test scores.
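The \(t\)- and \(z\)-statistics from this example take only a few lines to reproduce (the \(R\)-test needs no computation beyond comparing \(r = 0.84\) to the tabled value 0.6319):

```python
import math

# Example 15-2: n = 10 freshmen, sample correlation r = 0.84, testing H0: rho = 0.
n = 10
r = 0.84
rho_0 = 0.0

# T-test statistic: t = r * sqrt(n - 2) / sqrt(1 - r^2), compared to t(8).
t = r * math.sqrt(n - 2) / math.sqrt(1 - r ** 2)

# Approximate Z-test statistic based on the transformation W of R.
w = 0.5 * math.log((1 + r) / (1 - r))
w0 = 0.5 * math.log((1 + rho_0) / (1 - rho_0))
z = (w - w0) / math.sqrt(1 / (n - 3))

print(round(t, 2), round(z, 2))
```

Both statistics land in their rejection regions (|t| > 2.306 and |z| > 1.96), matching the conclusions above.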

15.3 - An Approximate Confidence Interval for Rho

To develop an approximate \((1-\alpha)100\%\) confidence interval for \(\rho\), we'll use the normal approximation for the statistic \(Z\) that we used on the previous page for testing \(H_0:\rho=\rho_0\).

An approximate \((1-\alpha)100\%\) confidence interval for \(\rho\) is \(L\leq \rho \leq U\) where:

\(L=\dfrac{1+R-(1-R)\text{exp}(2z_{\alpha/2}/\sqrt{n-3})}{1+R+(1-R)\text{exp}(2z_{\alpha/2}/\sqrt{n-3})}\)

\(U=\dfrac{1+R-(1-R)\text{exp}(-2z_{\alpha/2}/\sqrt{n-3})}{1+R+(1-R)\text{exp}(-2z_{\alpha/2}/\sqrt{n-3})}\)

We previously learned that:

\(Z=\dfrac{\dfrac{1}{2}\ln\dfrac{1+R}{1-R}-\dfrac{1}{2}\ln\dfrac{1+\rho}{1-\rho}}{\sqrt{\dfrac{1}{n-3}}}\)

follows at least approximately a standard normal \(N(0,1)\) distribution. So, we can do our usual trick of starting with a probability statement:

\(P\left(-z_{\alpha/2} \leq \dfrac{\dfrac{1}{2}\ln\dfrac{1+R}{1-R}-\dfrac{1}{2}\ln\dfrac{1+\rho}{1-\rho}}{\sqrt{\dfrac{1}{n-3}}} \leq z_{\alpha/2} \right)\approx 1-\alpha\)

and manipulating the quantity inside the parentheses:

\(-z_{\alpha/2} \leq \dfrac{\dfrac{1}{2}\ln\dfrac{1+R}{1-R}-\dfrac{1}{2}\ln\dfrac{1+\rho}{1-\rho}}{\sqrt{\dfrac{1}{n-3}}} \leq z_{\alpha/2}\)

to get ..... can you fill in the details?! ..... the formula for a \((1-\alpha)100\%\) confidence interval for \(\rho\):

\(L\leq \rho \leq U\)

\(L=\dfrac{1+R-(1-R)\text{exp}(2z_{\alpha/2}/\sqrt{n-3})}{1+R+(1-R)\text{exp}(2z_{\alpha/2}/\sqrt{n-3})}\) and \(U=\dfrac{1+R-(1-R)\text{exp}(-2z_{\alpha/2}/\sqrt{n-3})}{1+R+(1-R)\text{exp}(-2z_{\alpha/2}/\sqrt{n-3})}\)

as was to be proved!

Example 15-2 (Continued)

Estimate the population correlation coefficient \(\rho\) with 95% confidence.

Because we are interested in a 95% confidence interval, we use \(z_{0.025}=1.96\). Therefore, the lower limit of an approximate 95% confidence interval for \(\rho\) is:

\(L=\dfrac{1+0.84-(1-0.84)\text{exp}(2(1.96)/\sqrt{10-3})}{1+0.84+(1-0.84)\text{exp}(2(1.96)/\sqrt{10-3})}=0.447\)

and the upper limit of an approximate 95% confidence interval for \(\rho\) is:

\(U=\dfrac{1+0.84-(1-0.84)\text{exp}(-2(1.96)/\sqrt{10-3})}{1+0.84+(1-0.84)\text{exp}(-2(1.96)/\sqrt{10-3})}=0.961\)

We can be (approximately) 95% confident that the correlation between the population of high school mathematics achievement test scores and freshmen calculus grades is between 0.447 and 0.961. (Not a particularly useful interval, I might say! It might behoove the admissions counselor to collect data on a larger sample, so that he or she can obtain a narrower confidence interval.)
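For completeness, here is a sketch of the interval computation, wrapping the formulas for \(L\) and \(U\) in a small helper (note that \(L\) uses \(+z_{\alpha/2}\) while \(U\) uses \(-z_{\alpha/2}\)):

```python
import math

# Approximate 95% confidence interval for rho, with r = 0.84 and n = 10.
r = 0.84
n = 10
z_crit = 1.96  # z_{alpha/2} for a 95% interval

def fisher_limit(r, n, z):
    """One endpoint of the approximate CI: pass z = +z_crit for L, -z_crit for U."""
    e = math.exp(2 * z / math.sqrt(n - 3))
    return (1 + r - (1 - r) * e) / (1 + r + (1 - r) * e)

lower = fisher_limit(r, n, z_crit)
upper = fisher_limit(r, n, -z_crit)
print(round(lower, 3), round(upper, 3))
```

This reproduces the (0.447, 0.961) interval computed above.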


Hypothesis Testing with One Sample

Distribution Needed for Hypothesis Testing

OpenStaxCollege

Earlier in the course, we discussed sampling distributions. Particular distributions are associated with hypothesis testing. Perform tests of a population mean using a normal distribution or a Student’s t -distribution . (Remember, use a Student’s t -distribution when the population standard deviation is unknown and the distribution of the sample mean is approximately normal.) We perform tests of a population proportion using a normal distribution (usually when n is large).

If you are testing a single population mean , the distribution for the test is for means :

\(\overline{X} \sim N\left({\mu }_{X},\frac{{\sigma }_{X}}{\sqrt{n}}\right)\) or \({t}_{df}\)

The population parameter is μ . The estimated value (point estimate) for μ is \(\overline{x}\), the sample mean.

If you are testing a single population proportion , the distribution for the test is for proportions or percentages:

\({P}^{\prime } \sim N\left(p,\sqrt{\frac{p\cdot q}{n}}\right)\)

The population parameter is p . The estimated value (point estimate) for p is p′ , where p′ = \(\frac{x}{n}\), x is the number of successes, and n is the sample size.

Assumptions

When you perform a hypothesis test of a single population mean μ using a Student’s t -distribution (often called a t-test), there are fundamental assumptions that need to be met in order for the test to work properly. Your data should be a simple random sample that comes from a population that is approximately normally distributed . You use the sample standard deviation to approximate the population standard deviation. (Note that if the sample size is sufficiently large, a t-test will work even if the population is not approximately normally distributed).

When you perform a hypothesis test of a single population mean μ using a normal distribution (often called a z -test), you take a simple random sample from the population. The population you are testing is normally distributed or your sample size is sufficiently large. You know the value of the population standard deviation which, in reality, is rarely known.

When you perform a hypothesis test of a single population proportion p , you take a simple random sample from the population. You must meet the conditions for a binomial distribution which are: there are a certain number n of independent trials, the outcomes of any trial are success or failure, and each trial has the same probability of a success p . The shape of the binomial distribution needs to be similar to the shape of the normal distribution. To ensure this, the quantities np and nq must both be greater than five ( np > 5 and nq > 5). Then the binomial distribution of a sample (estimated) proportion can be approximated by the normal distribution with μ = p and \(\sigma =\sqrt{\frac{pq}{n}}\).

Remember that q = 1 – p .
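A tiny helper makes these conditions concrete. The numbers n = 100 and p = 0.42 are borrowed from the taste-test exercise below, purely as an illustration:

```python
import math

def proportion_test_setup(n, p):
    """Check np > 5 and nq > 5, then return the normal approximation's mean and sd."""
    q = 1 - p
    if n * p <= 5 or n * q <= 5:
        raise ValueError("normal approximation to the binomial is not appropriate")
    return p, math.sqrt(p * q / n)

mu, sigma = proportion_test_setup(n=100, p=0.42)
print(mu, round(sigma, 3))
```

Here np = 42 and nq = 58, so the normal approximation with mean 0.42 and standard deviation about 0.049 is appropriate.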

Chapter Review

In order for a hypothesis test’s results to be generalized to a population, certain requirements must be satisfied.

When testing for a single population mean:

  • A Student’s t -test should be used if the data come from a simple random sample, the population is approximately normally distributed or the sample size is large, and the population standard deviation is unknown.
  • The normal test will work if the data come from a simple random sample, the population is approximately normally distributed or the sample size is large, and the population standard deviation is known.

When testing a single population proportion, use a normal test if the data come from a simple, random sample, fill the requirements for a binomial distribution, and the mean number of successes and the mean number of failures satisfy the conditions np > 5 and nq > 5, where n is the sample size, p is the probability of a success, and q is the probability of a failure.

Formula Review

If there is no given preconceived α , then use α = 0.05.

  • Single population mean, known population variance (or standard deviation): Normal test .
  • Single population mean, unknown population variance (or standard deviation): Student’s t -test .
  • Single population proportion: Normal test .
  • For a single population mean , we may use a normal distribution with the following mean and standard deviation. Means: \(\mu ={\mu }_{\overline{x}}\) and \({\sigma }_{\overline{x}}=\frac{{\sigma }_{x}}{\sqrt{n}}\)
  • For a single population proportion , we may use a normal distribution with the following mean and standard deviation. Proportions: µ = p and \(\sigma =\sqrt{\frac{pq}{n}}\).

Which two distributions can you use for hypothesis testing for this chapter?

A normal distribution or a Student’s t -distribution

Which distribution do you use when you are testing a population mean and the standard deviation is known? Assume sample size is large.

Which distribution do you use when the standard deviation is not known and you are testing one population mean? Assume sample size is large.

Use a Student’s t -distribution

A population mean is 13. The sample mean is 12.8, and the sample standard deviation is two. The sample size is 20. What distribution should you use to perform a hypothesis test? Assume the underlying population is normal.

A population has a mean of 25 and a standard deviation of five. The sample mean is 24, and the sample size is 108. What distribution should you use to perform a hypothesis test?

a normal distribution for a single population mean

It is thought that 42% of respondents in a taste test would prefer Brand A . In a particular test of 100 people, 39% preferred Brand A . What distribution should you use to perform a hypothesis test?

You are performing a hypothesis test of a single population mean using a Student’s t -distribution. What must you assume about the distribution of the data?

It must be approximately normally distributed.

You are performing a hypothesis test of a single population mean using a Student’s t -distribution. The data are not from a simple random sample. Can you accurately perform the hypothesis test?

You are performing a hypothesis test of a single population proportion. What must be true about the quantities of np and nq ?

They must both be greater than five.

You are performing a hypothesis test of a single population proportion. You find out that np is less than five. What must you do to be able to perform a valid hypothesis test?

You are performing a hypothesis test of a single population proportion. The data come from which distribution?

binomial distribution

It is believed that Lake Tahoe Community College (LTCC) Intermediate Algebra students get less than seven hours of sleep per night, on average. A survey of 22 LTCC Intermediate Algebra students generated a mean of 7.24 hours with a standard deviation of 1.93 hours. At a level of significance of 5%, do LTCC Intermediate Algebra students get less than seven hours of sleep per night, on average? The distribution to be used for this test is \(\overline{X}\) ~ ________________

  • \(N\left(7.24,\frac{1.93}{\sqrt{22}}\right)\)
  • \(N\left(7.24,1.93\right)\)

Properties of the Student’s t -distribution:

  • It is continuous and assumes any real values.
  • The pdf is symmetrical about its mean of zero. However, it is more spread out and flatter at the apex than the normal distribution.
  • It approaches the standard normal distribution as n gets larger.
  • There is a “family” of t -distributions: every representative of the family is completely defined by the number of degrees of freedom, which is one less than the number of data items.
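Once the distribution for the sleep exercise is identified as a Student's t with 21 degrees of freedom, the test statistic is a one-liner. This is only a sketch of the computation, not the full test:

```python
import math

# LTCC sleep example: H0: mu = 7 vs HA: mu < 7, with n = 22, xbar = 7.24, s = 1.93.
n, xbar, s, mu_0 = 22, 7.24, 1.93, 7.0

t = (xbar - mu_0) / (s / math.sqrt(n))
print(round(t, 2))
```

Notice that the observed mean (7.24 hours) is actually greater than 7, so the statistic is positive and a left-tailed test could not reject the null hypothesis.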

Distribution Needed for Hypothesis Testing Copyright © 2013 by OpenStaxCollege is licensed under a Creative Commons Attribution 4.0 International License , except where otherwise noted.


Hypothesis Testing | A Step-by-Step Guide with Easy Examples

Published on November 8, 2019 by Rebecca Bevans . Revised on June 22, 2023.

Hypothesis testing is a formal procedure for investigating our ideas about the world using statistics . It is most often used by scientists to test specific predictions, called hypotheses, that arise from theories.

There are 5 main steps in hypothesis testing:

  • State your research hypothesis as a null (H o ) and alternate (H a  or H 1 ) hypothesis.
  • Collect data in a way designed to test the hypothesis.
  • Perform an appropriate statistical test .
  • Decide whether to reject or fail to reject your null hypothesis.
  • Present the findings in your results and discussion section.

Though the specific details might vary, the procedure you will use when testing a hypothesis will always follow some version of these steps.

Table of contents

  • Step 1: State your null and alternate hypothesis
  • Step 2: Collect data
  • Step 3: Perform a statistical test
  • Step 4: Decide whether to reject or fail to reject your null hypothesis
  • Step 5: Present your findings
  • Other interesting articles
  • Frequently asked questions about hypothesis testing

After developing your initial research hypothesis (the prediction that you want to investigate), it is important to restate it as a null (H o ) and alternate (H a ) hypothesis so that you can test it mathematically.

The alternate hypothesis is usually your initial hypothesis that predicts a relationship between variables. The null hypothesis is a prediction of no relationship between the variables you are interested in.

  • H 0 : Men are, on average, not taller than women. H a : Men are, on average, taller than women.


For a statistical test to be valid , it is important to perform sampling and collect data in a way that is designed to test your hypothesis. If your data are not representative, then you cannot make statistical inferences about the population you are interested in.

There are a variety of statistical tests available, but they are all based on the comparison of within-group variance (how spread out the data is within a category) versus between-group variance (how different the categories are from one another).

If the between-group variance is large enough that there is little or no overlap between groups, then your statistical test will reflect that by showing a low p-value. This means it is unlikely that the differences between these groups came about by chance.

Alternatively, if there is high within-group variance and low between-group variance, then your statistical test will reflect that with a high p-value. This means it is likely that any difference you measure between groups is due to chance.

Your choice of statistical test will be based on the type of variables and the level of measurement of your collected data. The test will typically give you:

  • an estimate of the difference in average height between the two groups.
  • a p-value showing how likely you are to see this difference if the null hypothesis of no difference is true.

Based on the outcome of your statistical test, you will have to decide whether to reject or fail to reject your null hypothesis.

In most cases you will use the p-value generated by your statistical test to guide your decision. And in most cases, your predetermined level of significance for rejecting the null hypothesis will be 0.05 – that is, when there is a less than 5% chance that you would see these results if the null hypothesis were true.
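This decision rule can be sketched in a couple of lines of Python (the function name and return strings here are illustrative, not part of any library):

```python
def decide(p_value, alpha=0.05):
    """Reject the null hypothesis when the p-value is at or below alpha."""
    return "reject H0" if p_value <= alpha else "fail to reject H0"

decide(0.03)              # "reject H0" at the default 0.05 level
decide(0.03, alpha=0.01)  # "fail to reject H0" at the stricter 0.01 level
```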

In some cases, researchers choose a more conservative level of significance, such as 0.01 (1%). This minimizes the risk of incorrectly rejecting the null hypothesis (Type I error).

The results of hypothesis testing will be presented in the results and discussion sections of your research paper, dissertation or thesis.

In the results section you should give a brief summary of the data and a summary of the results of your statistical test (for example, the estimated difference between group means and associated p-value). In the discussion, you can discuss whether your initial hypothesis was supported by your results or not.

In the formal language of hypothesis testing, we talk about rejecting or failing to reject the null hypothesis. You will probably be asked to do this in your statistics assignments.

However, when presenting research results in academic papers we rarely talk this way. Instead, we go back to our alternate hypothesis (in this case, the hypothesis that men are on average taller than women) and state whether the result of our test did or did not support the alternate hypothesis.

If your null hypothesis was rejected, this result is interpreted as “supported the alternate hypothesis.”

These are superficial differences; you can see that they mean the same thing.

You might notice that we don’t say that we reject or fail to reject the alternate hypothesis. This is because hypothesis testing is not designed to prove or disprove anything. It is only designed to test whether a pattern we measure could have arisen spuriously, or by chance.

If we reject the null hypothesis based on our research (i.e., we find that it is unlikely that the pattern arose by chance), then we can say our test lends support to our hypothesis. But if the pattern does not pass our decision rule, meaning that it could have arisen by chance, then we say the test is inconsistent with our hypothesis.


Hypothesis testing is a formal procedure for investigating our ideas about the world using statistics. It is used by scientists to test specific predictions, called hypotheses , by calculating how likely it is that a pattern or relationship between variables could have arisen by chance.

A hypothesis states your predictions about what your research will find. It is a tentative answer to your research question that has not yet been tested. For some research projects, you might have to write several hypotheses that address different aspects of your research question.

A hypothesis is not just a guess — it should be based on existing theories and knowledge. It also has to be testable, which means you can support or refute it through scientific research methods (such as experiments, observations and statistical analysis of data).

Null and alternative hypotheses are used in statistical hypothesis testing . The null hypothesis of a test always predicts no effect or no relationship between variables, while the alternative hypothesis states your research prediction of an effect or relationship.


Bevans, R. (2023, June 22). Hypothesis Testing | A Step-by-Step Guide with Easy Examples. Scribbr. Retrieved August 21, 2024, from https://www.scribbr.com/statistics/hypothesis-testing/


Hypothesis Testing Calculator


The first step in hypothesis testing is to calculate the test statistic. The formula for the test statistic depends on whether the population standard deviation (σ) is known or unknown. If σ is known, our hypothesis test is known as a z test and we use the z distribution. If σ is unknown, our hypothesis test is known as a t test and we use the t distribution. Use of the t distribution relies on the degrees of freedom, which is equal to the sample size minus one. Furthermore, if the population standard deviation σ is unknown, the sample standard deviation s is used instead. To switch from σ known to σ unknown, click on $\boxed{\sigma}$ and select $\boxed{s}$ in the Hypothesis Testing Calculator.

$\sigma$ Known: $ z = \dfrac{\bar{x}-\mu_0}{\sigma/\sqrt{n}} $
$\sigma$ Unknown: $ t = \dfrac{\bar{x}-\mu_0}{s/\sqrt{n}} $
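The two formulas above translate directly into code. A minimal sketch (the function names and the example numbers are illustrative):

```python
import math

def z_statistic(xbar, mu0, sigma, n):
    """Test statistic when the population standard deviation sigma is known."""
    return (xbar - mu0) / (sigma / math.sqrt(n))

def t_statistic(xbar, mu0, s, n):
    """Test statistic when sigma is unknown; s is the sample standard deviation.
    The associated degrees of freedom are n - 1."""
    return (xbar - mu0) / (s / math.sqrt(n))

# Illustrative numbers: n = 36, sample mean 52, hypothesized mean 50, sigma = 6
z_statistic(52, 50, 6, 36)  # (52 - 50) / (6 / 6) = 2.0
```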

Next, the test statistic is used to conduct the test using either the p-value approach or critical value approach. The particular steps taken in each approach largely depend on the form of the hypothesis test: lower tail, upper tail or two-tailed. The form can easily be identified by looking at the alternative hypothesis (Ha). A less-than sign in the alternative hypothesis indicates a lower tail test, a greater-than sign indicates an upper tail test, and a not-equal sign indicates a two-tailed test. To switch from a lower tail test to an upper tail or two-tailed test, click on $\boxed{\geq}$ and select $\boxed{\leq}$ or $\boxed{=}$, respectively.

Lower Tail Test: $H_0 \colon \mu \geq \mu_0$, $H_a \colon \mu < \mu_0$
Upper Tail Test: $H_0 \colon \mu \leq \mu_0$, $H_a \colon \mu > \mu_0$
Two-Tailed Test: $H_0 \colon \mu = \mu_0$, $H_a \colon \mu \neq \mu_0$

In the p-value approach, the test statistic is used to calculate a p-value. If the test is a lower tail test, the p-value is the probability of getting a value for the test statistic at least as small as the value from the sample. If the test is an upper tail test, the p-value is the probability of getting a value for the test statistic at least as large as the value from the sample. In a two-tailed test, the p-value is the probability of getting a value for the test statistic at least as unlikely as the value from the sample.

To test the hypothesis in the p-value approach, compare the p-value to the level of significance. If the p-value is less than or equal to the level of significance, reject the null hypothesis. If the p-value is greater than the level of significance, do not reject the null hypothesis. This method remains unchanged regardless of whether it's a lower tail, upper tail or two-tailed test. To change the level of significance, click on $\boxed{.05}$. Note that if the test statistic is given, you can calculate the p-value from the test statistic by clicking on the switch symbol twice.
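For the z test, the p-value calculation needs only the standard normal CDF, which the standard library can supply via the error function. A sketch under that assumption (function names are illustrative; a t test would use the t distribution's CDF instead):

```python
import math

def normal_cdf(z):
    # Standard normal CDF from the error function; no external libraries needed.
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def p_value(z, tail):
    """p-value for a z test; tail is 'lower', 'upper', or 'two'."""
    if tail == "lower":
        return normal_cdf(z)
    if tail == "upper":
        return 1.0 - normal_cdf(z)
    return 2.0 * (1.0 - normal_cdf(abs(z)))  # two-tailed

# For z = 2.0 in an upper tail test, p ≈ 0.0228, so H0 is rejected at alpha = 0.05
```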

In the critical value approach, the level of significance ($\alpha$) is used to calculate the critical value. In a lower tail test, the critical value is the value of the test statistic providing an area of $\alpha$ in the lower tail of the sampling distribution of the test statistic. In an upper tail test, the critical value is the value of the test statistic providing an area of $\alpha$ in the upper tail of the sampling distribution of the test statistic. In a two-tailed test, the critical values are the values of the test statistic providing areas of $\alpha / 2$ in the lower and upper tail of the sampling distribution of the test statistic.

To test the hypothesis in the critical value approach, compare the critical value to the test statistic. Unlike the p-value approach, the method we use to decide whether to reject the null hypothesis depends on the form of the hypothesis test. In a lower tail test, if the test statistic is less than or equal to the critical value, reject the null hypothesis. In an upper tail test, if the test statistic is greater than or equal to the critical value, reject the null hypothesis. In a two-tailed test, if the test statistic is less than or equal the lower critical value or greater than or equal to the upper critical value, reject the null hypothesis.

Lower Tail Test: If $z \leq -z_\alpha$ (or $t \leq -t_\alpha$), reject $H_0$.
Upper Tail Test: If $z \geq z_\alpha$ (or $t \geq t_\alpha$), reject $H_0$.
Two-Tailed Test: If $z \leq -z_{\alpha/2}$ or $z \geq z_{\alpha/2}$ (similarly for $t$ with $t_{\alpha/2}$), reject $H_0$.
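These decision rules can be sketched with the standard library's NormalDist for the critical values (the function name is illustrative, and only the z case is shown):

```python
from statistics import NormalDist

def reject_h0_z(z, alpha, tail):
    """Critical value decision rule for a z test; tail is 'lower', 'upper', or 'two'."""
    nd = NormalDist()
    if tail == "lower":
        return z <= nd.inv_cdf(alpha)        # z <= -z_alpha
    if tail == "upper":
        return z >= nd.inv_cdf(1.0 - alpha)  # z >= z_alpha
    return abs(z) >= nd.inv_cdf(1.0 - alpha / 2.0)  # two-tailed: |z| >= z_{alpha/2}

# With alpha = 0.05, z_alpha ≈ 1.645 and z_{alpha/2} ≈ 1.96:
reject_h0_z(2.0, 0.05, "upper")  # True
reject_h0_z(1.8, 0.05, "two")    # False, since 1.8 < 1.96
```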

When conducting a hypothesis test, there is always a chance that you come to the wrong conclusion. There are two types of errors you can make: Type I Error and Type II Error. A Type I Error is committed if you reject the null hypothesis when the null hypothesis is true. Ideally, we'd like to accept the null hypothesis when the null hypothesis is true. A Type II Error is committed if you accept the null hypothesis when the alternative hypothesis is true. Ideally, we'd like to reject the null hypothesis when the alternative hypothesis is true.

Accept $H_0$ when $H_0$ is true: Correct. Accept $H_0$ when $H_a$ is true: Type II Error.
Reject $H_0$ when $H_0$ is true: Type I Error. Reject $H_0$ when $H_a$ is true: Correct.

Hypothesis testing is closely related to the statistical area of confidence intervals. If the hypothesized value of the population mean is outside of the confidence interval, we can reject the null hypothesis. Confidence intervals can be found using the Confidence Interval Calculator. The calculator on this page does hypothesis tests for one population mean. Sometimes we're interested in hypothesis tests about two population means. These can be solved using the Two Population Calculator. The probability of a Type II Error can be calculated by clicking on the link at the bottom of the page.
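For the upper tail z test, the Type II error probability can also be computed directly: you fail to reject when the sample mean falls below the critical value, so β is a normal CDF evaluated at an assumed true mean. A sketch (the function name and example numbers are illustrative):

```python
from statistics import NormalDist

def beta_upper_tail(mu0, mu_true, sigma, n, alpha=0.05):
    """P(Type II error) for the upper tail z test of H0: mu <= mu0,
    evaluated at an assumed true mean mu_true > mu0."""
    nd = NormalDist()
    se = sigma / n ** 0.5
    x_crit = mu0 + nd.inv_cdf(1.0 - alpha) * se  # fail to reject when x-bar < x_crit
    return nd.cdf((x_crit - mu_true) / se)

# Illustrative numbers: H0: mu <= 50, true mean 52, sigma = 6, n = 36
beta_upper_tail(50, 52, 6, 36)  # ≈ 0.36, i.e. power ≈ 0.64
```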


Statistics By Jim

Making statistics intuitive

How to Identify the Distribution of Your Data

By Jim Frost

You’re probably familiar with data that follow the normal distribution. The normal distribution is that nice, familiar bell-shaped curve. Unfortunately, not all data are normally distributed or as intuitive to understand. You can picture the symmetric normal distribution, but what about the Weibull or Gamma distributions? This uncertainty might leave you feeling unsettled. In this post, I show you how to identify the probability distribution of your data.

You might think of nonnormal data as abnormal. However, in some areas, you should actually expect nonnormal distributions. For instance, income data are typically right skewed. If a process has a natural limit, data tend to skew away from the limit. For example, purity can’t be greater than 100%, which might cause the data to cluster near the upper limit and skew left towards lower values. On the other hand, drill holes can’t be smaller than the drill bit. The sizes of the drill holes might be right-skewed away from the minimum possible size.

Data that follow any probability distribution can be valuable. However, many people don’t feel as comfortable with nonnormal data. Let’s shed light on how to identify the distribution of your data!

We’ll learn how to identify the probability distribution using body fat percentage data from middle school girls that I collected during an experiment. You can download the CSV data file: body_fat .

Related posts : Understanding Probability Distributions  and The Normal Distribution

Graph the Raw Data

Let’s plot the raw data to see what it looks like.

Histogram displays a right skewed distribution for the body fat data. We want to identify the distribution of these data.

The histogram gives us a good overview of the data. At a glance, we can see that these data clearly are not normally distributed. They are right skewed. The peak is around 27%, and the distribution extends further into the higher values than to the lower values. Learn more about skewed distributions . Histograms can also identify bimodal distributions .

These data are not normal, but which probability distribution do they follow? Fortunately, statistical software can help us!

Related posts : Using Histograms to Understand Your Data , Dot Plots: Using, Examples, and Interpreting , and Assessing Normality: Histograms vs. Normal Probability Plots

Using Distribution Tests to Identify the Probability Distribution that Your Data Follow

Distribution goodness-of-fit tests are hypothesis tests that determine whether your sample data were drawn from a population that follows a hypothesized probability distribution. Like any statistical hypothesis test , distribution tests have a null hypothesis and an alternative hypothesis.

  • H 0 : The sample data follow the hypothesized distribution.
  • H 1 : The sample data do not follow the hypothesized distribution.

For distribution goodness-of-fit tests, small p-values indicate that you can reject the null hypothesis and conclude that your data were not drawn from a population with the specified distribution. However, we want to identify the probability distribution that our data follow rather than the distributions they don’t follow! Consequently, distribution tests are a rare case where you look for high p-values to identify candidate distributions. Learn more about Goodness of Fit: Definition & Tests .

Before we test our data to identify the distribution, here are some measures you need to know:

Anderson-Darling statistic (AD): There are different distribution tests. The test I’ll use for our data is the Anderson-Darling test. The Anderson-Darling statistic is the test statistic. It’s like the t-value for t-tests or the F-value for F-tests . Typically, you don’t interpret this statistic directly, but the software uses it to calculate the p-value for the test.

P-value: Distribution tests that have high p-values are suitable candidates for your data’s distribution. Unfortunately, it is not possible to calculate p-values for some distributions with three parameters.

LRT P: If you are considering a three-parameter distribution, assess the LRT P to determine whether the third parameter significantly improves the fit compared to the associated two-parameter distribution. An LRT P value that is less than your significance level indicates a significant improvement over the two-parameter distribution. If you see a higher value, consider staying with the two-parameter distribution.

Note that this example covers continuous data. For categorical and discrete variables, you should use the chi-square goodness of fit test .
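Minitab computes the Anderson-Darling statistic for you, but a rough stdlib-only sketch shows what the statistic measures. Everything here is an assumption for illustration: the function is a textbook implementation (with parameters estimated from the sample), and the simulated samples stand in for real data:

```python
import math
import random
import statistics
from statistics import NormalDist

def anderson_darling_normal(data):
    """Anderson-Darling statistic for normality, with mean and SD estimated
    from the sample. Larger values indicate a worse fit to a normal distribution."""
    x = sorted(data)
    n = len(x)
    nd = NormalDist(statistics.fmean(x), statistics.stdev(x))
    eps = 1e-12  # clamp the CDF away from 0 and 1 so the logs stay finite
    F = lambda v: min(max(nd.cdf(v), eps), 1.0 - eps)
    s = sum((2 * i + 1) * (math.log(F(x[i])) + math.log(1.0 - F(x[n - 1 - i])))
            for i in range(n))
    return -n - s / n

random.seed(7)
skewed = [random.lognormvariate(0.0, 0.8) for _ in range(400)]
gaussian = [random.gauss(0.0, 1.0) for _ in range(400)]
# The right-skewed sample yields a much larger AD statistic than the normal one
```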

Goodness of Fit Test Results for the Distribution Tests

I’m using Minitab, which can test 14 probability distributions and two transformations all at once. Let’s take a look at the output below. We’re looking for higher p-values in the Goodness-of-Fit Test table below.

Table of goodness-of-fit results for the distribution tests. The top candidates are highlighted.

As we expected, the Normal distribution does not fit the data. The p-value is less than 0.005, which indicates that we can reject the null hypothesis that these data follow the normal distribution.

The Box-Cox transformation and the Johnson transformation both have high p-values. If we need to transform our data to follow the normal distribution, the high p-values indicate that we can use these transformations successfully. However, we’ll disregard the transformations because we want to identify our probability distribution rather than transform it.

The highest p-value is for the three-parameter Weibull distribution (>0.500). For the three-parameter Weibull, the LRT P is significant (0.000), which means that the third parameter significantly improves the fit.

The lognormal distribution has the next highest p-value of 0.345.

Let’s consider the three-parameter Weibull distribution and lognormal distribution to be our top two candidates.

Related post : Understanding the Weibull Distribution

Using Probability Plots to Identify the Distribution of Your Data

Probability plots might be the best way to determine whether your data follow a particular distribution. If your data follow the straight line on the graph, the distribution fits your data. This process is simple to do visually. Informally, this process is called the “fat pencil” test. If all the data points line up within the area of a fat pencil laid over the center straight line, you can conclude that your data follow the distribution.

Probability plots are also known as quantile-quantile plots, or Q-Q plots. These plots are similar to Empirical CDF plots except that they transform the axes so the fitted distribution follows a straight line.

Q-Q plots are especially useful in cases where the distribution tests are too powerful. Distribution tests are like other hypothesis tests. As the sample size increases, the statistical power of the test also increases. With very large sample sizes, the test can have so much power that trivial departures from the distribution produce statistically significant results. In these cases, your p-value will be less than the significance level even when your data follow the distribution.

The solution is to assess Q-Q plots to identify the distribution of your data. If the data points fall along the straight line, you can conclude the data follow that distribution even if the p-value is statistically significant. Learn more about QQ Plots: Uses, Benefits & Interpreting .

The probability plots below include the normal distribution, our top two candidates, and the gamma distribution.

Probability plot that compares the fit of distributions to help us identify the distribution of our data.

The data points for the normal distribution don’t follow the center line. However, the data points do follow the line very closely for both the lognormal and the three-parameter Weibull distributions. The gamma distribution doesn’t follow the center line quite as well as the other two, and its p-value is lower. Again, it appears like the choice comes down to our top two candidates from before. How do we choose?

An Additional Consideration for Three-Parameter Distributions

Three-parameter distributions have a threshold parameter. The threshold parameter is also known as the location parameter. This parameter shifts the entire distribution left and right along the x-axis. The threshold/location parameter defines the smallest possible value in the distribution. You should use a three-parameter distribution only if the location truly is the lowest possible value. In other words, use subject-area knowledge to help you choose.

The threshold parameter for our data is 16.06038 (shown in the table below). This cutoff point defines the smallest value in the Weibull distribution. However, in the full population of middle school girls, it is unlikely that there is a strict cutoff at this value. Instead, lower values are possible even though they are less likely. Consequently, I’ll pick the lognormal distribution.

Related post : Understanding the Lognormal Distribution

Parameter Values for Our Distribution

We’ve identified our distribution as the lognormal distribution. Now, we need to find the parameter values for it. Population parameters are the values that define the shape and location of the distribution. We just need to look at the distribution parameters table below!

Table of estimated distribution parameters for a variety of distributions.

Our body fat percentage data for middle school girls follow a lognormal distribution with a location of 3.32317 and a scale of 0.24188.
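As a sketch of that estimation step: for a two-parameter lognormal, the maximum likelihood estimates are simply the mean and standard deviation of the logged data. The simulated sample below is an assumption standing in for the real body fat data, generated from the article's reported parameters:

```python
import math
import random
import statistics

random.seed(1)
# Simulated stand-in for the body fat data, using the reported parameters:
# location (mu) 3.32317 and scale (sigma) 0.24188
data = [random.lognormvariate(3.32317, 0.24188) for _ in range(1000)]

# MLE for a two-parameter lognormal: mean and SD of the logged data
logs = [math.log(v) for v in data]
mu_hat = statistics.fmean(logs)
sigma_hat = statistics.pstdev(logs)  # MLE uses the n (not n - 1) denominator
# mu_hat ≈ 3.32, sigma_hat ≈ 0.24
```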

Below, I created a probability distribution plot of our two top candidates using the parameter estimates. It displays the probability density functions for these distributions. You can see how the three-parameter Weibull distribution stops abruptly at the threshold/location value. However, the lognormal distribution continues to lower values.

Probability distribution plot that compares the three-parameter Weibull to the lognormal distribution to help us identify the distribution of our data.

Identifying the probability distribution that your data follow can be critical for analyses that are very sensitive to the distribution, such as capability analysis. In a future blog post, I’ll show you what else you can do by simply knowing the distribution of your data. This post is all continuous data and continuous probability distributions. If you have discrete data, read my post about Goodness-of-Fit Tests for Discrete Distributions .

Finally, I’ll close this post with a graph that compares the raw data to the fitted distribution that we identified.

Histogram that compares the raw data to the lognormal distribution that we identified.

Note: I wrote a different version of this post that appeared elsewhere. I’ve completely rewritten and updated it for my blog site.


Reader Interactions


April 20, 2024 at 9:02 pm

If my data is very big, how can I identify its distribution, and what are the techniques? Please describe briefly.


April 20, 2024 at 9:44 pm

Hi Kalywan,

It’s not entirely clear what you’re asking exactly. I think you’re asking about how to identify the distribution of your data when you have a very large dataset.

So, a refresher, the problem with using distribution tests to identify the distribution of a very large dataset is that they’ll have very high statistical power and they’ll find trivial departures from a distribution to be statistically significant. That makes it hard to identify it!

In those cases, I recommend using QQ Plots . Click the link to learn more.


December 14, 2023 at 5:24 am

Hello, how do we identify the distribution when we are dealing with censored data? Please let me know. Thank you.

December 14, 2023 at 5:23 am

Hello, my data is [6, 10, 3, 2, 16, 1, 17, 11, 4, 5] and I want to identify the distribution and estimate its parameters. As these points are completely random and don’t seem to follow any particular distribution, I tried a non-parametric method (KDE) and obtained the KDE plot, but I am not able to understand how to interpret that plot or how to proceed with estimating the parameters from here. Please help me proceed further, or let me know if there is any other way or method to deal with this problem. Thank you.


April 19, 2023 at 4:44 am

Hello Jim, how can I test for different distributions in SAS, like you are doing in Minitab? Thank you very much.


September 29, 2022 at 7:39 pm

Hi Jim, thanks for your answer!

Yes, my data is purity data and the UCL is 98.5 because this is the acceptance limit. Values lower than 98.5 are not expected and must have a justification of why that happened. In the data set I have 60 values that are lower than the UCL; most of them (around 40 samples) are higher than 97, and only 9 are lower than 95, with 84 being the lowest value.

If I remove these justified data points from my analysis, would it bias the analysis?

About the p-value you are right, it was a typo. I am looking for a p-value greater than 0.05.

I checked the probability plots and none of them were anywhere close to following a straight line. I guess this is happening because of the high skewness.

If you could help with this problem, I would appreciate it so much.

September 29, 2022 at 8:22 pm

Hi Gustavo,

Ah, so that’s NOT an Upper Control Limit (UCL) then. It’s actually the LOWER control limit or LCL. That’s what confused me.

Assuming that the data below the LCL is correct—that is, they are out of spec, but the measurement is valid—then you should leave them in your dataset. However, if you have reason to believe that the measurements themselves are not valid due to some error, then you can take them out. But if they’re valid, they represent part of the distribution of outcomes and you should leave them in.

The skewness by itself isn’t the problem because some probability distributions can fit skewed data. It’s probably the specific shape of the skewness that is causing problems. The probability plots transform the axes so even skewed data can follow the straight line on the graph. They don’t have to follow the line perfectly. Do the “fat pencil test” that I describe here (I’m talking about normal distributions there, but it also applies to probability (Q-Q) plots for other distributions).

I’m assuming that you also checked to see if any transformations can make your data normal? If not, look for that. But I’m guessing you did because it’s right there in the output with the other distributions.

Also, did you check to be sure that your data are in statistical control using a control chart? Out of control data can cause problems like this.

If all the above checks out, then it gets tougher.

I’d do some research and see what others in a similar subject area have done. Someone else might have figured it out!

If that doesn’t work, you might need to look into other methods. These methods I’ve heard of but I’m not overly familiar with. These would be things like nonparametric or bootstrapped capability analysis. Those methods should be able to handle data that don’t have an identifiable distribution. Unfortunately, Minitab can’t do those types.

Unfortunately, that’s all I’ve got! Hopefully, one of those suggestions work.

September 28, 2022 at 7:33 pm

Hi, I read your post and it was very helpful, but I am still having some trouble analyzing my data. My data set is of process yield in %, and the closer to 100% the better. The data set has around 1100 samples and only 60 of them are smaller than 98.5, which is my UCL, so my data is highly skewed to the left (skewness = -8). I would like to run a capability test, but as I cannot find a suitable distribution for my data set, I think the capability test may give some inconsistent results.

When I run a probability distribution test in Minitab, no distribution gives me a p-value greater than 0.005. So what should I do?

When I have a distribution that has a natural limit, as 100% is the max value I can get in a probability test, which approach should I use to treat or analyse the data?

September 29, 2022 at 6:45 pm

If close to 100% is better, why is the UCL at only 98.5%? Are these purity measurements by any chance? Those tend to be left-skewed when 100% is desired.

Also, just to be clear, you state you’re looking for a p-value greater than 0.005, but that should be 0.05.

Here’s one possibility to consider. You have a very large sample size with n=1100. That gives these distribution tests very high statistical power. Consequently, they can detect trivial departures from any given probability distribution. Check the probability plots and see if they tend to follow the straight line. That’s the approach I recommend, particularly for very large samples like yours. I talk about that in this post in the section titled, “Using Probability Plots . . .”. If the dots follow the line fairly well, go with that distribution despite the p-value.

If you still can’t identify a distribution, let me know and I’ll think about other possibilities.

For capability analysis, choosing the correct distribution matters. Using the wrong one will definitely mess up the results! Capability analysis is sensitive to that.


August 29, 2022 at 3:04 pm

Hi Jim, It is interesting to see that Minitab tests 14 possible distributions! But as I understand it, there’s more than one “version” of any given distribution—for example a “normal” distribution is a bell-shaped curve, but may be slightly taller, or slightly wider and fatter than some other normal distribution curve, and still be “normal” within limits at least. My question is, in your body fat data for example, if you sampled body fat at a different school and you still got a lognormal curve but one that was wider and not quite as tall, would the probability of having a value between 20 and 24% (for example) still be the same? Or would it vary based on how squeezed or stretched your lognormal curve is (while still being lognormal)? Does the software compute this based on some ideal lognormal curve or use the actual data?

August 29, 2022 at 3:39 pm

That’s a great question.

The first thing that I’d point out is that there is not one Normal distribution or any other distribution. There are an infinite number of normal distributions. They share some characteristics, such as being symmetrical, having a single peak in the center, and tapers off equally both directions from the mean. However, they can be taller and narrower or shorter and wider. And have the majority of their values fall in entirely different places than other normal distributions. The same concept applies to lognormal and other distributions.

For this reason, I try to say things like, the data follow A normal distribution. Or A lognormal distribution. Rather than saying the data follow the normal or lognormal distribution because there isn’t one of each. Instead, the body fat percentage data follow a lognormal distribution with specific parameters.

To answer your question, yes, if I had taken a sample from another school, I would’ve likely gotten a slightly different distribution. It could’ve been squeezed or stretched a bit as you describe. That other lognormal distribution would’ve produced a somewhat different probability for values falling within that interval. Like many things we do in statistics, we’re using samples to estimate populations. A key notion in inferential statistics is that those estimates will vary from sample to sample. The quality of our estimates depends on how good our sample is. How large is it? Does it represent the population? Etc.

The software estimates these parameters using maximum likelihood estimation (MLE). Likelihood estimation is a process that calculates how likely a population with particular parameters is to produce a random sample with your sample’s properties. Maximizing that likelihood function simply means choosing the population parameters that are MOST likely to have produced your sample’s characteristics. It performs that process for all distributions. Then you need to determine which distribution best fits your data.

So, with this example, we end up with a lognormal distribution providing the best fit with specific parameters that were most likely to produce our sample.


August 22, 2022 at 7:03 am

I’ve read through your forum and I purchased two of your books – and everything is fantastic. Thank you for all the great information!

However, I do have a question (which popped up during my scientific research) that I couldn’t seem to find an answer to: how do I interpret the results of OLS when the dependent variable has been transformed using a Box-Cox transformation? (It was necessary, since the residuals were extremely non-normal, but this fixed the issue)

More specifically, I’m looking to answer the following questions: 1) Do my independent variables have a significant effect on the dependent variable?

2) What’s the direction of the effect of my significant independent variables (positive/negative)?

3) What’s the order of my independent variables by strength of effect? (e.g.: Which independent variable has the strongest effect, and which one has the weakest effect?)

Please note that I’m not trying to build a predictive model – I just want to know what the important independent variables are in my model, their direction of effect, and their ordinal strength (strongest – 2nd strongest – … – 2nd weakest – weakest). Also, when looking at their “ordinal strength” (my own words haha), am I correct in assuming that I should be looking at their standardized coefficients? Or, for this purpose, does the normality of my residuals matter at all? The significant independent variables do change after the Box-Cox transformation of the dependent variable; I just don’t know which model (transformed or untransformed DV) answers my research questions better… Sorry for the long post, keep up the good work! Thanks!

August 22, 2022 at 5:12 pm

Thanks for writing and thanks so much for buying two of my books. If you happen to have bought my regression book, go to Chapter 9 and look for a section titled, “Using Data Transformations to Fix Problems.” There’s a sub-section in it about “How to Interpret the Results for Transformed Data.” I think the entire section will be helpful for you but particularly the interpretation one.

In the transformations section, I note that transformations are my solution of last resort. They can fix problems but, as you’re finding, they complicate interpretation. So, you have non-normal residuals. Hopefully you’ve tried other solutions to fix that, such as specifying a better model – for example, one that properly models curvature. However, sometimes that’s just not possible. In that case, a transformation might be the best choice available. Another option would be trying a generalized linear model, which doesn’t necessarily require your residuals to follow a normal distribution but allows them to follow other distributions.

But back to transformations. If you’re stuck using a transformation, the results apply to the transformed data, and you need to describe them that way. For example, you might say there is a significant, positive relationship between the predictor and the Box-Cox transformed response variable. And in that case, the coefficients explain changes in the transformed response. It’s just less intuitive to understand what the results mean. Some software can automatically back transform the numbers to help you understand some of it, but you’re not really seeing the true relationships.

If you were developing a predictive model, many of these concerns would be lessened because you wouldn’t need to understand the explanatory roles of each variable. However, you would still need to back transform the predicted values. The predicted values you get “out of the box” will be in transformed units. Additionally, the margin of error (prediction intervals) might be drastically different depending on your predictor values. The transformation will make the transformed prediction intervals nice and constant, but that won’t necessarily be true for the back transformed PIs. So, you’ll need to back transform those too. Again, some software does that automatically. I know Minitab statistical software does that.
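To make the back-transformation step concrete, here’s a minimal sketch assuming SciPy; the response data and the “prediction” are made up, not the commenter’s model.

```python
# Hypothetical sketch: Box-Cox transform the response, predict in
# transformed units, then back-transform the prediction to the original scale.
import numpy as np
from scipy import stats
from scipy.special import inv_boxcox

rng = np.random.default_rng(0)
y = rng.lognormal(mean=2.0, sigma=0.5, size=200)  # made-up skewed response

y_trans, lam = stats.boxcox(y)  # lambda is chosen by maximum likelihood
# ...fit the regression on y_trans; suppose the model predicts this value:
pred_trans = float(np.median(y_trans))
pred_orig = inv_boxcox(pred_trans, lam)  # prediction back in original units
```

Prediction interval endpoints are back-transformed the same way, which is why they generally no longer have a constant width on the original scale.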

Understanding the predictive power of each predictor is complicated by the transformation because the standardized coefficients apply to the transformed data. In non-transformed models, you’re correct that standardized coefficients are a good measure to consider. Another good measure is the change in R-squared when the variable is added to the model last. However, the R-squared for your model applies to the transformed DV.

I guess for your overall question about how essential the transformation is, like many things in statistics, it depends on all the details. If your residuals are severely non-normal, then it’s important. However, if they’re only mildly non-normal, not so much. What I’d do is graph your residuals using a normal probability plot (aka a Q-Q plot) and use the “fat pencil test” I describe in the linked post. BTW, that post is about Q-Q plots and data distributions, but it applies to your residuals as well.

I hope that helps clarify some of the issues! Transformations can help matters but they do cause complications for interpretation.


July 25, 2022 at 1:09 pm

6) As a proxy for exposure to benzene (a known human carcinogen), you collect 30 samples (one sample from each of 30 individuals who work at an oil refinery) looking for phenol in the urine. The measure is usually reported as mg/g of creatinine. The mean concentration of all the samples was 252.5 mg/g of creatinine. This is worrying to you because you know that values above 250 indicate an overexposure to benzene. You look at the descriptive statistics and find that the standard deviation in the sample is 75, the range is 500 (2-502), and the interquartile range is 50 (57-107).
a. Looking at the standard deviation, range, and IQR, what do you suspect about the distribution of the data?
b. What is the standard error of the mean for this sample?
c. What is the 95% confidence interval of the mean?
d. In your own words, what can you say about the sample you have collected with respect to the mean you have calculated, the 95% CI, and the levels at which we become concerned about overexposure (250 mg/g creatinine)?

July 25, 2022 at 7:10 pm

I’m not going to answer your homework question for you, but I’ll provide some suggestions and posts I’ve written that will help you answer them yourself. That’s the path to true learning!

One key thing you need to do is determine the general shape of your distribution. At the most basic level, that means determining whether it is symmetrical (e.g., normally distributed) or skewed. That’s easy if you have the data and can graph it. However, if you just have the summary statistics, you can still draw some conclusions. For tips on determining the shape of the distribution, read my post about Skewed Distributions . To help answer that, you’ll need to know what the median is and compare it to the mean. If the median is not provided, you know it falls somewhere within the IQR. I’ll give you the hint that you have reason to believe it is skewed and not normal. Or the dataset might contain one or more extreme outliers.

Read Standard error of the mean to see how to calculate and interpret it.

Learn how to use the SEM to calculate and interpret the 95% Confidence Interval .

By understanding the IQR and quartiles , you can determine what percentage of the sample is below 107 (the upper IQR value).
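For orientation only, here is how an SEM and a t-based 95% CI are computed in general. The numbers below are invented for the illustration, not the homework values.

```python
# Generic illustration of the standard error of the mean and a 95% CI
# computed from summary statistics alone. All numbers are made up.
import math
from scipy import stats

n, mean, sd = 25, 100.0, 15.0          # hypothetical sample summary
sem = sd / math.sqrt(n)                # standard error of the mean
t_crit = stats.t.ppf(0.975, df=n - 1)  # two-sided 95% critical value
ci = (mean - t_crit * sem, mean + t_crit * sem)
```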

I hope that helps!


September 18, 2021 at 3:12 am

All 30 data points are between 6.5 and 6.6. The p-value is <0.005. Individual distribution identification shows p-values of <0.005 and <0.010. How do I choose a (non-normal) distribution for calculating process capability? Below are the values for reference.

Goodness of Fit Test

Distribution              AD       P        LRT P
Normal                    12.101   <0.005
Box-Cox Transformation    12.101   <0.005
Lognormal                 12.101   <0.005
Exponential               22.715   <0.003
2-Parameter Exponential   14.362   <0.010   0.000
Weibull                   14.524   <0.010
Smallest Extreme Value    14.524   <0.010
Largest Extreme Value     11.028   <0.010
Gamma                     12.246   <0.005
Logistic                  11.973   <0.005
Loglogistic               11.973   <0.005

ML Estimates of Distribution Parameters

Distribution              Location      Shape         Scale       Threshold
Normal*                   6.57600                     0.04314
Box-Cox Transformation*   12302.42739                 397.08665
Lognormal*                1.88341                     0.00659
Exponential                                           6.57600
2-Parameter Exponential                               0.07755     6.49845
Weibull                                 278.12723     6.59360
Smallest Extreme Value    6.59364                     0.02355
Largest Extreme Value     6.55247                     0.04785
Gamma                                   23582.81096   0.00028
Logistic                  6.58577                     0.02289
Loglogistic               1.88490                     0.00350

* Scale: Adjusted ML estimate

Your response is much appreciated.

September 19, 2021 at 12:55 am

Hi Nishanth,

That’s a tough dataset you have! The p-values are all significant, which indicates that none of the distributions fit. However, I notice you don’t include some of the distributions with more parameters (e.g., the three-parameter Weibull and three-parameter lognormal). You should check those. Also, the Johnson transformation is not included.

If you can’t find any distribution that the data fit, or get a successful transformation, you might need a nonparametric approach. Or a bootstrapping approach. Unfortunately, your data just don’t follow any of the listed distributions!


August 25, 2021 at 2:50 pm

There is a mention that “The p-value is less than 0.005, which indicates that we can reject the null hypothesis that these data follow the normal distribution.”

Can the above be rephrased to say that if the p-value is greater than 0.005, we can be sure that the actual data follow the null hypothesis distribution?

I would like to know the difference between the statements “we can accept the null hypothesis” and “we failed to reject the null hypothesis”.

August 26, 2021 at 2:51 am

Thanks for writing with your great question!

First, I should clarify that the correct cutoff value is 0.05. When the p-value is less than or equal to 0.05 for a normality test, we can reject the null hypothesis and conclude that the data do not follow a normal distribution.

Distribution tests are unusual among hypothesis tests. For almost all other tests, we want p-values to be low and significant, and we draw conclusions when they are. However, for distribution tests, it’s a good sign when p-values are high and we fail to reject the null.

However, we never say that we accept the null. Why not? Well, it has to do with being unable to prove a negative. All we can say is that we have not seen evidence that the data do not follow the normal distribution. However, we can never prove that negative. Perhaps our sample size is too small to detect the difference or the data are too noisy? I write a post about this very issue that you should read: Failing to Reject the Null Hypothesis . That should help you understand why that is the correct wording!
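A small sketch of that wording in practice, using SciPy’s Shapiro–Wilk test (one of several normality tests) on simulated data:

```python
# Hypothetical sketch: run a normality test and phrase the conclusion
# carefully -- we never "accept" the null hypothesis.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
sample = rng.normal(loc=10, scale=2, size=50)  # simulated data

stat, p = stats.shapiro(sample)
if p <= 0.05:
    conclusion = "reject the null: evidence the data are not normal"
else:
    conclusion = "fail to reject the null: no evidence of non-normality"
```

Note that the second branch deliberately stops short of claiming the data are normal; it only says we found no evidence against normality.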


August 11, 2021 at 11:16 am

How do we identify whether the data follow a binomial, Poisson, or other distribution, rather than just normal versus not normal?

August 11, 2021 at 5:47 pm

Hi Gemechu,

That’s a great question. I’ve written a post that covers exactly that and I discuss both the binomial and Poisson distributions, along with others. Please read my post, Goodness-of-Fit Tests for Discrete Distributions .


May 29, 2021 at 8:31 am

Hi Jim, if a dataset has a skewness of -0.3, can we still consider it to be approximately normally distributed? Is the Jarque–Bera test a good way to verify whether the distribution of a dataset is ‘normal’? Thank you.


May 8, 2021 at 2:33 am

If my data are not following any distribution, can I say they approximately follow a Weibull distribution using the probability plot? If yes, can you share any reference document?

May 9, 2021 at 9:15 pm

Hi Mounika,

If your data are not following any distribution, I’m not sure why you’d be able to say they’re following a Weibull distribution. Are you saying that the p-value is significant but the dots on the probability plot follow the straight line? It’s hard to tell from what you wrote. If that’s the case, you can conclude that the data follow the distribution. That usually happens when you have a large dataset.


April 16, 2021 at 9:10 pm

If the continuous data fit a distribution type other than the normal distribution, say a Weibull, making it a parametric test, can we do ANOVA similarly to the normal distribution?

April 16, 2021 at 11:47 pm

Hi Shamshul,

Generally speaking, when we are talking about parametric tests, they assume that the data follow the normal distribution specifically. There are exceptions, but ANOVA does assume normality. However, when your data exceed a certain sample size, these analyses are valid with nonnormal data. For more information about this and a table with the sample sizes, please see my post about parametric vs. nonparametric analyses . I include ANOVA in that table.


January 28, 2021 at 10:57 am

Hope you are doing great. In your hypothesis test ebook, you clearly expressed that there is no need to worry about the normality assumption provided the sample is large. Now I see you emphasizing the need to determine the distribution of the data. Under what circumstances do I need to determine the distribution of my data so that I can make transformations before proceeding to hypothesis testing?

In other words, when do I have to worry about the normality of my data?

It’s because, after reading your ebook, I clearly noticed that normality is not a big issue I should pay attention to when my sample is large.

January 28, 2021 at 11:32 pm

Hi Collinz,

You’re quite right that many hypothesis tests don’t require the data to follow a normal distribution when you have a large enough sample. And, an important note, the sample size doesn’t have to be huge. Often you don’t need more than 15-20 observations per group to be able to waive the normality assumption. At any rate, on to answering your question!

There are other situations where knowing the distribution is crucial. Often these are situations where you want to determine probabilities of outcomes falling within particular ranges. For example, capability analysis determines a process’s capability of producing parts that fall within the spec limits. Or perhaps you want to calculate percentiles for your data using the probability distribution function. In these cases, you need to know which distribution best fits your data. In fact, it’ll often be obvious that the data don’t follow the normal distribution (as with the data in this example), and then the next step becomes determining which distribution your data follow.
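For instance, once you’ve identified a distribution, you can pull probabilities and percentiles straight from it. A sketch with SciPy, where the lognormal parameters and the limit are illustrative rather than the post’s estimates:

```python
# Hypothetical sketch: probabilities and percentiles from an identified
# distribution. The parameters below are made up for illustration.
from scipy import stats

dist = stats.lognorm(s=0.3, scale=20.0)  # made-up fitted lognormal
p95 = dist.ppf(0.95)        # 95th percentile of the population
frac_above = dist.sf(35.0)  # expected fraction exceeding a limit of 35
```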

Thanks for the great question! And, I hope that helps clarify it.


December 15, 2020 at 1:04 pm

Hi Jim, I have a question. Why do we need other continuous distributions if everything just converges to normal? Why do we need to define other distributions?

December 15, 2020 at 2:46 pm

Hi Eli/Asya,

Continuous distributions don’t necessarily converge to normality. As I describe in this post, some continuous distributions are naturally nonnormal. Gathering larger and larger samples for these inherently nonnormal distributions won’t produce a normal distribution.

I think you’re referring to the central limit theorem. This theorem states that sampling distributions of the mean will approximate the normal distribution even when the population distribution is not normal. The fact that this occurs is very helpful because it allows you to use some hypothesis tests even when the distribution of values is not normal. For more information, read my post about the central limit theorem.

However, sometimes you need to understand the properties of the distribution of values and not the sampling distribution, which are very different things. Consequently, there are occasions when you need to identify the distribution of your data!
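A quick simulation makes the distinction visible. This is only a sketch, with an exponential population chosen because it is strongly skewed:

```python
# Hypothetical sketch of the central limit theorem: individual values are
# skewed, but means of repeated samples are approximately normal.
import numpy as np

rng = np.random.default_rng(7)
# 10,000 samples of n = 40 from a skewed exponential population (mean = 1).
samples = rng.exponential(scale=1.0, size=(10_000, 40))
means = samples.mean(axis=1)

# The sampling distribution of the mean centers on 1 with standard error
# about 1/sqrt(40), and looks far more symmetric than the raw values.
```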


September 23, 2020 at 10:01 am

Thanks Jim for a wonderful article. I am new to the DS field and am trying to find ways to proceed on a project that I am working on. I have a dataset, say X, which is the number of hits our website receives, captured every hour (shall we call it the independent variable?). I also have Y1 and Y2, which are the dependent variables. Here Y1 is the CPU utilization and Y2 is the memory utilization of our servers. My objective is to calculate the expected CPU and memory utilization of, say, next month in relation to the volume, X, we receive. When I plot X (I am unable to paste the picture here), it shows proper daily and weekly seasonality. In a day, the graph rises to a max peak around 11 am, dips down, and again reaches another peak around 2 pm. So it’s kind of two bell curves in a day. This pattern repeats day after day… Also, the curves are similar on weekdays and weekends. Now I used fbprophet to do the forecast of X using past values of X.

Also, the Y1 CPU values make similar patterns – I am able to plot Y1 also and forecast using fbprophet. However, I am in a situation where I need to find out the exact correlation between X and Y1 and Y2, the correlation between Y1 & Y2 itself, and combinations of these 3… I tried the add_regressor() method of fbprophet to influence the forecast of Y1 and Y2. The resulting forecast values are much closer to the actuals (training data). However, I am not convinced with this approach. I need to mathematically derive the correlation between X and Y1, X and Y2, Y1 and Y2, and X and Y1 and Y2. I checked the Pearson correlation; the number is positive 0.025 between X and Y1. I tried ANOVA with Excel and it shows negative -1.025 (it says CPU is inversely correlated to volume), which is unbelievable because I expect only a positive correlation between X and Y1… I did Granger causality and it says X precedes Y1, which means my hypothesis that “volume contributes to CPU” is true… I am wondering how I can use a kind of moment generating function to exactly forecast or calculate values of Y1 and Y2 WITHOUT using forecasting models like ARIMA etc. I need to be able to calculate, with the least error margin, the values of Y1 and Y2 given the value of X… Please advise me on the best approach I need to take. Thanks in advance, DB [email protected]


September 14, 2020 at 4:49 pm

I have been stuck on a very important project for a long time, knowing in the back of my mind that if I could just learn what type of distribution my data set came from, I could make leaps and bounds worth of progress. I’m so glad I finally googled this problem directly and came upon this article.

I can’t stress enough how valuable this blogpost is to what I’m working on. Thank you so much Jim.


September 3, 2020 at 4:04 am

Thank you for your input. I am wondering what it means if I have a distribution where the mean and standard deviation are really close together. Is this an indication of something? The data is exponentially distributed.


August 25, 2020 at 5:30 pm

Thank you very much for your post! It helped me and a lot of other people out a lot!


August 21, 2020 at 8:17 am

It definitely helped! I appreciate your detailed answer. Through it and the links provided, I even managed to work out a couple of follow-up questions I was ruminating on!

Since I started meddling with statistics, I’ve been under the impression that the hard part is developing the mindset to appropriately understand the results… without it, one tends to just “believe” the numbers. I thank you kindly for helping me understand.

Keep up the good work!

August 21, 2020 at 11:30 am

I’m so glad to hear that! It’s easy to just go with the numbers. You learning how it all works is fantastic. I always think subject-area knowledge plus enough statistical knowledge to understand what the numbers are telling you, plus their limitations, is a crucial combination! Always glad to help!

August 20, 2020 at 11:54 am

I appreciate your detailed response, and the links provided allowed me to work out a couple of follow up questions!

Since I started meddling with statistics and (theoretically) learned to use the tools I required, I felt it takes time and practice to get the mindset needed to properly understand statistical results… and by default one tends to “believe” the numbers instead of understanding them! Thank you kindly for the attention.

August 19, 2020 at 4:09 pm

Thank you very much for your blog. Since I found it, I know where to search if I’m in dire need of statistical enlightenment!

I just noticed this article and it left me wondering… If the best-fit distribution is chosen based on the one that has the highest p-value, doesn’t it mean we’re accepting the null hypothesis? This aspect of goodness-of-fit tests always puzzled me.

I’ve skimmed through the comments and you address this somewhat, indicating that, technically, with high p-values “your sample provides insufficient evidence to conclude that the population follows a distribution other than the normal distribution”. If we accept the distribution with the highest p-value as the best-fit distribution, but formally speaking we shouldn’t accept the null hypothesis, how strong is the evidence given by goodness-of-fit tests?

Thanks again, and sorry for the long question

August 19, 2020 at 11:26 pm

That is a great question. As I mention, this is an unusual case where we look for higher p-values. However, it’s important to note that a high p-value is just one factor. You still need to incorporate your subject area knowledge. Notice that in this post, I don’t go with the distribution that has the highest p-value (3-parameter Weibull p > 0.500). Instead I go with the lognormal distribution, which has a high (0.345) p-value but not the highest. As I discuss near the end, I use subject area knowledge to choose between the two. So, it’s not just the p-value.

Also, bear in mind that you’re looking at a range of distribution tests. Presumably some of those tests will reject the null hypothesis and help rule out some distributions. Notice in this example (which uses real data) that low p-values rule out many distributions, which helps greatly in narrowing down the possibilities. Consequently, we’re not only picking by high p-values, we’re also using low p-values to rule out possibilities.

Also consider that statistical power is important. For hypothesis tests in general, when you have a small sample size, your statistical power is lower, which means it is easier to obtain high p-values. Normally, that is good because it protects you from jumping to conclusions based on the larger sampling error that tends to occur in smaller samples. I write about the protective function of high p-values. However, in this scenario with distribution tests, where you want high p-values, an underpowered study can lead you in the wrong direction. Do keep an eye out for small sample sizes. I point out in this post that small sample sizes can cause these distribution tests to fail to identify departures from a specific distribution. Using probability plots can help you identify some cases where a small sample deviates from the distribution being tested but the p-value is not significant. I discuss that in this post, but really focus on it in my post about using normal probability plots to assess normality. While that can help in some cases, you should always strive to have a larger sample size. I start to worry when it’s smaller than 20.


July 24, 2020 at 3:02 am

Thanks a million for this wonderful article. Honestly, I am also one of those not very comfortable with distributions other than normal ones. I was working on some data for which the distributions were very different from normal, and we wanted to perform linear regression. So, to even apply transformations to get to a normal shape, the first step was to identify the original distribution. Your article helped me learn something new and very important.

Thanks again for sharing !

Regards, Poonam

July 28, 2020 at 12:33 am

I’m happy to hear that it was helpful! Thanks for writing! 🙂


July 22, 2020 at 7:34 am

Hello Jim, thank you so much for this brilliant article. I am looking forward to the use cases after knowing the underlying distribution – is the article up? Thank you 🙂

July 22, 2020 at 10:18 pm

Thanks for the reminder! It’s not up yet but I do need to get around to writing it!


July 21, 2020 at 6:33 pm

This is a wonderful article for a student like myself, who is just beginning a statistics-oriented career. I want to know how to generate those 95% CI interval plots (%fat vs. Percent). Further, I’m assuming that whenever any such activity needs to be done, we would have to start off with the frequency distribution and then transition to the probability distribution, correct? And is this probability distribution the same as the pdf? Please help me clear up my doubts.

June 28, 2020 at 3:22 am

Hello Jim, thank you for making statistics so easy to understand. I am pleased to inform you that I have managed to buy all 3 of your books, and I hope they will be of much help to me… And I hope you could also write about how to report research work for publication.


June 4, 2020 at 2:48 pm

Okay, thank you so much. I really got the concept from your explanation; it is very clear!

June 4, 2020 at 2:55 pm

Thanks, Hana! I’m so glad to hear that!

June 4, 2020 at 9:54 am

Thanks Jim for the interesting and useful article. Do you recommend any alternative to Minitab, maybe an R package or other free software?

June 4, 2020 at 1:35 pm

I don’t have a good sense for what other software, particularly free, would be best. I’m sure it’s doable in R though.


June 3, 2020 at 5:28 am

Hi Jim, thanks for this text. I want to ask you: how do I run a goodness-of-fit test in R when the distribution is not one of the defaults?


May 22, 2020 at 10:12 am

Thank you for this awesome article; it is very much helpful. Quick question here: what should I look for when comparing the distribution of one sample against the distribution of another sample?

The end goal is to ensure that they are similar, so I imagine I want to make sure that their means are the same (an ANOVA test) and that their variances are the same (F-Test).

May 22, 2020 at 12:53 pm

There’s a distinction between identifying the distribution of your data (Normal vs. Weibull, Lognormal, etc.) and estimating the properties of your distribution. Although, identifying the distribution does involve estimating the properties for each type of distribution.

The method you describe would help you determine whether those two properties (means and variances) are different. Just be mindful of the statistical power of these tests. If you have particularly small sample sizes, the tests won’t be sensitive enough to detect unequal means or variances. Failing to reject the null doesn’t necessarily prove they’re equal.

Additionally, testing the means using ANOVA assumes that the variances are equal unless you use Welch’s ANOVA. I write more about this in an article about Welch’s ANOVA versus the typical F-test ANOVA .
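As a sketch of those comparisons with SciPy on made-up samples (for two groups, Welch’s t-test plays the role of Welch’s ANOVA):

```python
# Hypothetical sketch: compare two samples' means and variances.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
a = rng.normal(loc=10, scale=2, size=60)  # made-up samples
b = rng.normal(loc=10, scale=2, size=60)

t_stat, p_mean = stats.ttest_ind(a, b, equal_var=False)  # Welch's t-test
w_stat, p_var = stats.levene(a, b)                       # equal-variance test
# Remember: high p-values here fail to reject equality; they don't prove it.
```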


April 18, 2020 at 3:53 pm

Good question. Once you know the distribution of your data you can actually have a better prediction of uncertainties such as likelihood of occurrence of events and corresponding impacts. You can also make some meaningful decisions by setting categories.


April 13, 2020 at 1:07 pm

Thank you for the brilliant explanation. Please, what else can I do by simply knowing the distribution of my data?


April 13, 2020 at 9:12 am

Hi Jim, very good explanation. Thank you so much for your effort. I have downloaded the Minitab software, but unfortunately I couldn’t find the goodness-of-fit tab. Where can I find it? Kindly reply.

March 17, 2020 at 7:48 am

Please, what else can I do by simply knowing the distribution of my data?


November 14, 2019 at 5:24 am

Hello Jim, your article is very clear and easy to understand for a newbie in stats. I’m looking forward to the article that shows me what I can do by simply knowing the distribution of my data. Have you already published it? If yes, can you send me the link?

Thanks again,


October 5, 2019 at 3:48 pm

How can I understand the p-value in distribution identification with a goodness-of-fit test?

For example, when the p-value is 0.45 with the normal distribution, does it mean a data point has a 45% probability of fitting the normal distribution?

Thank you very much!

October 5, 2019 at 3:58 pm

When the p-value is less than your significance level, you reject the null hypothesis. That’s the general rule. In this case, the null hypothesis states that the data follow a specific distribution, such as the normal distribution. Consequently, if the p-value is greater than the significance level, you fail to reject the null hypothesis. Your data favor the notion that they follow the distribution you are assessing. In your case, the p-value of 0.45 indicates you can reasonably assume that your data follow the normal distribution.

As for the precise meaning of the p-value, it indicates the probability of obtaining your observed sample, or one more extreme, if the null hypothesis is true. Your sample doesn’t perfectly follow the normal distribution. No sample follows it perfectly. There’s always some deviation. The deviation between the distribution of your sample and the normal distribution, and more extreme deviations, have a 45% chance of occurring if the null hypothesis is true (i.e., the population distribution is normally distributed). In other words, your sample is not unusual if the population is normally distributed. Hence, our conclusion that your sample follows a normal distribution. Technically, we’d say that your sample provides insufficient evidence to conclude that the population follows a distribution other than the normal distribution.

P-values are commonly misinterpreted in the manner that you state. For more information, read my post about interpreting p-values correctly .


September 8, 2019 at 11:07 am

Thanks Jim! Unfortunately, I can’t find Minitab in the CRAN repository. Is there any other way to download the package? Is it available for R version 3.5.1?

September 8, 2019 at 7:38 pm

Minitab is an entirely separate statistical software package–like SPSS (but different). It’s not an R function. Sorry about the confusion!

September 6, 2019 at 11:36 am

Hi Jim, thanks for this explanation! Is Minitab a function or a package? I’m wondering how you performed the goodness of fit for multiple distributions.

Many thanks, Hanna

September 6, 2019 at 2:23 pm

Minitab is a statistical software package. Performing the various goodness-of-fit tests all at once is definitely a convenience. However, you can certainly try them one at a time. I’m not sure how other packages handle that.
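If you’re working outside Minitab, one-at-a-time testing might look like this sketch with SciPy’s Anderson–Darling test (which covers only a handful of distributions; the data here are simulated):

```python
# Hypothetical sketch: Anderson-Darling tests, one distribution at a time.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
data = rng.normal(loc=50, scale=5, size=80)  # simulated sample

ad = {name: stats.anderson(data, dist=name).statistic
      for name in ("norm", "expon", "logistic")}
# Smaller AD statistics indicate a closer fit to the tested distribution;
# compare each statistic to its own critical values before concluding.
```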


August 18, 2019 at 4:24 am

Hi Jim, hope you are in the best of your health. I had a query with regard to the application part while modelling the severity of an event, say claim sizes in an insurance company: which distribution would be an ideal choice, gamma or lognormal? As far as I could make sense of it, lognormal is preferable for modelling while dealing with units whose unit size is very small, e.g., alpha particles emitted per minute. Am I on the right lines? Thanks a ton.

August 19, 2019 at 2:20 pm

Hi Lakshay,

I don’t know enough about claim sizes to be able to say–that’s not my field of expertise. You’ll probably need to do some research and try fitting some distributions to your data to see which one fits the best. I show you how to do that in this blog post.

Many distributions can model very small units. It’s more the overall shape of the distribution that is the limiting factor. Lognormal distributions are particularly good at modeling skewed distributions. I show an example of a lognormal distribution in this post. However, other distributions can model skewed distributions, such as the Weibull distribution. So, it depends on the precise shape of the skewness.

In general, the Weibull distribution is a very flexible distribution that can fit a wide variety of shapes. That would be a good distribution to start with if I had to name just one (besides the normal distribution). However, you should assess other distributions. Even though the Weibull distribution is very flexible, it did not provide the best fit for my real world data that I show in this post.

I hope this helps!

July 18, 2019 at 5:45 pm

Is there any difference between a distribution (hypothesis) test and a goodness-of-fit test? Or are they the same thing?

July 18, 2019 at 9:46 pm

They’re definitely related. However, goodness-of-fit is a broader term. It includes distribution tests but it also includes measures such as R-squared, which assesses how well a regression model fits the data.

A distribution test is a more specific term that applies to tests that determine how well a probability distribution fits sample data.

Distribution tests are a subset of goodness-of-fit tests.

June 28, 2019 at 9:26 pm

Excellent article and I found it very helpful. I opened the csv data file of body fat % in Excel and I found there were 92 separate data points. Could you please let me know if this data is discrete or continuous, if you don’t mind me asking? Thank you.

July 1, 2019 at 12:20 am

The data are recorded as percentages. Therefore, they are continuous data.

May 23, 2019 at 8:29 am

Hi Jim, how are you? I really wish to thank you for your indefatigable efforts toward sharing your publications with the world. They help me very much to prepare fully for university education. Furthermore, I would like to have a blog copy of your work. Thank you.

May 24, 2019 at 12:23 am

Thanks so much for writing. I really appreciate your kind words!

The good news is that I’ll be writing a series of ebooks that goes far beyond what I can cover in my blog posts. I’ve completed the first one, which is an ebook about regression analysis . I’ll be working on others. The next one up is an introduction to statistics.

March 5, 2019 at 12:45 am

Thank you so much for your detailed email. I really appreciate it.

March 5, 2019 at 12:11 am

Thanks for your detailed reply. I am using cross-sectional continuous data on inputs (11 variables) used in a crop production system. Variability exists within the dataset due to different levels of input consumption in farming systems, and in some cases some inputs are even zero. Should I go for any grouping of the data? If yes, what kind of grouping approach should I use? I am basically interested in the uncertainty analysis of inputs (fuel and chemical consumption) and sensitivity analysis of the desired output and associated environmental impacts. It will be great if you can guide me. Thanks.

March 5, 2019 at 12:23 am

Given the very specific details of your data and goals for your study, I think you’ll need to discuss this with someone who can sit down with you and go over all of it and give it the time that your study deserves. There’s just not enough information for me to go on and I don’t have the time, unfortunately, to really look into it.

One thing I can say is that if you’re trying to link your inputs to changes in the output, consider using regression analysis. For regression analysis, you only need to worry about the distribution of your residuals rather than your inputs and outputs. Regression is all about linking changes in the inputs to changes in the output. Read my post about when to use regression analysis for more information. It sounds like that might be the goal of your analysis, but I’m not sure.

Best of luck with your study!

March 4, 2019 at 4:28 pm

Thanks for the reply. No I am trying to determine the distribution of my survival curve from a published analysis. I was able to identify the survival probabilities from the published graph. The Minitab program only allows for the importation of one column. The distribution looks like a Weibull distribution but the Minitab results showed a normal distribution had the highest P value which didn’t make sense.

March 4, 2019 at 4:42 pm

Ok, in your original comment you didn’t mention that you were using a published graph. I don’t fully understand what you’re trying to do, and it’s impossible for a concrete reply without the full details. However, below are some things to consider.

Analysts often need to use their process knowledge to help them determine which distribution is appropriate. Perhaps that’s a consideration here? I also don’t know how different the p-values are. Small differences are not meaningful. Additionally, in some cases, Weibull distributions can approximate a normal distribution. Consequently, there might be only a small difference between those distributions.

But, it’s really hard to say. There’s just not enough information.

March 4, 2019 at 2:43 pm

Thanks for your reply. Yes, on the basis of p-values only, I am concluding that the data are not following any distribution. My sample size is 1366 with 11 variables. None follows a normal distribution. I tried a Box-Cox transformation and checked normality again using the p-value. After transformation, the data points of some variables largely follow the line, but some data points deviate from the line either at the beginning or at the end.

However, for some variables the data points largely follow the line even without transformation, with some points deviating at the ends. Thanks.

March 4, 2019 at 3:26 pm

You have a particularly large sample size. Consequently, you might need to focus more on the probability plots rather than the p-values. I suggest you read the section in this post that is titled “Using Probability Plots to Identify the Distribution of Your Data.” It describes how the additional power that distribution tests have with large sample sizes can detect meaningless deviations from a distribution. In those cases, using probability plots might be a better approach.

After you look at the probability plots, if an untransformed distribution fits well, I’d use that, otherwise go with the transformed.

You didn’t mention what you’re using these data for but be aware that some hypothesis tests are robust to departures from the normal distribution.

March 4, 2019 at 1:12 pm

After reading your blog, I tried Minitab to check the distribution of my data, but surprisingly it does not follow any of the listed probability distributions. Could you please help me with how I should move forward? Thanks.

March 4, 2019 at 2:27 pm

Before I can attempt to answer your question, I need to ask you several questions about your data.

What type of data are you talking about? Did the Box-Cox transformation or Johnson transformation produce a good fit? What is your sample size? Are you primarily going by p-values? If so, do any of the probability plots look good? Good meaning that the data points largely follow the line. There’s also the informal “fat pencil” test: if you lay a pencil over the line, do the data points stay within it?

February 23, 2019 at 6:34 pm

Jim, I enjoyed this blog. I tried to determine the distribution and parameters of a survival curve by importing it into Minitab. Minitab only allows one parameter or column, while the survival curve has time on the x-axis and survival probability on the y-axis. How does one find the type of curve and the parameters of a survival curve?

March 4, 2019 at 4:09 pm

Sorry about the delay in replying!

If I’m understanding your question correctly, the answer is that creating a survival plot with a survival curve is not a part of the process for identifying your distribution in Minitab that I show in this blog post. However, you can find the proper analyses in the Reliability/Survival menu in Minitab. In that menu path, there are distribution analyses for failure data specifically.

Additionally, there are other analyses in the Reliability/Survival path including the following:

Stat > Reliability/Survival > Probit Analysis.

And, if you’re using accelerated testing: Stat > Reliability/Survival > Accelerated Life Testing.

February 11, 2019 at 5:11 pm

Hi Jim, in the next-to-last graph in your post (the distribution plots), you say the Weibull plot stops abruptly at the location value of 3.32. Yet it appears to stop at more like 13-ish. Did I misunderstand something, or is the graph incorrect? Also what is the ‘scale’ metric in Weibull plots? Thanks –

February 12, 2019 at 8:35 pm

Hi Jerry, the Weibull distribution actually stops at the threshold value of ~16. The threshold value shifts the distribution along the X-axis relative to zero. Consequently, a threshold of 16 indicates the distribution starts with the lowest value of 16. Without the threshold parameter, the Weibull distribution starts at zero.

The scale parameter is similar to a measure of dispersion. For a given shape, it indicates how spread out the values are.

Here’s a nice site that shows the effect of the shape, scale, and threshold parameters for the Weibull distribution .
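The effect of the threshold parameter described above can be sketched with scipy, where `loc` plays the role of Minitab’s threshold (a sketch with illustrative parameter values of my own, not the fitted values from the post):

```python
# Sketch: scipy's `loc` is the Weibull threshold parameter, shifting
# where the distribution starts along the x-axis. The shape, scale, and
# threshold values here are illustrative, not the post's fitted values.
from scipy import stats

shape, scale, threshold = 1.5, 4.0, 16.0
dist = stats.weibull_min(c=shape, loc=threshold, scale=scale)

samples = dist.rvs(size=10_000, random_state=0)
lowest = samples.min()
print(lowest)          # no simulated value falls below the threshold of 16
print(dist.ppf(0.5))   # the median is also shifted right by the threshold
```

With `loc=0` (the default), the same distribution would start at zero, which matches the reply’s description.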

November 28, 2018 at 12:44 pm

Hi, Jim. Thank you so much for your detailed reply. Is the null hypothesis in Minitab that the data follow the specific distribution? So a larger p-value cannot reject the null hypothesis? Another question is about the correlation coefficient (the PPCC value): can it denote the goodness-of-fit of each distribution? Thanks.

November 28, 2018 at 2:14 pm

Hi Alice, you’re very welcome!

Yes, as I detail in this post, the null hypothesis states that the data follow the specific distribution. Consequently, a low p-value suggests that you should reject the null and conclude that the data do not follow that distribution. Reread this post for more information about that aspect of these distribution tests.

Unless Minitab has changed something that I’m unaware of, you do not need to worry about PPCC when interpreting the probability plots. Again, reread this post to learn how to interpret the probability plots. With such a large sample size, it will be more important for you to interpret the probability plots rather than the p-values.

If you want to learn about PPCC for other reasons, here’s a good source of information about it: Probability Plot Correlation Coefficient .

Best of luck with your analysis!

November 27, 2018 at 5:00 pm

Hi, Jim. Also, I have tried the calculation with 1000 data points, but the p-value is extremely small. The p-values of almost all distributions are less than 0.005. Do you have any suggestions about this?

November 28, 2018 at 1:32 am

Yes, this is the issue that I described in my first response to you. With so many data points (even 1000 is a large dataset) these tests are very powerful. Trivial departures from the distribution will produce a low p-value. That’s why you’ll likely need to focus on the probability plots for each distribution.

November 27, 2018 at 4:10 pm

Jim, Thanks for your detailed reply. Actually, I have tried the probability plots, and several distributions perform almost the same. And I used Minitab to calculate the p-value, but the software said that it is out of stock. There are no results for the p-value. Do you know how to deal with it? Is the data (1,000,000) too much for the p-value calculation? Thanks.

November 28, 2018 at 1:30 am

I’m not sure. I think it might be out of memory. You should contact their technical support to find out for sure. They’ll know. Their support is quite good. You’ll reach a real person very quickly who can help you.

To answer your question, no, it’s not possible to have too many data points to calculate the p-value mathematically. But, it’s possible that the program can’t handle such a large dataset. I’m not sure about that.

November 27, 2018 at 11:43 am

Hi, Jim. Thanks for your detailed explanation. Actually, I have no experience with Minitab. I have a large matrix (1,000,000 × 13), but I found that when I went to Stat > Quality Tools > Individual Distribution Identification in Minitab, it can only do single-column data analysis. And it seems to take a long time for 1,000,000 data points. Do you have any suggestions about how to find the appropriate distribution for 13 columns with 1,000,000 data points each?

November 27, 2018 at 1:54 pm

Yes, that tool analyzes individual columns only. It assesses the distribution of a single variable. If you’re looking for some sort of multivariate distribution analysis, it won’t do that.

I think Minitab is good software, but it can struggle with extremely large datasets like yours.

One thing to be aware of is that with so many data points, the distribution tests become extremely powerful. They will be so powerful that they can detect trivial departures from a distribution. In other words, your data might follow a specific distribution, but the test is so powerful that it will reject the null hypothesis that it follows that distribution. For such a large dataset, pay particular attention to the probability plots! If those look good but the p-value is significant, go with the graph!
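The power issue described above can be illustrated with a small simulation (entirely my own construction, assuming scipy; the contamination mixture and sample sizes are arbitrary): a departure from normality far too small to matter in practice may well pass a normality test at n = 100 but is reliably flagged at n = 100,000.

```python
# Illustrative simulation (not from the post): a mild, practically
# meaningless departure from normality is flagged by a distribution
# test once the sample size is very large, which is why probability
# plots matter more than p-values for big datasets.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def near_normal(n):
    # 90% N(0, 1) plus 10% N(0, 1.5): visually indistinguishable
    # from normal on a probability plot
    base = rng.normal(0.0, 1.0, size=n)
    wide = rng.normal(0.0, 1.5, size=n)
    return np.where(rng.random(n) < 0.10, wide, base)

pvalues = {}
for n in (100, 100_000):
    stat, p = stats.normaltest(near_normal(n))  # D'Agostino-Pearson test
    pvalues[n] = p
    print(n, p)
```

The underlying distribution is identical in both runs; only the test’s power changes with n.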

September 24, 2018 at 4:56 am

Hi Jim, very nicely explained. Thank you so much for your effort. Are your blogs available in a printable version?

September 24, 2018 at 9:58 am

Hi Rashmi, thank you so much for your kind words. I really appreciate them!

I’m working on ebooks that contain the blog material plus a lot more! The first one should be available early next year.

September 13, 2018 at 7:28 pm

Hello. I have a question. I have genome data that has lots of zeros. I want to check the distribution of this data. It could be continuous or discrete. We can use ks.test, but that is for continuous distributions. Is there any way to check if the data follow a specific discrete distribution? Thank you

September 14, 2018 at 11:55 pm

Hi, there are distribution tests for discrete data. I’d start by reading my post about goodness-of-fit tests for discrete distributions and see if that helps.

September 10, 2018 at 9:45 am

Hi Jim, Great sharing. I have performed distribution identification on my nonnormal data; however, none of the distributions has a good fit to my data. All the p-values < 0.05. What different approaches can I use to study the distribution model before I can perform capability analysis? 🙂 Thanks for your help

February 7, 2018 at 5:42 am

Hi Jim! I’m trying to test the distribution of my data in SPSS and have used the one-sample Kolmogorov-Smirnov test, which tests for normal, uniform, Poisson, or exponential distributions. None of them fits my data… How do I proceed? I don’t know how to work in R or Minitab, so do you know if there’s another test in SPSS I can do, or do I have to learn a new program? I need to know the distribution to be able to choose the right model for the GLM test I’m going to do.

February 7, 2018 at 2:38 pm

Hi Alice! Unfortunately, I haven’t used SPSS in quite some time and I’m not familiar with its distribution testing capabilities. The one additional distribution that I’d check is the Weibull distribution. That is a particularly flexible distribution that can fit many different shapes–but I don’t see that in your list.

January 22, 2018 at 2:30 pm

Very good explanation!!

January 22, 2018 at 2:33 pm

Thank you, Maria!

September 5, 2017 at 12:16 am

Great explanation, thanks Jim.

September 5, 2017 at 12:55 am

Thank you, Olayan!

May 1, 2017 at 1:32 am

Thanks Jim. I am going to try the same implementation using stata and/or R.

May 1, 2017 at 1:54 am

You’re very welcome Wilbrod! Best of luck with your analysis!

April 26, 2017 at 2:36 pm

Great article! 🙂

What software are you using to evaluate the distribution of the data?

April 26, 2017 at 3:18 pm

Hi Charles, thanks and I’m glad you found it helpful! I’m using Minitab statistical software.

April 26, 2017 at 1:06 pm

Hello Jim, what kind of statistics software do you use?

April 26, 2017 at 2:45 pm

Hi Ricardo, I’m using Minitab statistical software.

April 26, 2017 at 10:48 pm

what a fantastic example Jim!

April 26, 2017 at 11:04 pm

Thank you so much, Muhammad!


8.7 Hypothesis Tests for a Population Mean with Unknown Population Standard Deviation

Learning Objectives

  • Conduct and interpret hypothesis tests for a population mean with unknown population standard deviation.

Some notes about conducting a hypothesis test:

  • The null hypothesis [latex]H_0[/latex] is always an “equal to.”  The null hypothesis is the original claim about the population parameter.
  • The alternative hypothesis [latex]H_a[/latex] is a “less than,” “greater than,” or “not equal to.”  The form of the alternative hypothesis depends on the context of the question.
  • If the alternative hypothesis is a “less than”,  then the test is left-tail.  The p -value is the area in the left-tail of the distribution.
  • If the alternative hypothesis is a “greater than”, then the test is right-tail.  The p -value is the area in the right-tail of the distribution.
  • If the alternative hypothesis is a “not equal to”, then the test is two-tail.  The p -value is the sum of the area in the two-tails of the distribution.  Each tail represents exactly half of the p -value.
  • Think about the meaning of the p -value.  A data analyst (and anyone else) should have more confidence that they made the correct decision to reject the null hypothesis with a smaller p -value (for example, 0.001 as opposed to 0.04) even if using a significance level of  0.05.  Similarly, for a large p -value such as 0.4, as opposed to a p -value of 0.056 (a significance level of 0.05 is less than either number), a data analyst should have more confidence that they made the correct decision in not rejecting the null hypothesis.  This makes the data analyst use judgment rather than mindlessly applying rules.
  • The significance level must be identified before collecting the sample data and conducting the test.  Generally, the significance level will be included in the question.  If no significance level is given, a common standard is to use a significance level of 5%.
  • An alternative approach for hypothesis testing is to use what is called the critical value approach .  In this book, we will only use the p -value approach.  Some of the videos below may mention the critical value approach, but this approach will not be used in this book.

Steps to Conduct a Hypothesis Test for a Population Mean with Unknown Population Standard Deviation

  • Write down the null and alternative hypotheses in terms of the population mean [latex]\mu[/latex].  Include appropriate units with the values of the mean.
  • Use the form of the alternative hypothesis to determine if the test is left-tailed, right-tailed, or two-tailed.
  • Collect the sample information for the test and identify the significance level [latex]\alpha[/latex].
  • Find the p -value (the area in the corresponding tail) for the test using the [latex]t[/latex]-distribution, where

[latex]\begin{eqnarray*} t & = & \frac{\overline{x}-\mu}{\frac{s}{\sqrt{n}}} \\ \\ df & = & n-1 \\ \\ \end{eqnarray*}[/latex]

  • Compare the p -value to the significance level [latex]\alpha[/latex] and state the outcome of the test:
  • If p -value [latex]\leq \alpha[/latex], reject the null hypothesis.  The results of the sample data are significant.  There is sufficient evidence to conclude that the null hypothesis [latex]H_0[/latex] is an incorrect belief and that the alternative hypothesis [latex]H_a[/latex] is most likely correct.
  • If p -value [latex]\gt \alpha[/latex], do not reject the null hypothesis.  The results of the sample data are not significant.  There is not sufficient evidence to conclude that the alternative hypothesis [latex]H_a[/latex] may be correct.
  • Write down a concluding sentence specific to the context of the question.

USING EXCEL TO CALCULATE THE P -VALUE FOR A HYPOTHESIS TEST ON A POPULATION MEAN WITH UNKNOWN POPULATION STANDARD DEVIATION

The p -value for a hypothesis test on a population mean is the area in the tail(s) of the distribution of the sample mean.  When the population standard deviation is unknown, use the [latex]t[/latex]-distribution to find the p -value.

If the p -value is the area in the left-tail, use Excel’s t.dist function:

  • For t-score , enter the value of [latex]t[/latex] calculated from [latex]\displaystyle{t=\frac{\overline{x}-\mu}{\frac{s}{\sqrt{n}}}}[/latex].
  • For degrees of freedom , enter the degrees of freedom for the [latex]t[/latex]-distribution [latex]n-1[/latex].
  • For the logic operator , enter true .  Note:  Because we are calculating the area under the curve, we always enter true for the logic operator.
  • The output from the t.dist function is the area under the [latex]t[/latex]-distribution to the left of the entered [latex]t[/latex]-score.
  • Visit the Microsoft page for more information about the t.dist function.

If the p -value is the area in the right-tail, use Excel’s t.dist.rt function:

  • For t-score , enter the value of [latex]t[/latex] calculated from [latex]\displaystyle{t=\frac{\overline{x}-\mu}{\frac{s}{\sqrt{n}}}}[/latex].
  • For degrees of freedom , enter the degrees of freedom for the [latex]t[/latex]-distribution [latex]n-1[/latex].
  • The output from the t.dist.rt function is the area under the [latex]t[/latex]-distribution to the right of the entered [latex]t[/latex]-score.
  • Visit the Microsoft page for more information about the t.dist.rt function.

If the p -value is the sum of area in the tails, use Excel’s t.dist.2t function:

  • For t-score , enter the absolute value of [latex]t[/latex] calculated from [latex]\displaystyle{t=\frac{\overline{x}-\mu}{\frac{s}{\sqrt{n}}}}[/latex].  Note:  In the t.dist.2t function, the value of the [latex]t[/latex]-score must be a positive number.  If the [latex]t[/latex]-score is negative, enter the absolute value of the [latex]t[/latex]-score into the t.dist.2t function.
  • For degrees of freedom , enter the degrees of freedom for the [latex]t[/latex]-distribution [latex]n-1[/latex].
  • The output from the t.dist.2t function is the sum of areas in the tails under the [latex]t[/latex]-distribution.
  • Visit the Microsoft page for more information about the t.dist.2t function.
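As a rough correspondence (my own mapping, assuming scipy is available), the three Excel functions line up with scipy’s t-distribution methods like this; the t-score and degrees of freedom here are arbitrary illustration values:

```python
# Rough mapping between Excel's t-distribution functions and scipy.
from scipy import stats

t, df = -1.57, 29

left  = stats.t.cdf(t, df)           # T.DIST(t, df, TRUE): left-tail area
right = stats.t.sf(t, df)            # T.DIST.RT(t, df): right-tail area
two   = 2 * stats.t.sf(abs(t), df)   # T.DIST.2T(ABS(t), df): both tails

print(left, right, two)
```

Note that, like t.dist.2t, the two-tail area is computed from the absolute value of the t-score.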

Statistics students believe that the mean score on the first statistics test is 65.  A statistics instructor thinks the mean score is higher than 65.  He samples ten statistics students and obtains the following scores:

65 67 66 68 72
65 70 63 63 71

The instructor performs a hypothesis test using a 1% level of significance. The test scores are assumed to be from a normal distribution.

Hypotheses:

[latex]\begin{eqnarray*} H_0: & & \mu=65  \\ H_a: & & \mu \gt 65  \end{eqnarray*}[/latex]

From the question, we have [latex]n=10[/latex], [latex]\overline{x}=67[/latex], [latex]s=3.1972...[/latex] and [latex]\alpha=0.01[/latex].

This is a test on a population mean where the population standard deviation is unknown (we only know the sample standard deviation [latex]s=3.1972...[/latex]).  So we use a [latex]t[/latex]-distribution to calculate the p -value.  Because the alternative hypothesis is a [latex]\gt[/latex], the p -value is the area in the right-tail of the distribution.

This is a t-distribution curve. The peak of the curve is at 0 on the horizontal axis. The point t is also labeled. A vertical line extends from point t to the curve with the area to the right of this vertical line shaded. The p-value equals the area of this shaded region.

To use the t.dist.rt function, we need to calculate out the [latex]t[/latex]-score:

[latex]\begin{eqnarray*} t & = & \frac{\overline{x}-\mu}{\frac{s}{\sqrt{n}}} \\ & = & \frac{67-65}{\frac{3.1972...}{\sqrt{10}}} \\ & = & 1.9781... \end{eqnarray*}[/latex]

The degrees of freedom for the [latex]t[/latex]-distribution is [latex]n-1=10-1=9[/latex].

t.dist.rt(1.9781…, 9) = 0.0396
So the p -value[latex]=0.0396[/latex].
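Assuming scipy is available, the same p -value can be cross-checked from the raw scores with `scipy.stats.ttest_1samp` (a sketch outside the textbook’s Excel workflow):

```python
# Cross-check of the test-scores example using scipy instead of Excel.
from scipy import stats

scores = [65, 67, 66, 68, 72, 65, 70, 63, 63, 71]
res = stats.ttest_1samp(scores, popmean=65, alternative="greater")
print(res.statistic)  # t-score, about 1.9781
print(res.pvalue)     # right-tail p-value, about 0.0396
```

The `alternative="greater"` argument matches the right-tailed alternative hypothesis.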

Conclusion:

Because p -value[latex]=0.0396 \gt 0.01=\alpha[/latex], we do not reject the null hypothesis.  At the 1% significance level there is not enough evidence to suggest that the mean score on the test is greater than 65.

  • The null hypothesis [latex]\mu=65[/latex] is the claim that the mean test score is 65.
  • The alternative hypothesis [latex]\mu \gt 65[/latex] is the claim that the mean test score is greater than 65.
  • Keep all of the decimals throughout the calculation (i.e. in the sample standard deviation, the [latex]t[/latex]-score, etc.) to avoid any round-off error in the calculation of the p -value.  This ensures that we get the most accurate value for the p -value.
  • The p -value is the area in the right-tail of the [latex]t[/latex]-distribution, to the right of [latex]t=1.9781...[/latex].
  • The p -value of 0.0396 tells us that, under the assumption that the mean test score is 65 (the null hypothesis), there is a 3.96% chance of observing a sample mean of 67 or more.  Compared to the 1% significance level, this is a large probability, and so such a sample is likely to happen assuming the null hypothesis is true.  This suggests that the assumption that the null hypothesis is true is most likely correct, and so the conclusion of the test is to not reject the null hypothesis.

A company claims that the average change in the value of their stock is $3.50 per week.  An investor believes this average is too high. The investor records the changes in the company’s stock price over 30 weeks and finds the average change in the stock price is $2.60 with a standard deviation of $1.80.  At the 5% significance level, is the average change in the company’s stock price lower than the company claims?

[latex]\begin{eqnarray*} H_0: & & \mu=$3.50  \\ H_a: & & \mu \lt $3.50  \end{eqnarray*}[/latex]

From the question, we have [latex]n=30[/latex], [latex]\overline{x}=2.6[/latex], [latex]s=1.8[/latex] and [latex]\alpha=0.05[/latex].

This is a test on a population mean where the population standard deviation is unknown (we only know the sample standard deviation [latex]s=1.8[/latex]).  So we use a [latex]t[/latex]-distribution to calculate the p -value.  Because the alternative hypothesis is a [latex]\lt[/latex], the p -value is the area in the left-tail of the distribution.

This is a t-distribution curve. The peak of the curve is at 0 on the horizontal axis. The point t is also labeled. A vertical line extends from point t to the curve with the area to the left of this vertical line shaded. The p-value equals the area of this shaded region.

To use the t.dist function, we need to calculate out the [latex]t[/latex]-score:

[latex]\begin{eqnarray*} t & = & \frac{\overline{x}-\mu}{\frac{s}{\sqrt{n}}} \\ & = & \frac{2.6-3.5}{\frac{1.8}{\sqrt{30}}} \\ & = & -1.5699... \end{eqnarray*}[/latex]

The degrees of freedom for the [latex]t[/latex]-distribution is [latex]n-1=30-1=29[/latex].

t.dist(-1.5699…, 29, true) = 0.0636
So the p -value[latex]=0.0636[/latex].

Because p -value[latex]=0.0636 \gt 0.05=\alpha[/latex], we do not reject the null hypothesis.  At the 5% significance level there is not enough evidence to suggest that the average change in the stock price is lower than $3.50.

  • The null hypothesis [latex]\mu=$3.50[/latex] is the claim that the average change in the company’s stock is $3.50 per week.
  • The alternative hypothesis [latex]\mu \lt $3.50[/latex] is the claim that the average change in the company’s stock is less than $3.50 per week.
  • The p -value is the area in the left-tail of the [latex]t[/latex]-distribution, to the left of [latex]t=-1.5699...[/latex].
  • The p -value of 0.0636 tells us that, under the assumption that the average weekly change in the stock is $3.50 (the null hypothesis), there is a 6.36% chance of observing a sample average change of $2.60 or less.  Compared to the 5% significance level, this is a large probability, and so such a sample is likely to happen assuming the null hypothesis is true.  This suggests that the assumption that the null hypothesis is true is most likely correct, and so the conclusion of the test is to not reject the null hypothesis.  In other words, the company’s claim that the average change in their stock price is $3.50 per week is most likely correct.

A paint manufacturer has their production line set up so that the average volume of paint in a can is 3.78 liters.  The quality control manager at the plant believes that something has happened with the production and the average volume of paint in the cans has changed.  The quality control department takes a sample of 100 cans and finds the average volume is 3.62 liters with a standard deviation of 0.7 liters.  At the 5% significance level, has the volume of paint in a can changed?

[latex]\begin{eqnarray*} H_0: & & \mu=3.78 \mbox{ liters}  \\ H_a: & & \mu \neq 3.78 \mbox{ liters}  \end{eqnarray*}[/latex]

From the question, we have [latex]n=100[/latex], [latex]\overline{x}=3.62[/latex], [latex]s=0.7[/latex] and [latex]\alpha=0.05[/latex].

This is a test on a population mean where the population standard deviation is unknown (we only know the sample standard deviation [latex]s=0.7[/latex]).  So we use a [latex]t[/latex]-distribution to calculate the p -value.  Because the alternative hypothesis is a [latex]\neq[/latex], the p -value is the sum of area in the tails of the distribution.

This is a t distribution curve. The peak of the curve is at 0 on the horizontal axis. The point -t and t are also labeled. A vertical line extends from point t to the curve with the area to the right of this vertical line shaded with the shaded area labeled half of the p-value. A vertical line extends from -t to the curve with the area to the left of this vertical line shaded with the shaded area labeled half of the p-value. The p-value equals the area of these two shaded regions.

To use the t.dist.2t function, we need to calculate out the [latex]t[/latex]-score:

[latex]\begin{eqnarray*} t & = & \frac{\overline{x}-\mu}{\frac{s}{\sqrt{n}}} \\ & = & \frac{3.62-3.78}{\frac{0.7}{\sqrt{100}}} \\ & = & -2.2857... \end{eqnarray*}[/latex]

The degrees of freedom for the [latex]t[/latex]-distribution is [latex]n-1=100-1=99[/latex].

t.dist.2t(2.2857…, 99) = 0.0244
So the p -value[latex]=0.0244[/latex].

Because p -value[latex]=0.0244 \lt 0.05=\alpha[/latex], we reject the null hypothesis in favour of the alternative hypothesis.  At the 5% significance level there is enough evidence to suggest that the average volume of paint in the cans has changed.

  • The null hypothesis [latex]\mu=3.78 \mbox{ liters}[/latex] is the claim that the average volume of paint in the cans is 3.78 liters.
  • The alternative hypothesis [latex]\mu \neq 3.78 \mbox{ liters}[/latex] is the claim that the average volume of paint in the cans is not 3.78 liters.
  • Keep all of the decimals throughout the calculation (i.e. in the [latex]t[/latex]-score) to avoid any round-off error in the calculation of the p -value.  This ensures that we get the most accurate value for the p -value.
  • The p -value is the sum of the area in the two tails.  The output from the t.dist.2t function is exactly the sum of the area in the two tails, and so is the p -value required for the test.  No additional calculations are required.
  • The t.dist.2t function requires that the value entered for the [latex]t[/latex]-score is positive .  A negative [latex]t[/latex]-score entered into the t.dist.2t function generates an error in Excel.  In this case, the value of the [latex]t[/latex]-score is negative, so we must enter the absolute value of this [latex]t[/latex]-score into field 1.
  • The p -value of 0.0244 is a small probability compared to the significance level, and so is unlikely to happen assuming the null hypothesis is true.  This suggests that the assumption that the null hypothesis is true is most likely incorrect, and so the conclusion of the test is to reject the null hypothesis in favour of the alternative hypothesis.  In other words, the average volume of paint in the cans has most likely changed from 3.78 liters.
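The absolute-value requirement noted above can be seen in a scipy cross-check of the paint-can example (a sketch outside the textbook’s Excel workflow):

```python
# Cross-check of the paint-can two-tail p-value. As with Excel's
# t.dist.2t, the two-tail area is computed from the absolute value
# of the (negative) t-score.
from scipy import stats

t, df = -2.2857, 99
p_two_tail = 2 * stats.t.sf(abs(t), df)
print(p_two_tail)   # about 0.0244
```

Doubling the single-tail area works because the t-distribution is symmetric, so each tail holds exactly half of the two-tail p -value.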

Watch this video: Hypothesis Testing: t -test, right tail by ExcelIsFun [11:02]

Watch this video: Hypothesis Testing: t -test, left tail by ExcelIsFun [7:48]

Watch this video: Hypothesis Testing: t -test, two tail by ExcelIsFun [8:54]

Concept Review

The hypothesis test for a population mean is a well established process:

  • Collect the sample information for the test and identify the significance level.
  • When the population standard deviation is unknown, find the p -value (the area in the corresponding tail) for the test using the [latex]t[/latex]-distribution with [latex]\displaystyle{t=\frac{\overline{x}-\mu}{\frac{s}{\sqrt{n}}}}[/latex] and [latex]df=n-1[/latex].
  • Compare the p -value to the significance level and state the outcome of the test.

Attribution

“9.6 Hypothesis Testing of a Single Mean and Single Proportion” in Introductory Statistics by OpenStax is licensed under a Creative Commons Attribution 4.0 International License.

Introduction to Statistics Copyright © 2022 by Valerie Watts is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.

Hypothesis Testing

Hypothesis testing is a tool for making statistical inferences about the population data. It is an analysis tool that tests assumptions and determines how likely something is within a given standard of accuracy. Hypothesis testing provides a way to verify whether the results of an experiment are valid.

A null hypothesis and an alternative hypothesis are set up before performing the hypothesis testing. This helps to arrive at a conclusion regarding the sample obtained from the population. In this article, we will learn more about hypothesis testing, its types, steps to perform the testing, and associated examples.

What is Hypothesis Testing in Statistics?

Hypothesis testing uses sample data from the population to draw useful conclusions regarding the population probability distribution . It tests an assumption made about the data using different types of hypothesis testing methodologies. Hypothesis testing results in either rejecting or failing to reject the null hypothesis.

Hypothesis Testing Definition

Hypothesis testing can be defined as a statistical tool that is used to identify if the results of an experiment are meaningful or not. It involves setting up a null hypothesis and an alternative hypothesis. These two hypotheses will always be mutually exclusive. This means that if the null hypothesis is true then the alternative hypothesis is false and vice versa. An example of hypothesis testing is setting up a test to check if a new medicine works on a disease in a more efficient manner.

Null Hypothesis

The null hypothesis is a concise mathematical statement that is used to indicate that there is no difference between two possibilities. In other words, there is no difference between certain characteristics of data. This hypothesis assumes that the outcomes of an experiment are based on chance alone. It is denoted as \(H_{0}\). Hypothesis testing is used to conclude if the null hypothesis can be rejected or not. Suppose an experiment is conducted to check if girls are shorter than boys at the age of 5. The null hypothesis will say that they are the same height.

Alternative Hypothesis

The alternative hypothesis is an alternative to the null hypothesis. It is used to show that the observations of an experiment are due to some real effect. It indicates that there is a statistical significance between two possible outcomes and can be denoted as \(H_{1}\) or \(H_{a}\). For the above-mentioned example, the alternative hypothesis would be that girls are shorter than boys at the age of 5.

Hypothesis Testing P Value

In hypothesis testing, the p value is used to indicate whether the results obtained after conducting a test are statistically significant or not. It is the probability of observing a result at least as extreme as the sample result, assuming the null hypothesis is true. This value is always a number between 0 and 1. The p value is compared to an alpha level, \(\alpha\), also called the significance level. The alpha level is the acceptable risk of incorrectly rejecting a true null hypothesis, and it is usually chosen between 1% and 5%.

Hypothesis Testing Critical region

All sets of values that lead to rejecting the null hypothesis lie in the critical region. Furthermore, the value that separates the critical region from the non-critical region is known as the critical value.

Hypothesis Testing Formula

Depending upon the type and size of the data available, different types of hypothesis tests are used to determine whether the null hypothesis can be rejected or not. The formulas for some important test statistics are given below:

  • z = \(\frac{\overline{x}-\mu}{\frac{\sigma}{\sqrt{n}}}\). \(\overline{x}\) is the sample mean, \(\mu\) is the population mean, \(\sigma\) is the population standard deviation and n is the size of the sample.
  • t = \(\frac{\overline{x}-\mu}{\frac{s}{\sqrt{n}}}\). s is the sample standard deviation.
  • \(\chi ^{2} = \sum \frac{(O_{i}-E_{i})^{2}}{E_{i}}\). \(O_{i}\) is the observed value and \(E_{i}\) is the expected value.

We will learn more about these test statistics in the upcoming section.
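These three formulas translate directly into code; a minimal sketch (the sample numbers in the print statements are hypothetical, except for the z example, which uses the researcher's-claim figures worked out later in this article):

```python
import math

def z_stat(x_bar, mu, sigma, n):
    """z = (x̄ − μ) / (σ/√n), population standard deviation known."""
    return (x_bar - mu) / (sigma / math.sqrt(n))

def t_stat(x_bar, mu, s, n):
    """t = (x̄ − μ) / (s/√n), sample standard deviation used in place of σ."""
    return (x_bar - mu) / (s / math.sqrt(n))

def chi_square_stat(observed, expected):
    """χ² = Σ (Oᵢ − Eᵢ)² / Eᵢ over paired observed and expected counts."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

print(round(z_stat(112.5, 100, 15, 30), 2))           # → 4.56
print(round(chi_square_stat([18, 22], [20, 20]), 2))  # → 0.4
```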

Types of Hypothesis Testing

Selecting the correct test for performing hypothesis testing can be confusing. These tests are used to determine a test statistic on the basis of which the null hypothesis can either be rejected or not rejected. Some of the important tests used for hypothesis testing are given below.

Hypothesis Testing Z Test

A z test is a hypothesis testing method used when the sample size is large (n ≥ 30) and the population standard deviation is known. It determines whether there is a difference between the population mean and the sample mean, and it can also be used to compare the means of two samples. The formulas for the z test statistic are given as follows:

  • One sample: z = \(\frac{\overline{x}-\mu}{\frac{\sigma}{\sqrt{n}}}\).
  • Two samples: z = \(\frac{(\overline{x_{1}}-\overline{x_{2}})-(\mu_{1}-\mu_{2})}{\sqrt{\frac{\sigma_{1}^{2}}{n_{1}}+\frac{\sigma_{2}^{2}}{n_{2}}}}\).

Hypothesis Testing t Test

The t test is another method of hypothesis testing, used for a small sample size (n < 30) when the population standard deviation is not known; the sample standard deviation is used in its place. It also compares the sample mean with the population mean, and the means of two samples can be compared using the t test as well.

  • One sample: t = \(\frac{\overline{x}-\mu}{\frac{s}{\sqrt{n}}}\).
  • Two samples: t = \(\frac{(\overline{x_{1}}-\overline{x_{2}})-(\mu_{1}-\mu_{2})}{\sqrt{\frac{s_{1}^{2}}{n_{1}}+\frac{s_{2}^{2}}{n_{2}}}}\).
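The one- and two-sample statistics share the same shape, differing only in whether population or sample standard deviations appear under the square root. A sketch, with hypothetical sample figures:

```python
import math

def two_sample_stat(x1, x2, mu_diff, s1, s2, n1, n2):
    """((x̄1 − x̄2) − (μ1 − μ2)) / √(s1²/n1 + s2²/n2).
    With known population standard deviations this is the two-sample z
    statistic; with sample standard deviations it is the two-sample t
    statistic shown above."""
    return (x1 - x2 - mu_diff) / math.sqrt(s1**2 / n1 + s2**2 / n2)

# Hypothetical samples; H0 says μ1 − μ2 = 0
print(round(two_sample_stat(82, 78, 0, 6, 8, 50, 40), 2))   # → 2.63
```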

Hypothesis Testing Chi Square

The Chi square test is a hypothesis testing method that is used to check whether the variables in a population are independent or not. It is used when the test statistic is chi-squared distributed.

One Tailed Hypothesis Testing

One tailed hypothesis testing is done when the rejection region is only in one direction. It is also known as directional hypothesis testing because the effect is tested in one direction only. This type of testing is further classified into the right tailed test and the left tailed test.

Right Tailed Hypothesis Testing

The right tail test is also known as the upper tail test. This test is used to check whether the population parameter is greater than some value. The null and alternative hypotheses for this test are given as follows:

\(H_{0}\): The population parameter is ≤ some value

\(H_{1}\): The population parameter is > some value.

If the test statistic has a greater value than the critical value, then the null hypothesis is rejected.

Left Tailed Hypothesis Testing

The left tail test is also known as the lower tail test. It is used to check whether the population parameter is less than some value. The hypotheses for this hypothesis testing can be written as follows:

\(H_{0}\): The population parameter is ≥ some value

\(H_{1}\): The population parameter is < some value.

The null hypothesis is rejected if the test statistic has a value lesser than the critical value.

Two Tailed Hypothesis Testing

In this hypothesis testing method, the critical region lies on both sides of the sampling distribution. It is also known as a non-directional hypothesis testing method. The two-tailed test is used when we need to determine whether the population parameter differs from some value. The hypotheses can be set up as follows:

\(H_{0}\): the population parameter = some value

\(H_{1}\): the population parameter ≠ some value

The null hypothesis is rejected if the absolute value of the test statistic is greater than the critical value, that is, if the test statistic falls in either tail's rejection region.

Hypothesis Testing Steps

Hypothesis testing can be easily performed in five simple steps. The most important step is to correctly set up the hypotheses and identify the right method for hypothesis testing. The basic steps to perform hypothesis testing are as follows:

  • Step 1: Set up the null hypothesis by correctly identifying whether it is the left-tailed, right-tailed, or two-tailed hypothesis testing.
  • Step 2: Set up the alternative hypothesis.
  • Step 3: Choose the correct significance level, \(\alpha\), and find the critical value.
  • Step 4: Calculate the correct test statistic (z, t, or \(\chi^{2}\)) and the p-value.
  • Step 5: Compare the test statistic with the critical value or compare the p-value with \(\alpha\) to arrive at a conclusion. In other words, decide if the null hypothesis is to be rejected or not.
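The five steps can be sketched end-to-end for a right-tailed z-test. This is only an illustrative implementation, and the sample figures are hypothetical; statistics.NormalDist from the Python standard library supplies the normal cdf and its inverse in place of a printed table:

```python
import math
from statistics import NormalDist

def z_test_right_tailed(x_bar, mu0, sigma, n, alpha=0.05):
    """Steps 1-2 (H0: μ = mu0 vs H1: μ > mu0) are implied by the inputs."""
    critical = NormalDist().inv_cdf(1 - alpha)       # step 3: critical value
    z = (x_bar - mu0) / (sigma / math.sqrt(n))       # step 4: test statistic
    p_value = 1 - NormalDist().cdf(z)                # step 4: p-value
    reject = z > critical                            # step 5: decision
    return z, critical, p_value, reject

# Hypothetical data: x̄ = 52, μ0 = 50, σ = 8, n = 40
z, crit, p, reject = z_test_right_tailed(52, 50, 8, 40)
print(round(z, 2), round(crit, 3), reject)   # → 1.58 1.645 False
```

Here the test statistic falls just short of the critical value, so the null hypothesis is not rejected; comparing the p-value with \(\alpha\) gives the same decision.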

Hypothesis Testing Example

The best way to solve a problem on hypothesis testing is by applying the 5 steps mentioned in the previous section. Suppose a researcher claims that the mean weight of men is greater than 100 kg. The population standard deviation is 15 kg, and a random sample of 30 men has an average weight of 112.5 kg. Using hypothesis testing, check whether there is enough evidence to support the researcher's claim at a 95% confidence level.

Step 1: This is an example of a right-tailed test. Set up the null hypothesis as \(H_{0}\): \(\mu\) = 100.

Step 2: The alternative hypothesis is given by \(H_{1}\): \(\mu\) > 100.

Step 3: As this is a one-tailed test, \(\alpha\) = 100% - 95% = 5%. This can be used to determine the critical value.

1 - \(\alpha\) = 1 - 0.05 = 0.95

0.95 gives the required area under the curve. Now using a normal distribution table, the area 0.95 is at z = 1.645. A similar process can be followed for a t-test. The only additional requirement is to calculate the degrees of freedom given by n - 1.

Step 4: Calculate the z test statistic. The z test applies because the sample size is at least 30 and the population standard deviation is known, along with the sample and population means.

z = \(\frac{\overline{x}-\mu}{\frac{\sigma}{\sqrt{n}}}\).

\(\mu\) = 100, \(\overline{x}\) = 112.5, n = 30, \(\sigma\) = 15

z = \(\frac{112.5-100}{\frac{15}{\sqrt{30}}}\) = 4.56

Step 5: Conclusion. As 4.56 > 1.645, the null hypothesis is rejected: there is enough evidence to support the researcher's claim.
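The arithmetic in steps 3 through 5 can be double-checked with the standard library; a quick sketch, with NormalDist's inverse cdf standing in for the normal distribution table:

```python
import math
from statistics import NormalDist

# Step 3: critical value for a right-tailed test at α = 0.05
critical = NormalDist().inv_cdf(0.95)
# Step 4: z-score with x̄ = 112.5, μ = 100, σ = 15, n = 30
z = (112.5 - 100) / (15 / math.sqrt(30))
# Step 5: compare the test statistic with the critical value
print(round(critical, 3), round(z, 2), z > critical)   # → 1.645 4.56 True
```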

Hypothesis Testing and Confidence Intervals

Confidence intervals are closely tied to hypothesis testing because the alpha level can be determined from a given confidence level. Suppose the confidence level is 95%. Subtracting it from 100% gives 100% − 95% = 5%, or 0.05. This is the alpha value for a one-tailed hypothesis test. For a two-tailed test, the area in each tail is this value divided by 2: 0.05 / 2 = 0.025.

Related Articles:

  • Probability and Statistics
  • Data Handling

Important Notes on Hypothesis Testing

  • Hypothesis testing is a technique that is used to verify whether the results of an experiment are statistically significant.
  • It involves the setting up of a null hypothesis and an alternate hypothesis.
  • There are three types of tests that can be conducted under hypothesis testing - z test, t test, and chi square test.
  • Hypothesis testing can be classified as right tail, left tail, and two tail tests.

Examples on Hypothesis Testing

  • Example 1: The average weight of a dumbbell in a gym is 90 lbs. However, a physical trainer believes that the average weight might be higher. A random sample of 5 dumbbells has an average weight of 110 lbs and a standard deviation of 18 lbs. Using hypothesis testing, check if the physical trainer's claim can be supported at a 95% confidence level. Solution: As the sample size is less than 30, the t-test is used. \(H_{0}\): \(\mu\) = 90, \(H_{1}\): \(\mu\) > 90. \(\overline{x}\) = 110, \(\mu\) = 90, n = 5, s = 18, \(\alpha\) = 0.05. Using the t-distribution table with df = 4, the critical value is 2.132. t = \(\frac{\overline{x}-\mu}{\frac{s}{\sqrt{n}}}\) = \(\frac{110-90}{\frac{18}{\sqrt{5}}}\) ≈ 2.48. As 2.48 > 2.132, the null hypothesis is rejected. Answer: The average weight of the dumbbells is likely greater than 90 lbs.
  • Example 2: The average score on a test is 80 with a standard deviation of 10. After a new teaching curriculum is introduced, it is believed that this score will change. On randomly testing the scores of 36 students, the mean was found to be 88. At a 0.05 significance level, is there any evidence to support this claim? Solution: This is an example of two-tailed hypothesis testing, and the z test will be used. \(H_{0}\): \(\mu\) = 80, \(H_{1}\): \(\mu\) ≠ 80. \(\overline{x}\) = 88, \(\mu\) = 80, n = 36, \(\sigma\) = 10, \(\alpha\) = 0.05, so \(\alpha\)/2 = 0.025. The critical value from the normal distribution table is 1.96. z = \(\frac{\overline{x}-\mu}{\frac{\sigma}{\sqrt{n}}}\) = \(\frac{88-80}{\frac{10}{\sqrt{36}}}\) = 4.8. As 4.8 > 1.96, the null hypothesis is rejected. Answer: There is a difference in the scores after the new curriculum was introduced.
  • Example 3: The average score of a class is 90. However, a teacher believes that the average score might be lower. The scores of 6 students were randomly measured; the mean was 82 with a standard deviation of 18. At a 0.05 significance level, use hypothesis testing to check if this claim is true. Solution: As the sample is small and \(\sigma\) is unknown, the t test will be used. \(H_{0}\): \(\mu\) = 90, \(H_{1}\): \(\mu\) < 90. \(\overline{x}\) = 82, \(\mu\) = 90, n = 6, s = 18. The critical value from the t table with df = 5 is −2.015. t = \(\frac{\overline{x}-\mu}{\frac{s}{\sqrt{n}}}\) = \(\frac{82-90}{\frac{18}{\sqrt{6}}}\) ≈ −1.09. As −1.09 > −2.015, we fail to reject the null hypothesis. Answer: There is not enough evidence to support the claim.
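The test statistics in the three worked examples can be checked numerically; a quick sketch using the formulas from earlier in the article:

```python
import math

def t_stat(x_bar, mu, s, n):
    """One-sample t statistic: (x̄ − μ) / (s/√n)."""
    return (x_bar - mu) / (s / math.sqrt(n))

def z_stat(x_bar, mu, sigma, n):
    """One-sample z statistic: (x̄ − μ) / (σ/√n)."""
    return (x_bar - mu) / (sigma / math.sqrt(n))

# Example 1: right-tailed t-test, df = 4, critical value 2.132
print(round(t_stat(110, 90, 18, 5), 2))   # → 2.48
# Example 2: two-tailed z-test, critical values ±1.96
print(round(z_stat(88, 80, 10, 36), 1))   # → 4.8
# Example 3: left-tailed t-test, df = 5, critical value −2.015
print(round(t_stat(82, 90, 18, 6), 2))    # → -1.09
```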

FAQs on Hypothesis Testing

What is Hypothesis Testing?

Hypothesis testing in statistics is a tool that is used to make inferences about the population data. It is also used to check if the results of an experiment are valid.

What is the z Test in Hypothesis Testing?

The z test in hypothesis testing is used to find the z test statistic for normally distributed data . The z test is used when the standard deviation of the population is known and the sample size is greater than or equal to 30.

What is the t Test in Hypothesis Testing?

The t test in hypothesis testing is used when the data follows a student t distribution . It is used when the sample size is less than 30 and the standard deviation of the population is not known.

What is the formula for z test in Hypothesis Testing?

The formula for a one sample z test in hypothesis testing is z = \(\frac{\overline{x}-\mu}{\frac{\sigma}{\sqrt{n}}}\) and for two samples is z = \(\frac{(\overline{x_{1}}-\overline{x_{2}})-(\mu_{1}-\mu_{2})}{\sqrt{\frac{\sigma_{1}^{2}}{n_{1}}+\frac{\sigma_{2}^{2}}{n_{2}}}}\).

What is the p Value in Hypothesis Testing?

The p value helps to determine if the test results are statistically significant or not. In hypothesis testing, the null hypothesis can either be rejected or not rejected based on the comparison between the p value and the alpha level.

What is One Tail Hypothesis Testing?

When the rejection region is only on one side of the distribution curve then it is known as one tail hypothesis testing. The right tail test and the left tail test are two types of directional hypothesis testing.

What is the Alpha Level in Two Tail Hypothesis Testing?

To get the area in each tail in two tail hypothesis testing, divide \(\alpha\) by 2. This is done because there are two rejection regions in the curve, one in each tail.
