greater than (>) less than (<)
H₀ always contains a symbol with an equal sign in it (=, ≤, or ≥). Hₐ never contains a symbol with an equal sign. The choice of symbol depends on the wording of the hypothesis test. However, be aware that many researchers (including one of the co-authors in research work) use = in the null hypothesis, even with > or < as the symbol in the alternative hypothesis. This practice is acceptable because we only make the decision to reject or not reject the null hypothesis.
H₀: No more than 30% of the registered voters in Santa Clara County voted in the primary election. p ≤ 0.30
Hₐ: More than 30% of the registered voters in Santa Clara County voted in the primary election. p > 0.30
A medical trial is conducted to test whether or not a new medicine reduces cholesterol by 25%. State the null and alternative hypotheses.
H 0 : The drug reduces cholesterol by 25%. p = 0.25
H a : The drug does not reduce cholesterol by 25%. p ≠ 0.25
We want to test whether the mean GPA of students in American colleges is different from 2.0 (out of 4.0). The null and alternative hypotheses are:
H 0 : μ = 2.0
H a : μ ≠ 2.0
We want to test whether the mean height of eighth graders is 66 inches. State the null and alternative hypotheses. Fill in the correct symbol (=, ≠, ≥, <, ≤, >) for the null and alternative hypotheses. H 0 : μ __ 66 H a : μ __ 66
We want to test if college students take less than five years to graduate from college, on the average. The null and alternative hypotheses are:
H 0 : μ ≥ 5
H a : μ < 5
We want to test if it takes fewer than 45 minutes to teach a lesson plan. State the null and alternative hypotheses. Fill in the correct symbol ( =, ≠, ≥, <, ≤, >) for the null and alternative hypotheses. H 0 : μ __ 45 H a : μ __ 45
In an issue of U.S. News and World Report , an article on school standards stated that about half of all students in France, Germany, and Israel take advanced placement exams and a third pass. The same article stated that 6.6% of U.S. students take advanced placement exams and 4.4% pass. Test if the percentage of U.S. students who take advanced placement exams is more than 6.6%. State the null and alternative hypotheses.
H 0 : p ≤ 0.066
H a : p > 0.066
On a state driver’s test, about 40% pass the test on the first try. We want to test if more than 40% pass on the first try. Fill in the correct symbol (=, ≠, ≥, <, ≤, >) for the null and alternative hypotheses. H 0 : p __ 0.40 H a : p __ 0.40
In a hypothesis test, sample data is evaluated in order to arrive at a decision about some type of claim. If certain conditions about the sample are satisfied, then the claim can be evaluated for a population. In a hypothesis test, we:

- Evaluate the null hypothesis, typically denoted H₀. The null is not rejected unless the hypothesis test shows otherwise. The null statement must always contain some form of equality (=, ≤, or ≥).
- Always write the alternative hypothesis, typically denoted Hₐ or H₁, using a less than, greater than, or not-equals symbol (≠, >, or <).
- If we reject the null hypothesis, then we can assume there is enough evidence to support the alternative hypothesis.
- Never state that a claim is proven true or false. Hypothesis testing is based on probability laws, so we can talk only in terms of non-absolute certainties.
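As an illustration of this procedure, the AP-exam claim above (Hₐ: p > 0.066) can be checked with a one-sample z-test for a proportion. This is a minimal Python sketch using the normal approximation; the survey counts (90 successes out of 1,000) and the helper function are hypothetical, not from the original example:

```python
import math

def prop_z_test(successes, n, p0, tail="greater"):
    """One-sample z-test for a population proportion (normal approximation)."""
    p_hat = successes / n
    se = math.sqrt(p0 * (1 - p0) / n)             # standard error under H0
    z = (p_hat - p0) / se
    cdf = 0.5 * (1 + math.erf(z / math.sqrt(2)))  # standard normal CDF at z
    if tail == "greater":                         # Ha: p > p0
        return z, 1 - cdf
    if tail == "less":                            # Ha: p < p0
        return z, cdf
    return z, 2 * min(cdf, 1 - cdf)               # Ha: p != p0

# Hypothetical data: 90 of 1,000 surveyed students took an AP exam.
z, p_value = prop_z_test(90, 1000, 0.066, tail="greater")
print(round(z, 2), round(p_value, 4))
```

A small p-value here would lead us to reject H₀: p ≤ 0.066 in favor of Hₐ: p > 0.066.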
H 0 and H a are contradictory.
In statistics, you’ll draw insights or “inferences” about population parameters using data from a sample. This process is called inferential statistics.
To make statistical inferences, you need to determine if you have enough evidence to support a certain hypothesis about the population. This is where null and alternative hypotheses come into play!
In this article, we’ll explain the differences between these two types of hypotheses, and we’ll explain the role they play in hypothesis testing.
Imagine you want to know what percent of Americans are vegetarians. You find a Gallup poll claiming 5% of the population was vegetarian in 2018, but your intuition tells you vegetarianism is on the rise and that far more than 5% of Americans are vegetarian today.
To investigate further, you collect your own sample data by surveying 1,000 randomly selected Americans. You’ll use this random sample to determine whether it’s likely the true population proportion of vegetarians is, in fact, 5% (as the Gallup data suggests) or whether it could be the case that the percentage of vegetarians is now higher.
Notice that your investigation involves two rival hypotheses about the population. One hypothesis is that the proportion of vegetarians is 5%. The other hypothesis is that the proportion of vegetarians is greater than 5%. In statistics, we would call the first hypothesis the null hypothesis, and the second hypothesis the alternative hypothesis. The null hypothesis (H₀) represents the status quo or what is assumed to be true about the population at the start of your investigation.
Null Hypothesis
In hypothesis testing, the null hypothesis (H₀) is the default hypothesis.
It's what the status quo assumes to be true about the population.
The alternative hypothesis (Hₐ or H₁) is the hypothesis that stands contrary to the null hypothesis. The alternative hypothesis represents the research hypothesis—what you as the statistician are trying to prove with your data.
In medical studies, where scientists are trying to demonstrate whether a treatment has a significant effect on patient outcomes, the alternative hypothesis represents the hypothesis that the treatment does have an effect, while the null hypothesis represents the assumption that the treatment has no effect.
Alternative Hypothesis
The alternative hypothesis (Hₐ or H₁) is the hypothesis being proposed in opposition to the null hypothesis.
In a hypothesis test, the null and alternative hypotheses must be mutually exclusive statements, meaning both hypotheses cannot be true at the same time. For example, if the null hypothesis includes an equal sign, the alternative hypothesis must state that the values being mentioned are “not equal” in some way.
Your hypotheses will also depend on the formulation of your test—are you running a one-sample T-test, a two-sample T-test, F-test for ANOVA , or a Chi-squared test? It also matters whether you are conducting a directional one-tailed test or a nondirectional two-tailed test.
Null Hypothesis: The population mean is equal to some number, x. 𝝁 = x
Alternative Hypothesis: The population mean is not equal to x. 𝝁 ≠ x
Null Hypothesis: The population mean is less than or equal to some number, x. 𝝁 ≤ x
Alternative Hypothesis: The population mean is greater than x. 𝝁 > x
Null Hypothesis: The population mean is greater than or equal to some number, x. 𝝁 ≥ x
Alternative Hypothesis: The population mean is less than x. 𝝁 < x
By the end of a hypothesis test, you will have reached one of two conclusions:

- Fail to reject the null hypothesis, on the grounds that there is insufficient evidence to move away from it.
- Reject the null hypothesis in favor of the alternative.
If you’re confused about the outcomes of a hypothesis test, a good analogy is a jury trial. In a jury trial, the defendant is innocent until proven guilty. To reach a verdict of guilt, the jury must find strong evidence (beyond a reasonable doubt) that the defendant committed the crime.
This is analogous to a statistician who must assume the null hypothesis is true unless they can uncover strong evidence ( a p-value less than or equal to the significance level) in support of the alternative hypothesis.
Notice also that a jury never concludes a defendant is innocent, only that the defendant is guilty or not guilty. This is similar to how we never conclude that the null hypothesis is true; we can only “reject” the null hypothesis or “fail to reject” it.
Here is a summary of the key differences between the null and the alternative hypothesis.
The null hypothesis represents the status quo; the alternative hypothesis represents an alternative statement about the population.
The null and the alternative are mutually exclusive statements, meaning both statements cannot be true at the same time.
In a medical study, the null hypothesis represents the assumption that a treatment has no statistically significant effect on the outcome being studied. The alternative hypothesis represents the belief that the treatment does have an effect.
The null hypothesis is denoted by H₀; the alternative hypothesis is denoted by Hₐ or H₁.
You “fail to reject” the null hypothesis when the p-value is larger than the significance level. You “reject” the null hypothesis in favor of the alternative hypothesis when the p-value is less than or equal to your test’s significance level.
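That decision rule is simple enough to state as code. A minimal Python sketch (the function name and wording are illustrative):

```python
def decide(p_value, alpha=0.05):
    """Decision rule: reject H0 when the p-value is at most alpha."""
    if p_value <= alpha:
        return "reject H0 in favor of Ha"
    return "fail to reject H0"

print(decide(0.003))  # small p-value: reject the null hypothesis
print(decide(0.27))   # large p-value: fail to reject the null hypothesis
```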
The similarities between the null and alternative hypotheses are as follows.
Both the null and the alternative are statements about the same underlying data.
Both statements provide a possible answer to a statistician’s research question.
The same hypothesis test will provide evidence for or against the null and alternative hypotheses.
Always remember that statistical inference provides you with inferences based on probability rather than hard truths. Anytime you conduct a hypothesis test, there is a chance that you’ll reach the wrong conclusion about your data.
In statistics, we categorize these wrong conclusions into two types of errors:
Type I Errors
Type II Errors
A Type I error occurs when you reject the null hypothesis when, in fact, the null hypothesis is true. This is sometimes called a false positive and is analogous to a jury that falsely convicts an innocent defendant. The probability of making this type of error is represented by alpha, ɑ.
A Type II error occurs when you fail to reject the null hypothesis when, in fact, the null hypothesis is false. This is sometimes called a false negative and is analogous to a jury that reaches a verdict of “not guilty,” when, in fact, the defendant has committed the crime. The probability of making this type of error is represented by beta, ꞵ.
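To see why α is interpreted as the Type I error rate, here is a small simulation sketch in Python: we repeatedly draw samples from a population for which H₀ is actually true and count how often a two-sided z-test (population standard deviation assumed known) rejects at α = 0.05. The seed, sample size, and trial count are arbitrary choices for illustration:

```python
import math
import random

random.seed(42)

def z_test_p(sample, mu0, sigma=1.0):
    """Two-sided z-test p-value for a mean, with known population sd."""
    n = len(sample)
    z = (sum(sample) / n - mu0) / (sigma / math.sqrt(n))
    cdf = 0.5 * (1 + math.erf(z / math.sqrt(2)))
    return 2 * min(cdf, 1 - cdf)

alpha = 0.05
trials = 10_000

# Draw samples from a population where H0 (mu = 0) is actually true;
# every rejection is therefore a false positive (a Type I error).
false_positives = sum(
    z_test_p([random.gauss(0, 1) for _ in range(30)], 0) <= alpha
    for _ in range(trials)
)
print(false_positives / trials)  # rejection rate, which should be close to alpha
```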
by Marco Taboga , PhD
In a statistical test, observed data is used to decide whether or not to reject a restriction on the data-generating probability distribution.
The assumption that the restriction is true is called null hypothesis , while the statement that the restriction is not true is called alternative hypothesis.
A correct specification of the alternative hypothesis is essential to decide between one-tailed and two-tailed tests.
In order to fully understand the concept of alternative hypothesis, we need to remember the essential elements of a statistical inference problem:
we observe a sample drawn from an unknown probability distribution;
in principle, any valid probability distribution could have generated the sample;
however, we usually place some a priori restrictions on the set of possible data-generating distributions.
When we conduct a statistical test, we formulate a null hypothesis as a restriction on the statistical model.
The alternative hypothesis is the statement that the restriction specified by the null hypothesis does not hold.
The alternative hypothesis is used to decide whether a test should be one-tailed or two-tailed.
The null hypothesis is rejected if the test statistic falls within a critical region that has been chosen by the statistician.
The critical region is a set of values that may comprise:
only the left tail of the distribution or only the right tail (one-tailed test);
both the left and the right tail (two-tailed test).
The choice of the critical region depends on the alternative hypothesis. Let us see why.
The interpretation of a rejection differs depending on the tail of the distribution in which the test statistic falls.
The choice between a one-tailed or a two-tailed test needs to be done in such a way that the interpretation of a rejection is always coherent with the alternative hypothesis.
When we deal with the power function of a test, the term "alternative hypothesis" has a special meaning.
We conclude with a caveat about the interpretation of the outcome of a test of hypothesis.
The interpretation of a rejection of the null is controversial.
According to some statisticians, rejecting the null is equivalent to accepting the alternative.
However, others deem that rejecting the null does not necessarily imply accepting the alternative. In fact, it is possible to think of situations in which both hypotheses can be rejected. Let us see why.
According to this conceptual framework, there are three possibilities:
the null is true;
the alternative is true;
neither the null nor the alternative is true because the true data-generating distribution has been excluded from the statistical model (we say that the model is mis-specified).
If we are in case 3, accepting the alternative after a rejection of the null is an incorrect decision. Moreover, a second test in which the alternative becomes the new null may lead us to another rejection.
You can find more details about the alternative hypothesis in the lecture on Hypothesis testing .
Statistics By Jim
By Jim Frost
The alternative hypothesis is one of two mutually exclusive hypotheses in a hypothesis test. The alternative hypothesis states that a population parameter does not equal a specified value. Typically, this value is the null hypothesis value associated with no effect , such as zero. If your sample contains sufficient evidence, you can reject the null hypothesis and favor the alternative hypothesis. The alternative hypothesis is often denoted as H 1 or H A .
If you are performing a two-tailed hypothesis test, the alternative hypothesis states that the population parameter does not equal the null hypothesis value. For example, when the alternative hypothesis is H A : μ ≠ 0, the test can detect differences both greater than and less than the null value.
A one-tailed alternative hypothesis can test for a difference only in one direction. For example, H A : μ > 0 can only test for differences that are greater than zero.
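For a symmetric test statistic such as z, the one- and two-tailed p-values are closely related: the two-sided p-value is twice the smaller one-sided p-value. A minimal Python sketch of this relationship (the helper functions are illustrative):

```python
import math

def norm_cdf(z):
    """Standard normal CDF."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def p_values(z):
    """One-sided and two-sided p-values for an observed z statistic."""
    greater = 1 - norm_cdf(z)           # Ha: parameter > null value
    less = norm_cdf(z)                  # Ha: parameter < null value
    two_sided = 2 * min(greater, less)  # Ha: parameter != null value
    return greater, less, two_sided

# A clearly negative z statistic: the "greater" one-tailed test misses it,
# while the "less" one-tailed test and the two-tailed test both detect it.
g, l, t = p_values(-2.5)
print(g > 0.05, l < 0.05, t < 0.05)
```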
Hypothesis testing involves the careful construction of two statements: the null hypothesis and the alternative hypothesis. These hypotheses can look very similar but are actually different.
How do we know which hypothesis is the null and which one is the alternative? We will see that there are a few ways to tell the difference.
The null hypothesis reflects that there will be no observed effect in our experiment. In a mathematical formulation of the null hypothesis, there will typically be an equal sign. This hypothesis is denoted by H 0 .
The null hypothesis is what we attempt to find evidence against in our hypothesis test. We hope to obtain a small enough p-value that it is lower than our level of significance alpha and we are justified in rejecting the null hypothesis. If our p-value is greater than alpha, then we fail to reject the null hypothesis.
If the null hypothesis is not rejected, then we must be careful to say what this means. The thinking on this is similar to a legal verdict. Just because a person has been declared "not guilty", it does not mean that he is innocent. In the same way, just because we failed to reject a null hypothesis it does not mean that the statement is true.
For example, we may want to investigate the claim that despite what convention has told us, the mean adult body temperature is not the accepted value of 98.6 degrees Fahrenheit . The null hypothesis for an experiment to investigate this is “The mean adult body temperature for healthy individuals is 98.6 degrees Fahrenheit.” If we fail to reject the null hypothesis, then our working hypothesis remains that the average adult who is healthy has a temperature of 98.6 degrees. We do not prove that this is true.
If we are studying a new treatment, the null hypothesis is that our treatment will not change our subjects in any meaningful way. In other words, the treatment will not produce any effect in our subjects.
The alternative or experimental hypothesis reflects that there will be an observed effect for our experiment. In a mathematical formulation of the alternative hypothesis, there will typically be an inequality, or not equal to symbol. This hypothesis is denoted by either H a or by H 1 .
The alternative hypothesis is what we are attempting to demonstrate in an indirect way by the use of our hypothesis test. If the null hypothesis is rejected, then we accept the alternative hypothesis. If the null hypothesis is not rejected, then we do not accept the alternative hypothesis. Going back to the above example of mean human body temperature, the alternative hypothesis is “The average adult human body temperature is not 98.6 degrees Fahrenheit.”
If we are studying a new treatment, then the alternative hypothesis is that our treatment does, in fact, change our subjects in a meaningful and measurable way.
Writing each hypothesis as the negation of the other may help when you are forming your null and alternative hypotheses. Most technical papers rely on just the first formulation, even though you may see some of the others in a statistics textbook.
In statistical hypothesis testing, the alternative hypothesis is an important proposition in the hypothesis test. The goal of the hypothesis test is to demonstrate that in the given condition, there is sufficient evidence supporting the credibility of the alternative hypothesis instead of the default assumption made by the null hypothesis.
Alternative Hypotheses
Both hypotheses include statements with the same purpose of providing the researcher with a basic guideline. The researcher uses the statement from each hypothesis to guide their research. In statistics, alternative hypothesis is often denoted as H a or H 1 .
“A hypothesis is a statement of a relationship between two or more variables.” It is a working statement or theory that is based on insufficient evidence.
While experimenting, researchers often make a claim, that they can test. These claims are often based on the relationship between two or more variables. “What causes what?” and “Up to what extent?” are a few of the questions that a hypothesis focuses on answering. The hypothesis can be true or false, based on complete evidence.
While there are several kinds of hypotheses, here we discuss only the null and alternative hypotheses. The null hypothesis, denoted H₀, is the default position that the variables do not have a relation with each other. That means the null hypothesis is assumed true until evidence indicates otherwise. The alternative hypothesis, denoted H₁, on the other hand, opposes the null hypothesis. It assumes a relation between the variables and serves as evidence to reject the null hypothesis.
Example of Hypothesis:
The mean age of all college students is 20.4 years (a simple hypothesis).
An Alternative Hypothesis is a claim or a complement to the null hypothesis. If the null hypothesis predicts a statement to be true, the Alternative Hypothesis predicts it to be false. Let’s say the null hypothesis states there is no difference between height and shoe size then the alternative hypothesis will oppose the claim by stating that there is a relation.
We see that the null hypothesis assumes no relationship between the variables whereas an alternative hypothesis proposes a significant relation between variables. An alternative theory is the one tested by the researcher and if the researcher gathers enough data to support it, then the alternative hypothesis replaces the null hypothesis.
Null and alternative hypotheses are exhaustive, meaning that together they cover every possible outcome. They are also mutually exclusive, meaning that only one can be true at a time.
There are a few types of alternative hypothesis that we will see:
1. One-tailed test H 1 : A one-tailed alternative hypothesis focuses on only one region of rejection of the sampling distribution. The region of rejection can be upper or lower.
2. Two-tailed test H 1 : A two-tailed alternative hypothesis is concerned with both regions of rejection of the sampling distribution.
3. Non-directional test H 1 : A non-directional alternative hypothesis is not concerned with either region of rejection specifically; rather, it is only concerned that the null hypothesis is not true.
4. Point test H 1 : Point alternative hypotheses occur when the hypothesis test is framed so that the population distribution under the alternative hypothesis is a fully defined distribution, with no unknown parameters; such hypotheses are usually of no practical interest but are fundamental to theoretical considerations of statistical inference and are the basis of the Neyman–Pearson lemma.
The differences between the null hypothesis and the alternative hypothesis are explained in the table below:

| | Null Hypothesis (H₀) | Alternative Hypothesis (H₁ or Hₐ) |
|---|---|---|
| Definition | A default statement asserting no relationship between variables. | A claim asserting a relationship between variables. |
| Denoted by | H₀ | H₁ or Hₐ |
| In research | States a presumption made beforehand | States the potential outcome the researcher expects |
| Symbols used | Equality symbols (=, ≥, or ≤) | Inequality symbols (≠, <, or >) |
| Example | Experience does not matter in a tech job | Experience matters in a tech job |
Formulating an alternative hypothesis means identifying the relationship, effect, or condition being studied. Based on the data, we conclude that the inference differs from the null hypothesis being considered.
The alternative hypothesis must be true whenever the null hypothesis is false. When alternative hypotheses are written in mathematical terms, they always include an inequality (usually ≠, but sometimes < or >). When writing the alternative hypothesis, make sure it never includes an “=” symbol.
To help you write your hypotheses, you can use a template question such as: Does the independent variable affect the dependent variable?
We have described the relationship between the null hypothesis and the alternative hypothesis. While the null hypothesis is always the default assumption about our test data, the alternative hypothesis is the claim the test gathers evidence for: it proposes a relationship between the variables and, given enough evidence, leads us to reject the null hypothesis. Note that for every null hypothesis, one or more alternative hypotheses can be developed.
What is a hypothesis?
“A hypothesis is a statement of a relationship between two or more variables.” It is a working statement or theory that is based on insufficient evidence.
Alternative hypothesis, denoted by H 1 , opposes the null-hypothesis. It assumes a relation between the variables and serves as an evidence to reject the null-hypothesis.
Null hypothesis is the default claim that assumes no relationship between variables while alternative hypothesis is the opposite claim which considers statistical significance between the variables.
Null hypothesis (H 0 ) states there is no effect or difference, while the alternative hypothesis (H 1 or H a ) asserts the presence of an effect, difference, or relationship between variables. In hypothesis testing, we seek evidence to either reject the null hypothesis in favor of the alternative hypothesis or fail to do so.
What are null and alternative hypotheses?
Null and alternative hypotheses are used in statistical hypothesis testing . The null hypothesis of a test always predicts no effect or no relationship between variables, while the alternative hypothesis states your research prediction of an effect or relationship.
As the degrees of freedom increase, Student’s t distribution becomes less leptokurtic , meaning that the probability of extreme values decreases. The distribution becomes more and more similar to a standard normal distribution .
The three categories of kurtosis are mesokurtic, leptokurtic, and platykurtic.
Probability distributions belong to two broad categories: discrete probability distributions and continuous probability distributions . Within each category, there are many types of probability distributions.
Probability is the relative frequency over an infinite number of trials.
For example, the probability of a coin landing on heads is .5, meaning that if you flip the coin an infinite number of times, it will land on heads half the time.
Since doing something an infinite number of times is impossible, relative frequency is often used as an estimate of probability. If you flip a coin 1000 times and get 507 heads, the relative frequency, .507, is a good estimate of the probability.
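This estimation idea is easy to simulate. A minimal Python sketch (the seed and flip count are arbitrary):

```python
import random

random.seed(1)

flips = 1000
heads = sum(random.random() < 0.5 for _ in range(flips))

# The relative frequency of heads estimates P(heads) = 0.5
relative_frequency = heads / flips
print(relative_frequency)
```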
Categorical variables can be described by a frequency distribution. Quantitative variables can also be described by a frequency distribution, but first they need to be grouped into interval classes .
A histogram is an effective way to tell if a frequency distribution appears to have a normal distribution .
Plot a histogram and look at the shape of the bars. If the bars roughly follow a symmetrical bell or hill shape, like the example below, then the distribution is approximately normally distributed.
You can use the CHISQ.INV.RT() function to find a chi-square critical value in Excel.
For example, to calculate the chi-square critical value for a test with df = 22 and α = .05, click any blank cell and type:
=CHISQ.INV.RT(0.05,22)
You can use the qchisq() function to find a chi-square critical value in R.
For example, to calculate the chi-square critical value for a test with df = 22 and α = .05:
qchisq(p = .05, df = 22, lower.tail = FALSE)
You can use the chisq.test() function to perform a chi-square test of independence in R. Give the contingency table as a matrix for the “x” argument. For example:
m = matrix(data = c(89, 84, 86, 9, 8, 24), nrow = 3, ncol = 2)
chisq.test(x = m)
You can use the CHISQ.TEST() function to perform a chi-square test of independence in Excel. It takes two arguments, CHISQ.TEST(observed_range, expected_range), and returns the p value.
Chi-square goodness of fit tests are often used in genetics. One common application is to check if two genes are linked (i.e., if the assortment is independent). When genes are linked, the allele inherited for one gene affects the allele inherited for another gene.
Suppose that you want to know if the genes for pea texture (R = round, r = wrinkled) and color (Y = yellow, y = green) are linked. You perform a dihybrid cross between two heterozygous ( RY / ry ) pea plants. The hypotheses you’re testing with your experiment are: H₀: the genes are unlinked and the alleles assort independently; Hₐ: the genes are linked and the alleles do not assort independently.
You observe 100 peas: 78 round and yellow, 6 round and green, 4 wrinkled and yellow, and 12 wrinkled and green.
To calculate the expected values, you can make a Punnett square. If the two genes are unlinked, the probability of each genotypic combination is equal.
| | RY | Ry | rY | ry |
|---|---|---|---|---|
| RY | RRYY | RRYy | RrYY | RrYy |
| Ry | RRYy | RRyy | RrYy | Rryy |
| rY | RrYY | RrYy | rrYY | rrYy |
| ry | RrYy | Rryy | rrYy | rryy |
The expected phenotypic ratios are therefore 9 round and yellow: 3 round and green: 3 wrinkled and yellow: 1 wrinkled and green.
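The Punnett square and the 9:3:3:1 phenotype ratio can be reproduced programmatically. A minimal Python sketch (the helper function and labels are illustrative):

```python
from collections import Counter

# Gametes produced by an RrYy parent: one allele per gene, in all combinations.
gametes = ["RY", "Ry", "rY", "ry"]

def cross(g1, g2):
    """Combine two gametes into a genotype, writing each gene's pair of
    alleles with the dominant (uppercase) allele first."""
    genotype = ""
    for a, b in zip(g1, g2):
        genotype += "".join(sorted([a, b]))  # 'R' sorts before 'r'
    return genotype

# 4 x 4 Punnett square for the RrYy x RrYy cross
square = [[cross(g1, g2) for g2 in gametes] for g1 in gametes]

# Tally phenotypes: round (R_) vs wrinkled (rr), yellow (Y_) vs green (yy)
phenotypes = Counter(
    ("round" if "R" in g[:2] else "wrinkled",
     "yellow" if "Y" in g[2:] else "green")
    for row in square for g in row
)
print(phenotypes)  # 9 round/yellow : 3 round/green : 3 wrinkled/yellow : 1 wrinkled/green
```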
From this, you can calculate the expected phenotypic frequencies for 100 peas:
| Phenotype | Observed | Expected |
|---|---|---|
| Round and yellow | 78 | 100 × (9/16) = 56.25 |
| Round and green | 6 | 100 × (3/16) = 18.75 |
| Wrinkled and yellow | 4 | 100 × (3/16) = 18.75 |
| Wrinkled and green | 12 | 100 × (1/16) = 6.25 |

| Phenotype | Observed (O) | Expected (E) | O − E | (O − E)² | (O − E)² / E |
|---|---|---|---|---|---|
| Round and yellow | 78 | 56.25 | 21.75 | 473.06 | 8.41 |
| Round and green | 6 | 18.75 | −12.75 | 162.56 | 8.67 |
| Wrinkled and yellow | 4 | 18.75 | −14.75 | 217.56 | 11.60 |
| Wrinkled and green | 12 | 6.25 | 5.75 | 33.06 | 5.29 |

Χ² = 8.41 + 8.67 + 11.60 + 5.29 = 33.97

Since there are four groups (round and yellow, round and green, wrinkled and yellow, wrinkled and green), there are three degrees of freedom.

For a test of significance at α = .05 and df = 3, the Χ² critical value is 7.82.

Χ² = 33.97
Critical value = 7.82
The Χ² value is greater than the critical value, so we reject the null hypothesis that the offspring have an equal probability of inheriting all possible genotypic combinations. There is a significant difference between the observed and expected phenotypic frequencies ( p < .05).
The data support the alternative hypothesis that the offspring do not have an equal probability of inheriting all possible genotypic combinations, which suggests that the genes are linked.
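The goodness-of-fit statistic can be recomputed directly from the observed counts and the 9:3:3:1 expected ratios. A minimal Python sketch (note that 100 × (1/16) = 6.25):

```python
# Observed phenotype counts and the 9:3:3:1 expected ratios
observed = [78, 6, 4, 12]   # round/yellow, round/green, wrinkled/yellow, wrinkled/green
ratios = [9, 3, 3, 1]

expected = [100 * r / 16 for r in ratios]  # [56.25, 18.75, 18.75, 6.25]
chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))

print(round(chi2, 2))  # well above the critical value of 7.82 at df = 3
```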
You can use the chisq.test() function to perform a chi-square goodness of fit test in R. Give the observed values in the “x” argument, give the expected values in the “p” argument, and set “rescale.p” to true. For example:
chisq.test(x = c(22,30,23), p = c(25,25,25), rescale.p = TRUE)
You can use the CHISQ.TEST() function to perform a chi-square goodness of fit test in Excel. It takes two arguments, CHISQ.TEST(observed_range, expected_range), and returns the p value .
Both correlations and chi-square tests can test for relationships between two variables. However, a correlation is used when you have two quantitative variables and a chi-square test of independence is used when you have two categorical variables.
Both chi-square tests and t tests can test for differences between two groups. However, a t test is used when you have a dependent quantitative variable and an independent categorical variable (with two groups). A chi-square test of independence is used when you have two categorical variables.
The two main chi-square tests are the chi-square goodness of fit test and the chi-square test of independence .
A chi-square distribution is a continuous probability distribution . The shape of a chi-square distribution depends on its degrees of freedom , k . The mean of a chi-square distribution is equal to its degrees of freedom ( k ) and the variance is 2 k . The range is 0 to ∞.
As the degrees of freedom ( k ) increases, the chi-square distribution goes from a downward curve to a hump shape. As the degrees of freedom increases further, the hump goes from being strongly right-skewed to being approximately normal.
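These properties can be checked by simulation: a chi-square variate with k degrees of freedom is the sum of k squared standard normal draws. A quick Python sketch:

```python
import random

random.seed(0)
k = 5  # degrees of freedom
# Each chi-square draw is the sum of k squared standard normal draws
draws = [sum(random.gauss(0, 1) ** 2 for _ in range(k)) for _ in range(100_000)]
mean = sum(draws) / len(draws)
var = sum((x - mean) ** 2 for x in draws) / len(draws)
print(round(mean, 1), round(var, 1))  # close to k = 5 and 2k = 10
```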
To find the quartiles of a probability distribution, you can use the distribution’s quantile function.
You can use the quantile() function to find quartiles in R. If your data is called “data”, then “quantile(data, prob=c(.25,.5,.75), type=1)” will return the three quartiles.
You can use the QUARTILE() function to find quartiles in Excel. If your data is in column A, then click any blank cell and type “=QUARTILE(A:A,1)” for the first quartile, “=QUARTILE(A:A,2)” for the second quartile, and “=QUARTILE(A:A,3)” for the third quartile.
You can use the PEARSON() function to calculate the Pearson correlation coefficient in Excel. If your variables are in columns A and B, then click any blank cell and type “=PEARSON(A:A,B:B)”.
There is no function to directly test the significance of the correlation.
You can use the cor() function to calculate the Pearson correlation coefficient in R. To test the significance of the correlation, you can use the cor.test() function.
You should use the Pearson correlation coefficient when (1) the relationship is linear and (2) both variables are quantitative and (3) normally distributed and (4) have no outliers.
The Pearson correlation coefficient ( r ) is the most common way of measuring a linear correlation. It is a number between –1 and 1 that measures the strength and direction of the relationship between two variables.
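For illustration, Pearson’s r can be computed from scratch as the covariance of the two variables divided by the product of their standard deviations (a standard-library Python sketch):

```python
from math import sqrt

def pearson_r(x, y):
    # r = Σ(x − x̄)(y − ȳ) / √(Σ(x − x̄)² · Σ(y − ȳ)²)
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))

print(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]))  # 1.0: a perfect positive linear relationship
```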
This table summarizes the most important differences between normal distributions and Poisson distributions :
Characteristic | Normal | Poisson |
---|---|---|
Type of variable | Continuous | Discrete |
Parameters | Mean (µ) and standard deviation (σ) | Lambda (λ) |
Shape | Bell-shaped | Depends on λ |
Symmetry | Symmetrical | Asymmetrical (right-skewed). As λ increases, the asymmetry decreases. |
Range | −∞ to ∞ | 0 to ∞ |
When the mean of a Poisson distribution is large (>10), it can be approximated by a normal distribution.
In the Poisson distribution formula, lambda (λ) is the mean number of events within a given interval of time or space. For example, λ = 0.748 floods per year.
The e in the Poisson distribution formula stands for the number 2.718. This number is called Euler’s number. You can simply substitute 2.718 for e when you’re calculating a Poisson probability. Euler’s number is a very useful number and is especially important in calculus.
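Putting these pieces together, the Poisson probability of observing exactly k events is λ^k · e^(−λ) / k!. A short Python sketch, reusing the λ = 0.748 floods-per-year figure from above:

```python
from math import exp, factorial

def poisson_pmf(k, lam):
    # P(X = k) = λ^k · e^(−λ) / k!
    return lam ** k * exp(-lam) / factorial(k)

# Probability of seeing no floods in a year when λ = 0.748 floods per year:
print(round(poisson_pmf(0, 0.748), 3))  # 0.473
```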
The three types of skewness are:
Skewness and kurtosis are both important measures of a distribution’s shape.
A research hypothesis is your proposed answer to your research question. The research hypothesis usually includes an explanation (“ x affects y because …”).
A statistical hypothesis, on the other hand, is a mathematical statement about a population parameter. Statistical hypotheses always come in pairs: the null and alternative hypotheses . In a well-designed study , the statistical hypotheses correspond logically to the research hypothesis.
The alternative hypothesis is often abbreviated as H a or H 1 . When the alternative hypothesis is written using mathematical symbols, it always includes an inequality symbol (usually ≠, but sometimes < or >).
The null hypothesis is often abbreviated as H 0 . When the null hypothesis is written using mathematical symbols, it always includes an equality symbol (usually =, but sometimes ≥ or ≤).
The t distribution was first described by statistician William Sealy Gosset under the pseudonym “Student.”
To calculate a confidence interval of a mean using the critical value of t , follow these four steps:
To test a hypothesis using the critical value of t , follow these four steps:
You can use the T.INV() function to find the critical value of t for one-tailed tests in Excel, and you can use the T.INV.2T() function for two-tailed tests.
You can use the qt() function to find the critical value of t in R. The function gives the critical value of t for the one-tailed test. If you want the critical value of t for a two-tailed test, divide the significance level by two.
You can use the RSQ() function to calculate R² in Excel. If your dependent variable is in column A and your independent variable is in column B, then click any blank cell and type “=RSQ(A:A,B:B)”.
You can use the summary() function to view the R² of a linear model in R. You will see the “R-squared” near the bottom of the output.
There are two formulas you can use to calculate the coefficient of determination (R²) of a simple linear regression .
The coefficient of determination (R²) is a number between 0 and 1 that measures how well a statistical model predicts an outcome. You can interpret the R² as the proportion of variation in the dependent variable that is predicted by the statistical model.
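Using that interpretation, R² can be sketched as one minus the ratio of residual variation to total variation (an illustrative Python snippet; the data values are made up):

```python
def r_squared(y, y_pred):
    # R² = 1 − (residual sum of squares) / (total sum of squares)
    mean_y = sum(y) / len(y)
    ss_res = sum((a - b) ** 2 for a, b in zip(y, y_pred))
    ss_tot = sum((a - mean_y) ** 2 for a in y)
    return 1 - ss_res / ss_tot

print(r_squared([1, 2, 3, 4], [1.1, 1.9, 3.2, 3.8]))  # ≈ 0.98
```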
There are three main types of missing data .
Missing completely at random (MCAR) data are randomly distributed across the variable and unrelated to other variables .
Missing at random (MAR) data are not randomly distributed but they are accounted for by other observed variables.
Missing not at random (MNAR) data systematically differ from the observed values.
To tidy up your missing data , your options usually include accepting, removing, or recreating the missing data.
Missing data are important because, depending on the type, they can sometimes bias your results. This means your results may not be generalizable outside of your study because your data come from an unrepresentative sample .
Missing data , or missing values, occur when you don’t have data stored for certain variables or participants.
In any dataset, there’s usually some missing data. In quantitative research , missing values appear as blank cells in your spreadsheet.
There are two steps to calculating the geometric mean :
Before calculating the geometric mean, note that:
The arithmetic mean is the most commonly used type of mean and is often referred to simply as “the mean.” While the arithmetic mean is based on adding and dividing values, the geometric mean multiplies and finds the root of values.
Even though the geometric mean is a less common measure of central tendency , it’s more accurate than the arithmetic mean for percentage change and positively skewed data. The geometric mean is often reported for financial indices and population growth rates.
The geometric mean is an average that multiplies all values and finds a root of the number. For a dataset with n numbers, you find the n th root of their product.
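That definition translates directly into a Python sketch (all values must be positive):

```python
from math import prod

def geometric_mean(values):
    # nth root of the product of n positive values
    return prod(values) ** (1 / len(values))

print(geometric_mean([4, 9]))  # 6.0, the square root of 4 × 9 = 36
```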
Outliers are extreme values that differ from most values in the dataset. You find outliers at the extreme ends of your dataset.
It’s best to remove outliers only when you have a sound reason for doing so.
Some outliers represent natural variations in the population , and they should be left as is in your dataset. These are called true outliers.
Other outliers are problematic and should be removed because they represent measurement errors , data entry or processing errors, or poor sampling.
You can choose from four main ways to detect outliers :
Outliers can have a big impact on your statistical analyses and skew the results of any hypothesis test if they are inaccurate.
These extreme values can impact your statistical power as well, making it hard to detect a true effect if there is one.
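One common detection rule is Tukey’s fences: flag any value more than 1.5 × IQR beyond the quartiles. A Python sketch of just this one method (other detection approaches exist):

```python
import statistics

def iqr_outliers(data):
    # Tukey's rule: values beyond 1.5 × IQR from Q1 or Q3 are flagged
    q1, _, q3 = statistics.quantiles(data, n=4)
    fence = 1.5 * (q3 - q1)
    return [x for x in data if x < q1 - fence or x > q3 + fence]

print(iqr_outliers([2, 3, 4, 5, 6, 7, 100]))  # [100]
```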
No, the steepness or slope of the line isn’t related to the correlation coefficient value. The correlation coefficient only tells you how closely your data fit on a line, so two datasets with the same correlation coefficient can have very different slopes.
To find the slope of the line, you’ll need to perform a regression analysis .
Correlation coefficients always range between -1 and 1.
The sign of the coefficient tells you the direction of the relationship: a positive value means the variables change together in the same direction, while a negative value means they change together in opposite directions.
The absolute value of a number is equal to the number without its sign. The absolute value of a correlation coefficient tells you the magnitude of the correlation: the greater the absolute value, the stronger the correlation.
These are the assumptions your data must meet if you want to use Pearson’s r :
A correlation coefficient is a single number that describes the strength and direction of the relationship between your variables.
Different types of correlation coefficients might be appropriate for your data based on their levels of measurement and distributions . The Pearson product-moment correlation coefficient (Pearson’s r ) is commonly used to assess a linear relationship between two quantitative variables.
There are various ways to improve power:
A power analysis is a calculation that helps you determine a minimum sample size for your study. It’s made up of four main components. If you know or have estimates for any three of these, you can calculate the fourth component.
Statistical analysis is the main method for analyzing quantitative research data . It uses probabilities and models to test predictions about a population from sample data.
The risk of making a Type II error is inversely related to the statistical power of a test. Power is the extent to which a test can correctly detect a real effect when there is one.
To (indirectly) reduce the risk of a Type II error, you can increase the sample size or the significance level to increase statistical power.
The risk of making a Type I error is the significance level (or alpha) that you choose. That’s a value that you set at the beginning of your study to assess the statistical probability of obtaining your results ( p value ).
The significance level is usually set at 0.05 or 5%. This means that results as extreme as yours have no more than a 5% chance of occurring if the null hypothesis is actually true.
To reduce the Type I error probability, you can set a lower significance level.
In statistics, a Type I error means rejecting the null hypothesis when it’s actually true, while a Type II error means failing to reject the null hypothesis when it’s actually false.
In statistics, power refers to the likelihood of a hypothesis test detecting a true effect if there is one. A statistically powerful test is less likely to produce a false negative (a Type II error).
If you don’t ensure enough power in your study, you may not be able to detect a statistically significant result even when it has practical significance. Your study might not have the ability to answer your research question.
While statistical significance shows that an effect exists in a study, practical significance shows that the effect is large enough to be meaningful in the real world.
Statistical significance is denoted by p -values whereas practical significance is represented by effect sizes .
There are dozens of measures of effect sizes . The most common effect sizes are Cohen’s d and Pearson’s r . Cohen’s d measures the size of the difference between two groups while Pearson’s r measures the strength of the relationship between two variables .
Effect size tells you how meaningful the relationship between variables or the difference between groups is.
A large effect size means that a research finding has practical significance, while a small effect size indicates limited practical applications.
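For example, Cohen’s d for two independent groups can be sketched as the mean difference divided by the pooled standard deviation (the data below are made up for illustration):

```python
import statistics
from math import sqrt

def cohens_d(a, b):
    # d = (mean of a − mean of b) / pooled standard deviation
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * statistics.variance(a)
                  + (nb - 1) * statistics.variance(b)) / (na + nb - 2)
    return (statistics.mean(a) - statistics.mean(b)) / sqrt(pooled_var)

print(round(cohens_d([30, 32, 31, 29], [25, 27, 26, 28]), 2))  # 3.1, a very large effect
```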
Using descriptive and inferential statistics , you can make two types of estimates about the population : point estimates and interval estimates.
Both types of estimates are important for gathering a clear idea of where a parameter is likely to lie.
Standard error and standard deviation are both measures of variability . The standard deviation reflects variability within a sample, while the standard error estimates the variability across samples of a population.
The standard error of the mean , or simply standard error , indicates how different the population mean is likely to be from a sample mean. It tells you how much the sample mean would vary if you were to repeat a study using new samples from within a single population.
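A quick Python sketch of the standard error of the mean, s / √n, on a made-up data set:

```python
import statistics
from math import sqrt

data = [2, 4, 4, 4, 5, 5, 7, 9]
se = statistics.stdev(data) / sqrt(len(data))  # sample standard deviation / √n
print(round(se, 3))  # 0.756
```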
To figure out whether a given number is a parameter or a statistic , ask yourself the following:
If the answer is yes to both questions, the number is likely to be a parameter. For small populations, data can be collected from the whole population and summarized in parameters.
If the answer is no to either of the questions, then the number is more likely to be a statistic.
The arithmetic mean is the most commonly used mean. It’s often simply called the mean or the average. But there are some other types of means you can calculate depending on your research purposes:
You can find the mean , or average, of a data set in two simple steps:
This method is the same whether you are dealing with sample or population data or positive or negative numbers.
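The two steps map directly onto one line of Python:

```python
data = [4, 8, 15, 16, 23, 42]
mean = sum(data) / len(data)  # step 1: add up all values; step 2: divide by how many there are
print(mean)  # 18.0
```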
The median is the most informative measure of central tendency for skewed distributions or distributions with outliers. For example, the median is often used as a measure of central tendency for income distributions, which are generally highly skewed.
Because the median only uses the middle one or two values, it’s unaffected by extreme outliers or non-symmetric distributions of scores. In contrast, the mean and mode can vary in skewed distributions.
To find the median , first order your data. Then calculate the middle position based on n , the number of values in your data set.
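As a Python sketch, handling both odd and even n:

```python
def median(values):
    # Sort, then take the middle value (odd n) or average the two middle values (even n)
    s = sorted(values)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 == 1 else (s[mid - 1] + s[mid]) / 2

print(median([7, 1, 3]))     # 3
print(median([7, 1, 3, 5]))  # 4.0
```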
A data set can often have no mode, one mode or more than one mode – it all depends on how many different values repeat most frequently.
Your data can be:
To find the mode :
Then you simply need to identify the most frequently occurring value.
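A small Python sketch that returns every value tied for the highest frequency, so multimodal data sets are handled too:

```python
from collections import Counter

def modes(values):
    counts = Counter(values)  # frequency of each distinct value
    top = max(counts.values())
    return [value for value, count in counts.items() if count == top]

print(modes([1, 2, 2, 3, 3]))  # [2, 3]: a bimodal data set
```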
The interquartile range is the best measure of variability for skewed distributions or data sets with outliers. Because it’s based on values that come from the middle half of the distribution, it’s unlikely to be influenced by outliers .
The two most common methods for calculating interquartile range are the exclusive and inclusive methods.
The exclusive method excludes the median when identifying Q1 and Q3, while the inclusive method includes the median as a value in the data set in identifying the quartiles.
For each of these methods, you’ll need different procedures for finding the median, Q1 and Q3 depending on whether your sample size is even- or odd-numbered. The exclusive method works best for even-numbered sample sizes, while the inclusive method is often used with odd-numbered sample sizes.
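Both methods can be sketched in Python; the only difference is whether the middle value of an odd-length data set is shared by the two halves:

```python
def _median(s):
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 == 1 else (s[mid - 1] + s[mid]) / 2

def iqr(values, inclusive=False):
    # Exclusive method: the median is dropped before splitting an odd-length data set.
    # Inclusive method: the median belongs to both halves.
    s = sorted(values)
    n = len(s)
    half = n // 2
    if n % 2 == 0:
        lower, upper = s[:half], s[half:]
    elif inclusive:
        lower, upper = s[:half + 1], s[half:]
    else:
        lower, upper = s[:half], s[half + 1:]
    return _median(upper) - _median(lower)

print(iqr([1, 2, 3, 4, 5, 6, 7]))                  # exclusive: Q3 − Q1 = 6 − 2 = 4
print(iqr([1, 2, 3, 4, 5, 6, 7], inclusive=True))  # inclusive: 5.5 − 2.5 = 3.0
```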
While the range gives you the spread of the whole data set, the interquartile range gives you the spread of the middle half of a data set.
Homoscedasticity, or homogeneity of variances, is an assumption of equal or similar variances in different groups being compared.
This is an important assumption of parametric statistical tests because they are sensitive to any dissimilarities. Uneven variances in samples result in biased and skewed test results.
Statistical tests such as variance tests or the analysis of variance (ANOVA) use sample variance to assess group differences of populations. They use the variances of the samples to assess whether the populations they come from significantly differ from each other.
Variance is the average squared deviations from the mean, while standard deviation is the square root of this number. Both measures reflect variability in a distribution, but their units differ:
Although the units of variance are harder to intuitively understand, variance is important in statistical tests .
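A sketch of both measures on a small made-up data set:

```python
data = [2, 4, 4, 4, 5, 5, 7, 9]
n = len(data)
mean = sum(data) / n
variance = sum((x - mean) ** 2 for x in data) / n  # population variance, in squared units
sd = variance ** 0.5                               # standard deviation, back in the data's units
print(variance, sd)  # 4.0 2.0
```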
The empirical rule, or the 68-95-99.7 rule, tells you where most of the values lie in a normal distribution :
The empirical rule is a quick way to get an overview of your data and check for any outliers or extreme values that don’t follow this pattern.
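The rule can be verified by simulation with standard normal draws (a quick sketch):

```python
import random

random.seed(1)
draws = [random.gauss(0, 1) for _ in range(100_000)]
# Shares within 1, 2, and 3 standard deviations: close to 0.68, 0.95, and 0.997
for k in (1, 2, 3):
    share = sum(abs(x) < k for x in draws) / len(draws)
    print(k, round(share, 2))
```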
In a normal distribution , data are symmetrically distributed with no skew. Most values cluster around a central region, with values tapering off as they go further away from the center.
The measures of central tendency (mean, mode, and median) are exactly the same in a normal distribution.
The standard deviation is the average amount of variability in your data set. It tells you, on average, how far each score lies from the mean .
In normal distributions, a high standard deviation means that values are generally far from the mean, while a low standard deviation indicates that values are clustered close to the mean.
No. Because the range formula subtracts the lowest number from the highest number, the range is always zero or a positive number.
In statistics, the range is the spread of your data from the lowest to the highest value in the distribution. It is the simplest measure of variability .
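In code, the range is simply the maximum minus the minimum (illustrative data):

```python
data = [72, 110, 134, 190, 238, 287, 305]
data_range = max(data) - min(data)  # highest value minus lowest value
print(data_range)  # 233
```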
While central tendency tells you where most of your data points lie, variability summarizes how far apart your points lie from each other.
Data sets can have the same central tendency but different levels of variability or vice versa . Together, they give you a complete picture of your data.
Variability is most commonly measured with the following descriptive statistics :
Variability tells you how far apart points lie from each other and from the center of a distribution or a data set.
Variability is also referred to as spread, scatter or dispersion.
While interval and ratio data can both be categorized, ranked, and have equal spacing between adjacent values, only ratio scales have a true zero.
For example, temperature in Celsius or Fahrenheit is at an interval scale because zero is not the lowest possible temperature. In the Kelvin scale, a ratio scale, zero represents a total lack of thermal energy.
A critical value is the value of the test statistic which defines the upper and lower bounds of a confidence interval , or which defines the threshold of statistical significance in a statistical test. It describes how far from the mean of the distribution you have to go to cover a certain amount of the total variation in the data (i.e. 90%, 95%, 99%).
If you are constructing a 95% confidence interval and are using a threshold of statistical significance of p = 0.05, then your critical value will be identical in both cases.
The t -distribution gives more probability to observations in the tails of the distribution than the standard normal distribution (a.k.a. the z -distribution).
In this way, the t -distribution is more conservative than the standard normal distribution: to reach the same level of confidence or statistical significance , you will need to include a wider range of the data.
A t -score (a.k.a. a t -value) is equivalent to the number of standard deviations away from the mean of the t -distribution .
The t -score is the test statistic used in t -tests and regression tests. It can also be used to describe how far from the mean an observation is when the data follow a t -distribution.
The t -distribution is a way of describing a set of observations where most observations fall close to the mean , and the rest of the observations make up the tails on either side. It is a type of normal distribution used for smaller sample sizes, where the variance in the data is unknown.
The t -distribution forms a bell curve when plotted on a graph. It can be described mathematically using the mean and the standard deviation .
In statistics, ordinal and nominal variables are both considered categorical variables .
Even though ordinal data can sometimes be numerical, not all mathematical operations can be performed on them.
Ordinal data has two characteristics:
However, unlike with interval data, the distances between the categories are uneven or unknown.
Nominal and ordinal are two of the four levels of measurement . Nominal level data can only be classified, while ordinal level data can be classified and ordered.
Nominal data is data that can be labelled or classified into mutually exclusive categories within a variable. These categories cannot be ordered in a meaningful way.
For example, for the nominal variable of preferred mode of transportation, you may have the categories of car, bus, train, tram or bicycle.
If your confidence interval for a difference between groups includes zero, that means that if you run your experiment again you have a good chance of finding no difference between groups.
If your confidence interval for a correlation or regression includes zero, that means that if you run your experiment again there is a good chance of finding no correlation in your data.
In both of these cases, you will also find a high p -value when you run your statistical test, meaning that your results could have occurred under the null hypothesis of no relationship between variables or no difference between groups.
If you want to calculate a confidence interval around the mean of data that is not normally distributed , you have two choices:
The standard normal distribution , also called the z -distribution, is a special normal distribution where the mean is 0 and the standard deviation is 1.
Any normal distribution can be converted into the standard normal distribution by turning the individual values into z -scores. In a z -distribution, z -scores tell you how many standard deviations away from the mean each value lies.
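Converting a raw value to a z-score takes one line; for illustration, suppose a distribution with mean 150 and standard deviation 25 (made-up numbers):

```python
def z_score(x, mean, sd):
    # Number of standard deviations x lies from the mean
    return (x - mean) / sd

print(z_score(190, 150, 25))  # 1.6 standard deviations above the mean
```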
The z -score and t -score (aka z -value and t -value) show how many standard deviations away from the mean of the distribution you are, assuming your data follow a z -distribution or a t -distribution .
These scores are used in statistical tests to show how far from the mean of the predicted distribution your statistical estimate is. If your test produces a z -score of 2.5, this means that your estimate is 2.5 standard deviations from the predicted mean.
The predicted mean and distribution of your estimate are generated by the null hypothesis of the statistical test you are using. The more standard deviations away from the predicted mean your estimate is, the less likely it is that the estimate could have occurred under the null hypothesis .
To calculate the confidence interval , you need to know:
Then you can plug these components into the confidence interval formula that corresponds to your data. The formula depends on the type of estimate (e.g. a mean or a proportion) and on the distribution of your data.
The confidence level is the percentage of times you expect to get close to the same estimate if you run your experiment again or resample the population in the same way.
The confidence interval consists of the upper and lower bounds of the estimate you expect to find at a given level of confidence.
For example, if you are estimating a 95% confidence interval around the mean proportion of female babies born every year based on a random sample of babies, you might find an upper bound of 0.56 and a lower bound of 0.48. These are the upper and lower bounds of the confidence interval. The confidence level is 95%.
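For a proportion, the 95% confidence interval is p ± 1.96 · √(p(1 − p)/n). A Python sketch in the spirit of the example above (p = 0.52 and n = 600 are made-up illustrative values):

```python
from math import sqrt

p, n = 0.52, 600  # hypothetical sample proportion and sample size
margin = 1.96 * sqrt(p * (1 - p) / n)  # z* × standard error of the proportion
print(round(p - margin, 2), round(p + margin, 2))  # 0.48 0.56
```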
The mean is the most frequently used measure of central tendency because it uses all values in the data set to give you an average.
For data from skewed distributions, the median is better than the mean because it isn’t influenced by extremely large values.
The mode is the only measure you can use for nominal or categorical data that can’t be ordered.
The measures of central tendency you can use depends on the level of measurement of your data.
Measures of central tendency help you find the middle, or the average, of a data set.
The 3 most common measures of central tendency are the mean, median and mode.
Some variables have fixed levels. For example, gender and ethnicity are always nominal level data because they cannot be ranked.
However, for other variables, you can choose the level of measurement . For example, income is a variable that can be recorded on an ordinal or a ratio scale:
If you have a choice, the ratio level is always preferable because you can analyze data in more ways. The higher the level of measurement, the more precise your data is.
The level at which you measure a variable determines how you can analyze your data.
Depending on the level of measurement , you can perform different descriptive statistics to get an overall summary of your data and inferential statistics to see if your results support or refute your hypothesis .
Levels of measurement tell you how precisely variables are recorded. There are 4 levels of measurement, which can be ranked from low to high:
No. The p -value only tells you how likely the data you have observed is to have occurred under the null hypothesis .
If the p -value is below your threshold of significance (typically p < 0.05), then you can reject the null hypothesis, but this does not necessarily mean that your alternative hypothesis is true.
The alpha value, or the threshold for statistical significance , is arbitrary – which value you use depends on your field of study.
In most cases, researchers use an alpha of 0.05, which means that there is a less than 5% chance that the data being tested could have occurred under the null hypothesis.
P -values are usually automatically calculated by the program you use to perform your statistical test. They can also be estimated using p -value tables for the relevant test statistic .
P -values are calculated from the null distribution of the test statistic. They tell you how often a test statistic is expected to occur under the null hypothesis of the statistical test, based on where it falls in the null distribution.
If the test statistic is far from the mean of the null distribution, then the p -value will be small, showing that the test statistic is not likely to have occurred under the null hypothesis.
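For a z test statistic, this can be sketched with the standard normal CDF, which the standard library exposes via math.erf:

```python
from math import erf, sqrt

def two_tailed_p(z):
    # Probability of a statistic at least |z| standard deviations from the mean,
    # in either direction, under a standard normal null distribution
    tail = 1 - 0.5 * (1 + erf(abs(z) / sqrt(2)))
    return 2 * tail

print(round(two_tailed_p(1.96), 3))  # 0.05
```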
A p -value , or probability value, is a number describing how likely it is that your data would have occurred under the null hypothesis of your statistical test .
The test statistic you use will be determined by the statistical test.
You can choose the right statistical test by looking at what type of data you have collected and what type of relationship you want to test.
The test statistic will change based on the number of observations in your data, how variable your observations are, and how strong the underlying patterns in the data are.
For example, if one data set has higher variability while another has lower variability, the first data set will produce a test statistic closer to the null hypothesis , even if the true correlation between two variables is the same in either data set.
The formula for the test statistic depends on the statistical test being used.
Generally, the test statistic is calculated as the pattern in your data (i.e. the correlation between variables or difference between groups) divided by the variance in the data (i.e. the standard deviation ).
The 3 main types of descriptive statistics concern the frequency distribution, central tendency, and variability of a dataset.
Descriptive statistics summarize the characteristics of a data set. Inferential statistics allow you to test a hypothesis or assess whether your data is generalizable to the broader population.
In statistics, model selection is a process researchers use to compare the relative value of different statistical models and determine which one is the best fit for the observed data.
The Akaike information criterion is one of the most common methods of model selection. AIC weights the ability of the model to predict the observed data against the number of parameters the model requires to reach that level of precision.
AIC model selection can help researchers find a model that explains the observed variation in their data while avoiding overfitting.
In statistics, a model is the collection of one or more independent variables and their predicted interactions that researchers use to try to explain variation in their dependent variable.
You can test a model using a statistical test . To compare how well different models fit your data, you can use Akaike’s information criterion for model selection.
The Akaike information criterion is calculated from the maximum log-likelihood of the model and the number of parameters (K) used to reach that likelihood. The AIC function is 2K – 2(log-likelihood) .
Lower AIC values indicate a better-fit model, and a model whose AIC is more than 2 units lower than another’s (a delta-AIC greater than 2) is considered significantly better than the model it is being compared to.
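The formula is a one-liner; the parameter counts and log-likelihoods below are made up purely to illustrate the comparison:

```python
def aic(k, log_likelihood):
    # AIC = 2K − 2 · (log-likelihood); lower is better
    return 2 * k - 2 * log_likelihood

aic_simple = aic(3, -120.5)  # hypothetical 3-parameter model
aic_full = aic(5, -119.9)    # 2 extra parameters buy only a slightly better likelihood
print(aic_simple, aic_full)  # 247.0 249.8: the simpler model wins by more than 2 AIC units
```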
The Akaike information criterion is a mathematical test used to evaluate how well a model fits the data it is meant to describe. It penalizes models which use more independent variables (parameters) as a way to avoid over-fitting.
AIC is most often used to compare the relative goodness-of-fit among different models under consideration and to then choose the model that best fits the data.
A factorial ANOVA is any ANOVA that uses more than one categorical independent variable . A two-way ANOVA is a type of factorial ANOVA.
Some examples of factorial ANOVAs include:
In ANOVA, the null hypothesis is that there is no difference among group means. If any group differs significantly from the overall group mean, then the ANOVA will report a statistically significant result.
Significant differences among group means are calculated using the F statistic, which is the ratio of the mean sum of squares (the variance explained by the independent variable) to the mean square error (the variance left over).
If the F statistic is higher than the critical value (the value of F that corresponds with your alpha value, usually 0.05), then the difference among groups is deemed statistically significant.
The only difference between one-way and two-way ANOVA is the number of independent variables . A one-way ANOVA has one independent variable, while a two-way ANOVA has two.
All ANOVAs are designed to test for differences among three or more groups. If you are only testing for a difference between two groups, use a t-test instead.
Multiple linear regression is a regression model that estimates the relationship between a quantitative dependent variable and two or more independent variables using a straight line.
Linear regression most often uses mean-square error (MSE) to calculate the error of the model. MSE is calculated by:
Linear regression fits a line to the data by finding the regression coefficient that results in the smallest MSE.
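The least-squares coefficients for simple linear regression have a closed form, sketched here on data that lie exactly on the line y = 2x + 1:

```python
def fit_line(x, y):
    # slope = Σ(x − x̄)(y − ȳ) / Σ(x − x̄)²; intercept = ȳ − slope · x̄
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    return slope, my - slope * mx

slope, intercept = fit_line([1, 2, 3, 4], [3, 5, 7, 9])
print(slope, intercept)  # 2.0 1.0
```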
Simple linear regression is a regression model that estimates the relationship between one independent variable and one dependent variable using a straight line. Both variables should be quantitative.
For example, the relationship between temperature and the expansion of mercury in a thermometer can be modeled using a straight line: as temperature increases, the mercury expands. This linear relationship is so certain that we can use mercury thermometers to measure temperature.
A regression model is a statistical model that estimates the relationship between one dependent variable and one or more independent variables using a line (or a plane in the case of two or more independent variables).
A regression model can be used when the dependent variable is quantitative, except in the case of logistic regression, where the dependent variable is binary.
A t-test should not be used to measure differences among more than two groups, because the error structure for a t-test will underestimate the actual error when many groups are being compared.
If you want to compare the means of several groups at once, it’s best to use another statistical test such as ANOVA, followed by a post-hoc test to identify which specific groups differ.
A one-sample t-test is used to compare a single population to a standard value (for example, to determine whether the average lifespan of a specific town is different from the country average).
A paired t-test is used to compare a single population before and after some experimental intervention or at two different points in time (for example, measuring student performance on a test before and after being taught the material).
A t-test measures the difference in group means divided by the pooled standard error of the two group means.
In this way, it calculates a number (the t-value) illustrating the magnitude of the difference between the two group means being compared, and estimates the likelihood that this difference exists purely by chance (p-value).
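The calculation just described can be sketched in a few lines of pure Python. The function name and the sample values below are invented for illustration; this is the pooled (equal-variance) form of the two-sample t statistic:

```python
# Sketch: two-sample t statistic with a pooled standard error.
# two_sample_t and the sample data are invented for illustration.

import math

def two_sample_t(a, b):
    """t = (difference in group means) / (pooled standard error)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    # Sample variances (n - 1 in the denominator)
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    # Pooled variance, then the standard error of the difference in means
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)
    se = math.sqrt(sp2 * (1 / na + 1 / nb))
    return (ma - mb) / se

t = two_sample_t([5.1, 4.9, 5.6, 5.2], [4.2, 4.4, 4.0, 4.6])
print(round(t, 3))
```

The p-value would then come from comparing this t value against the t distribution with na + nb − 2 degrees of freedom.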
Your choice of t-test depends on whether you are studying one group or two groups, and whether you care about the direction of the difference in group means.
If you are studying one group, use a paired t-test to compare the group mean over time or after an intervention, or use a one-sample t-test to compare the group mean to a standard value. If you are studying two groups, use a two-sample t-test .
If you want to know only whether a difference exists, use a two-tailed test . If you want to know whether one group mean is greater or less than the other, use a one-tailed test (left-tailed or right-tailed, depending on the expected direction of the difference).
A t-test is a statistical test that compares the means of two samples . It is used in hypothesis testing , with a null hypothesis that the difference in group means is zero and an alternate hypothesis that the difference in group means is different from zero.
Statistical significance is a term used by researchers to state that it is unlikely their observations could have occurred under the null hypothesis of a statistical test . Significance is usually denoted by a p -value , or probability value.
Statistical significance is arbitrary – it depends on the threshold, or alpha value, chosen by the researcher. The most common threshold is p < 0.05, which means that the data is likely to occur less than 5% of the time under the null hypothesis .
When the p -value falls below the chosen alpha value, then we say the result of the test is statistically significant.
A test statistic is a number calculated by a statistical test . It describes how far your observed data is from the null hypothesis of no relationship between variables or no difference among sample groups.
The test statistic tells you how different two or more groups are from the overall population mean , or how different a linear slope is from the slope predicted by a null hypothesis . Different test statistics are used in different statistical tests.
Statistical tests commonly assume that: (1) the observations are independent of one another, (2) the variance within each group being compared is similar (homogeneity of variance), and (3) the data are approximately normally distributed.
If your data do not meet these assumptions, you might still be able to use a nonparametric statistical test , which has fewer requirements but also makes weaker inferences.
The alternative hypothesis states that there is a statistically significant relationship between two variables, whereas the null hypothesis states that there is no statistical relationship between the two variables. In statistics, we come across various kinds of hypotheses. A statistical hypothesis is a working statement that is assumed to be consistent with the given data; note that a hypothesis is neither considered true nor false until it has been tested.
The alternative hypothesis is a statement used in statistical inference. It contradicts the null hypothesis and is denoted by H a or H 1 ; we can also say that it is simply an alternative to the null. In hypothesis testing, the alternative hypothesis is the statement the researcher is actually testing: it reflects the researcher’s point of view, and the aim is to reject the null hypothesis and replace it with this alternative. Under the alternative hypothesis, the researcher predicts a difference between two or more variables, such that the pattern of data observed in the test is not due to chance.
For example, suppose researchers observe the water quality of a river for one year. Under the null hypothesis, there is no change in water quality between the first half of the year and the second half. Under the alternative hypothesis, the water quality is poorer in the second half of the year.
| Null Hypothesis | Alternative Hypothesis |
| --- | --- |
| It denotes that there is no relationship between two measured phenomena. | It denotes that a random cause may influence the observed data or sample. |
| It is represented by H 0 . | It is represented by H a or H 1 . |
| Example: Rohan will win at least Rs. 100000 in the lucky draw. | Example: Rohan will win less than Rs. 100000 in the lucky draw. |
There are three types of alternative hypothesis:
Left-tailed: the population proportion (π) is expected to be less than a specified value, denoted by π 0 , such that:
H 1 : π < π 0
Right-tailed: the population proportion (π) is expected to be greater than the specified value π 0 :
H 1 : π > π 0
Two-tailed: the population proportion (π) is expected to be not equal to the specified value π 0 :
H 1 : π ≠ π 0
Note: For all three alternative hypotheses, the null hypothesis is H 0 : π = π 0 .
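The three tails map directly onto how a p-value is computed. Below is a hedged pure-Python sketch of a one-proportion z test using the normal approximation; the function name and the counts in the example are invented for illustration:

```python
# Sketch: one-proportion z test via the normal approximation.
# proportion_z_test and the counts below are invented for illustration.

import math

def _normal_cdf(x):
    """Standard normal CDF, computed from the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def proportion_z_test(successes, n, pi0, tail):
    """tail: 'left' (H1: pi < pi0), 'right' (H1: pi > pi0), 'two' (H1: pi != pi0)."""
    p_hat = successes / n
    se = math.sqrt(pi0 * (1 - pi0) / n)   # standard error under H0: pi = pi0
    z = (p_hat - pi0) / se
    if tail == "left":
        p = _normal_cdf(z)                # probability of a z this low or lower
    elif tail == "right":
        p = 1 - _normal_cdf(z)            # probability of a z this high or higher
    else:
        p = 2 * (1 - _normal_cdf(abs(z))) # both tails
    return z, p

# Right-tailed test: H0: pi = 0.40 vs H1: pi > 0.40,
# with 210 successes in 500 trials (p_hat = 0.42).
z, p = proportion_z_test(210, 500, 0.40, "right")
print(round(z, 3), round(p, 4))
```

With a p-value well above 0.05 here, this sample would not give enough evidence to reject the null hypothesis.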
MATHS Related Links | |
Register with byju's & watch live videos.
The actual test begins by considering two hypotheses . They are called the null hypothesis and the alternative hypothesis . These hypotheses contain opposing viewpoints.
H 0 : The null hypothesis: It is a statement of no difference between the variables—they are not related. This can often be considered the status quo and as a result if you cannot accept the null it requires some action.
H a : The alternative hypothesis: It is a claim about the population that is contradictory to H 0 and what we conclude when we reject H 0 . This is usually what the researcher is trying to prove.
Since the null and alternative hypotheses are contradictory, you must examine evidence to decide if you have enough evidence to reject the null hypothesis or not. The evidence is in the form of sample data.
After you have determined which hypothesis the sample supports, you make a decision. There are two options for a decision. They are "reject H 0 " if the sample information favors the alternative hypothesis or "do not reject H 0 " or "decline to reject H 0 " if the sample information is insufficient to reject the null hypothesis.
Mathematical Symbols Used in H 0 and H a :
| H 0 | H a |
| --- | --- |
| equal (=) | not equal (≠) or greater than (>) or less than (<) |
| greater than or equal to (≥) | less than (<) |
| less than or equal to (≤) | more than (>) |
In an issue of U.S. News and World Report , an article on school standards stated that about half of all students in France, Germany, and Israel take advanced placement exams and a third pass. The same article stated that 6.6% of U.S. students take advanced placement exams and 4.4% pass. Test if the percentage of U.S. students who take advanced placement exams is more than 6.6%. State the null and alternative hypotheses. H 0 : p ≤ 0.066 H a : p > 0.066
On a state driver’s test, about 40% pass the test on the first try. We want to test if more than 40% pass on the first try. Fill in the correct symbol (=, ≠, ≥, <, ≤, >) for the null and alternative hypotheses.
Bring to class a newspaper, some news magazines, and some Internet articles . In groups, find articles from which your group can write null and alternative hypotheses. Discuss your hypotheses with the rest of the class.
Want to cite, share, or modify this book? This book uses the Creative Commons Attribution License and you must attribute OpenStax.
Access for free at https://openstax.org/books/introductory-statistics-2e/pages/1-introduction
© Jul 18, 2024 OpenStax. Textbook content produced by OpenStax is licensed under a Creative Commons Attribution License . The OpenStax name, OpenStax logo, OpenStax book covers, OpenStax CNX name, and OpenStax CNX logo are not subject to the Creative Commons license and may not be reproduced without the prior and express written consent of Rice University.