
Hypothesis Testing | A Step-by-Step Guide with Easy Examples

Published on November 8, 2019 by Rebecca Bevans. Revised on June 22, 2023.

Hypothesis testing is a formal procedure for investigating our ideas about the world using statistics. It is most often used by scientists to test specific predictions, called hypotheses, that arise from theories.

There are 5 main steps in hypothesis testing:

  • State your research hypothesis as a null hypothesis (H0) and alternate hypothesis (Ha or H1).
  • Collect data in a way designed to test the hypothesis.
  • Perform an appropriate statistical test.
  • Decide whether to reject or fail to reject your null hypothesis.
  • Present the findings in your results and discussion section.
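As a sketch of how these steps map onto code, the snippet below runs a one-tailed large-sample z test on the height example used later in this article. The summary statistics are hypothetical, and the test choice is an assumption for illustration; a t test would be the usual choice when the population standard deviations are unknown and samples are small.

```python
from statistics import NormalDist

# Steps 1-2: H0: no difference in mean height; Ha: men are taller (one-tailed).
# Hypothetical summary statistics from two collected samples:
n1, mean1, sd1 = 50, 175.3, 7.1   # men, heights in cm
n2, mean2, sd2 = 50, 168.8, 6.9   # women, heights in cm

# Step 3: large-sample z test on the difference in means
se = (sd1**2 / n1 + sd2**2 / n2) ** 0.5
z = (mean1 - mean2) / se
p_value = 1 - NormalDist().cdf(z)          # one-tailed p-value

# Step 4: decision at the conventional 0.05 significance level
reject_null = p_value < 0.05

# Step 5: report the estimated difference and the p-value
print(f"difference = {mean1 - mean2:.1f} cm, z = {z:.2f}, p = {p_value:.2g}")
```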

Though the specific details might vary, the procedure you will use when testing a hypothesis will always follow some version of these steps.

Table of contents

  • Step 1: State your null and alternate hypothesis
  • Step 2: Collect data
  • Step 3: Perform a statistical test
  • Step 4: Decide whether to reject or fail to reject your null hypothesis
  • Step 5: Present your findings
  • Other interesting articles
  • Frequently asked questions about hypothesis testing

After developing your initial research hypothesis (the prediction that you want to investigate), it is important to restate it as a null (H0) and alternate (Ha) hypothesis so that you can test it mathematically.

The alternate hypothesis is usually your initial hypothesis that predicts a relationship between variables. The null hypothesis is a prediction of no relationship between the variables you are interested in.

  • H0: Men are, on average, not taller than women. Ha: Men are, on average, taller than women.


For a statistical test to be valid, it is important to perform sampling and collect data in a way that is designed to test your hypothesis. If your data are not representative, then you cannot make statistical inferences about the population you are interested in.

There are a variety of statistical tests available, but they are all based on the comparison of within-group variance (how spread out the data is within a category) versus between-group variance (how different the categories are from one another).

If the between-group variance is large enough that there is little or no overlap between groups, then your statistical test will reflect that by showing a low p-value. This means it is unlikely that the differences between these groups came about by chance.

Alternatively, if there is high within-group variance and low between-group variance, then your statistical test will reflect that with a high p-value. This means it is likely that any difference you measure between groups is due to chance.

Your choice of statistical test will be based on the type of variables and the level of measurement of your collected data. For example, a t test comparing the average heights of men and women will give you:

  • an estimate of the difference in average height between the two groups.
  • a p-value showing how likely you are to see this difference if the null hypothesis of no difference is true.

Based on the outcome of your statistical test, you will have to decide whether to reject or fail to reject your null hypothesis.

In most cases you will use the p-value generated by your statistical test to guide your decision. And in most cases, your predetermined level of significance for rejecting the null hypothesis will be 0.05 – that is, when there is a less than 5% chance that you would see these results if the null hypothesis were true.

In some cases, researchers choose a more conservative level of significance, such as 0.01 (1%). This minimizes the risk of incorrectly rejecting the null hypothesis (Type I error).
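The decision rule described above reduces to a single comparison against the significance level; the p-values below are hypothetical:

```python
# Decision rule: reject H0 when the p-value is below the significance level.
alpha = 0.05                       # predetermined significance level
for p in (0.003, 0.04, 0.20):      # hypothetical p-values from three tests
    decision = "reject H0" if p < alpha else "fail to reject H0"
    print(f"p = {p}: {decision}")
```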


The results of hypothesis testing will be presented in the results and discussion sections of your research paper, dissertation or thesis.

In the results section you should give a brief summary of the data and a summary of the results of your statistical test (for example, the estimated difference between group means and associated p-value). In the discussion, you can discuss whether your initial hypothesis was supported by your results or not.

In the formal language of hypothesis testing, we talk about rejecting or failing to reject the null hypothesis. You will probably be asked to do this in your statistics assignments.

However, when presenting research results in academic papers we rarely talk this way. Instead, we go back to our alternate hypothesis (in this case, the hypothesis that men are on average taller than women) and state whether the result of our test did or did not support the alternate hypothesis.

If your null hypothesis was rejected, this result is interpreted as “supported the alternate hypothesis.”

These are superficial differences; you can see that they mean the same thing.

You might notice that we don’t say that we reject or fail to reject the alternate hypothesis. This is because hypothesis testing is not designed to prove or disprove anything. It is only designed to test whether a pattern we measure could have arisen spuriously, or by chance.

If we reject the null hypothesis based on our research (i.e., we find that it is unlikely that the pattern arose by chance), then we can say our test lends support to our hypothesis . But if the pattern does not pass our decision rule, meaning that it could have arisen by chance, then we say the test is inconsistent with our hypothesis .

Other interesting articles

If you want to know more about statistics, methodology, or research bias, make sure to check out some of our other articles with explanations and examples.

Statistics

  • Normal distribution
  • Descriptive statistics
  • Measures of central tendency
  • Correlation coefficient

Methodology

  • Cluster sampling
  • Stratified sampling
  • Types of interviews
  • Cohort study
  • Thematic analysis

Research bias

  • Implicit bias
  • Cognitive bias
  • Survivorship bias
  • Availability heuristic
  • Nonresponse bias
  • Regression to the mean

Frequently asked questions about hypothesis testing

What is hypothesis testing?

Hypothesis testing is a formal procedure for investigating our ideas about the world using statistics. It is used by scientists to test specific predictions, called hypotheses, by calculating how likely it is that a pattern or relationship between variables could have arisen by chance.

What is a hypothesis?

A hypothesis states your predictions about what your research will find. It is a tentative answer to your research question that has not yet been tested. For some research projects, you might have to write several hypotheses that address different aspects of your research question.

A hypothesis is not just a guess — it should be based on existing theories and knowledge. It also has to be testable, which means you can support or refute it through scientific research methods (such as experiments, observations and statistical analysis of data).

What are null and alternative hypotheses?

Null and alternative hypotheses are used in statistical hypothesis testing. The null hypothesis of a test always predicts no effect or no relationship between variables, while the alternative hypothesis states your research prediction of an effect or relationship.

Cite this Scribbr article


Bevans, R. (2023, June 22). Hypothesis Testing | A Step-by-Step Guide with Easy Examples. Scribbr. Retrieved August 10, 2024, from https://www.scribbr.com/statistics/hypothesis-testing/



Hypothesis Testing: 4 Steps and Example


Hypothesis testing, sometimes called significance testing, is an act in statistics whereby an analyst tests an assumption regarding a population parameter. The methodology employed by the analyst depends on the nature of the data used and the reason for the analysis.

Hypothesis testing is used to assess the plausibility of a hypothesis by using sample data. Such data may come from a larger population or a data-generating process. The word "population" will be used for both of these cases in the following descriptions.

Key Takeaways

  • Hypothesis testing is used to assess the plausibility of a hypothesis by using sample data.
  • The test provides evidence concerning the plausibility of the hypothesis, given the data.
  • Statistical analysts test a hypothesis by measuring and examining a random sample of the population being analyzed.
  • The four steps of hypothesis testing include stating the hypotheses, formulating an analysis plan, analyzing the sample data, and analyzing the result.

How Hypothesis Testing Works

In hypothesis testing, an  analyst  tests a statistical sample, intending to provide evidence on the plausibility of the null hypothesis. Statistical analysts measure and examine a random sample of the population being analyzed. All analysts use a random population sample to test two different hypotheses: the null hypothesis and the alternative hypothesis.

The null hypothesis is usually a hypothesis of equality between population parameters; e.g., a null hypothesis may state that the population mean return is equal to zero. The alternative hypothesis is effectively the opposite of a null hypothesis. Thus, they are mutually exclusive , and only one can be true. However, one of the two hypotheses will always be true.

The null hypothesis is a statement about a population parameter, such as the population mean, that is assumed to be true.

The four steps of the hypothesis testing process are:

  • State the hypotheses.
  • Formulate an analysis plan, which outlines how the data will be evaluated.
  • Carry out the plan and analyze the sample data.
  • Analyze the results and either reject the null hypothesis, or state that the null hypothesis is plausible, given the data.

Example of Hypothesis Testing

If an individual wants to test that a penny has exactly a 50% chance of landing on heads, the null hypothesis would be that 50% is correct, and the alternative hypothesis would be that 50% is not correct. Mathematically, the null hypothesis is represented as H0: P = 0.5. The alternative hypothesis is represented as Ha: P ≠ 0.5, meaning that the probability of landing on heads does not equal 50%.

A random sample of 100 coin flips is taken, and the null hypothesis is tested. If it is found that the 100 coin flips were distributed as 40 heads and 60 tails, the analyst would assume that a penny does not have a 50% chance of landing on heads and would reject the null hypothesis and accept the alternative hypothesis.

If there were 48 heads and 52 tails, then it is plausible that the coin could be fair and still produce such a result. In cases such as this where the null hypothesis is "accepted," the analyst states that the difference between the expected results (50 heads and 50 tails) and the observed results (48 heads and 52 tails) is "explainable by chance alone."
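The coin example can be checked numerically. The sketch below uses the normal approximation to the binomial distribution; an exact binomial test would give a somewhat larger, more conservative p-value in the 40-heads case:

```python
from statistics import NormalDist

def coin_test(heads, flips=100, p0=0.5, alpha=0.05):
    """Two-sided z test (normal approximation) for H0: P(heads) = p0."""
    p_hat = heads / flips
    se = (p0 * (1 - p0) / flips) ** 0.5          # standard error under H0
    z = (p_hat - p0) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z))) # two-sided p-value
    return z, p_value, p_value < alpha

print(coin_test(40))   # z ≈ -2.0: reject H0
print(coin_test(48))   # z ≈ -0.4: fail to reject H0
```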

When Did Hypothesis Testing Begin?

Some statisticians attribute the first hypothesis tests to satirical writer John Arbuthnot in 1710, who studied male and female births in England after observing that in nearly every year, male births exceeded female births by a slight proportion. Arbuthnot calculated that the probability of this happening by chance was small, and therefore it was due to “divine providence.”

What are the Benefits of Hypothesis Testing?

Hypothesis testing helps assess the accuracy of new ideas or theories by testing them against data. This allows researchers to determine whether the evidence supports their hypothesis, helping to avoid false claims and conclusions. Hypothesis testing also provides a framework for decision-making based on data rather than personal opinions or biases. By relying on statistical analysis, hypothesis testing helps to reduce the effects of chance and confounding variables, providing a robust framework for making informed conclusions.

What are the Limitations of Hypothesis Testing?

Hypothesis testing relies exclusively on data and doesn’t provide a comprehensive understanding of the subject being studied. Additionally, the accuracy of the results depends on the quality of the available data and the statistical methods used. Inaccurate data or inappropriate hypothesis formulation may lead to incorrect conclusions or failed tests. Hypothesis testing can also lead to errors, such as analysts either accepting or rejecting a null hypothesis when they shouldn’t have. These errors may result in false conclusions or missed opportunities to identify significant patterns or relationships in the data.

Hypothesis testing refers to a statistical process that helps researchers determine the reliability of a study. By using a well-formulated hypothesis and set of statistical tests, individuals or businesses can make inferences about the population that they are studying and draw conclusions based on the data presented. All hypothesis testing methods have the same four-step process, which includes stating the hypotheses, formulating an analysis plan, analyzing the sample data, and analyzing the result.

Sage. " Introduction to Hypothesis Testing ," Page 4.

Elder Research. " Who Invented the Null Hypothesis? "

Formplus. " Hypothesis Testing: Definition, Uses, Limitations and Examples ."


Hypothesis Testing

Hypothesis testing is a tool for making statistical inferences about the population data. It is an analysis tool that tests assumptions and determines how likely something is within a given standard of accuracy. Hypothesis testing provides a way to verify whether the results of an experiment are valid.

A null hypothesis and an alternative hypothesis are set up before performing the hypothesis testing. This helps to arrive at a conclusion regarding the sample obtained from the population. In this article, we will learn more about hypothesis testing, its types, steps to perform the testing, and associated examples.


What is Hypothesis Testing in Statistics?

Hypothesis testing uses sample data from the population to draw useful conclusions regarding the population probability distribution . It tests an assumption made about the data using different types of hypothesis testing methodologies. The hypothesis testing results in either rejecting or not rejecting the null hypothesis.

Hypothesis Testing Definition

Hypothesis testing can be defined as a statistical tool that is used to identify if the results of an experiment are meaningful or not. It involves setting up a null hypothesis and an alternative hypothesis. These two hypotheses will always be mutually exclusive. This means that if the null hypothesis is true then the alternative hypothesis is false and vice versa. An example of hypothesis testing is setting up a test to check if a new medicine works on a disease in a more efficient manner.

Null Hypothesis

The null hypothesis is a concise mathematical statement that is used to indicate that there is no difference between two possibilities. In other words, there is no difference between certain characteristics of data. This hypothesis assumes that the outcomes of an experiment are based on chance alone. It is denoted as \(H_{0}\). Hypothesis testing is used to conclude if the null hypothesis can be rejected or not. Suppose an experiment is conducted to check if girls are shorter than boys at the age of 5. The null hypothesis will say that they are the same height.

Alternative Hypothesis

The alternative hypothesis is an alternative to the null hypothesis. It is used to show that the observations of an experiment are due to some real effect. It indicates that there is a statistical significance between two possible outcomes and can be denoted as \(H_{1}\) or \(H_{a}\). For the above-mentioned example, the alternative hypothesis would be that girls are shorter than boys at the age of 5.

Hypothesis Testing P Value

In hypothesis testing, the p value is used to indicate whether the results obtained after conducting a test are statistically significant or not. It also indicates the probability of making an error in rejecting or not rejecting the null hypothesis. This value is always a number between 0 and 1. The p value is compared to an alpha level, \(\alpha\), or significance level. The alpha level can be defined as the acceptable risk of incorrectly rejecting the null hypothesis. The alpha level is usually chosen between 1% and 5%.

Hypothesis Testing Critical Region

All sets of values that lead to rejecting the null hypothesis lie in the critical region. Furthermore, the value that separates the critical region from the non-critical region is known as the critical value.

Hypothesis Testing Formula

Depending upon the type of data available and its size, different types of hypothesis testing are used to determine whether the null hypothesis can be rejected or not. The formulas for some important test statistics are given below:

  • z = \(\frac{\overline{x}-\mu}{\frac{\sigma}{\sqrt{n}}}\). \(\overline{x}\) is the sample mean, \(\mu\) is the population mean, \(\sigma\) is the population standard deviation and n is the size of the sample.
  • t = \(\frac{\overline{x}-\mu}{\frac{s}{\sqrt{n}}}\). s is the sample standard deviation.
  • \(\chi ^{2} = \sum \frac{(O_{i}-E_{i})^{2}}{E_{i}}\). \(O_{i}\) is the observed value and \(E_{i}\) is the expected value.

We will learn more about these test statistics in the upcoming section.
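As a quick sketch, the three formulas above translate directly into Python; the numbers in the example calls are hypothetical and chosen only for illustration:

```python
import math

def z_stat(xbar, mu, sigma, n):
    """z statistic: sample mean vs population mean, known population sd."""
    return (xbar - mu) / (sigma / math.sqrt(n))

def t_stat(xbar, mu, s, n):
    """t statistic: same form, but using the sample standard deviation s."""
    return (xbar - mu) / (s / math.sqrt(n))

def chi_square_stat(observed, expected):
    """Chi-square statistic from paired observed and expected counts."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Hypothetical numbers for illustration:
print(z_stat(112.5, 100, 15, 30))           # ≈ 4.56
print(chi_square_stat([48, 52], [50, 50]))  # ≈ 0.16
```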

Types of Hypothesis Testing

Selecting the correct test for performing hypothesis testing can be confusing. These tests are used to determine a test statistic on the basis of which the null hypothesis can either be rejected or not rejected. Some of the important tests used for hypothesis testing are given below.

Hypothesis Testing Z Test

A z test is a way of hypothesis testing that is used for a large sample size (n ≥ 30). It is used to determine whether there is a difference between the population mean and the sample mean when the population standard deviation is known. It can also be used to compare the mean of two samples. It is used to compute the z test statistic. The formulas are given as follows:

  • One sample: z = \(\frac{\overline{x}-\mu}{\frac{\sigma}{\sqrt{n}}}\).
  • Two samples: z = \(\frac{(\overline{x_{1}}-\overline{x_{2}})-(\mu_{1}-\mu_{2})}{\sqrt{\frac{\sigma_{1}^{2}}{n_{1}}+\frac{\sigma_{2}^{2}}{n_{2}}}}\).

Hypothesis Testing t Test

The t test is another method of hypothesis testing that is used for a small sample size (n < 30). It is also used to compare the sample mean and population mean. However, the population standard deviation is not known. Instead, the sample standard deviation is known. The mean of two samples can also be compared using the t test.

  • One sample: t = \(\frac{\overline{x}-\mu}{\frac{s}{\sqrt{n}}}\).
  • Two samples: t = \(\frac{(\overline{x_{1}}-\overline{x_{2}})-(\mu_{1}-\mu_{2})}{\sqrt{\frac{s_{1}^{2}}{n_{1}}+\frac{s_{2}^{2}}{n_{2}}}}\).
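The two-sample statistic can be sketched the same way, using the unpooled standard error from the formula above; the summary statistics here are hypothetical:

```python
import math

def two_sample_t(x1, s1, n1, x2, s2, n2):
    """Two-sample t statistic under H0: mu1 = mu2, unpooled standard error."""
    se = math.sqrt(s1**2 / n1 + s2**2 / n2)
    return (x1 - x2) / se

# Hypothetical summary statistics for two groups:
t = two_sample_t(88.0, 9.5, 25, 82.0, 10.2, 28)
print(round(t, 2))   # ≈ 2.22
```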

Hypothesis Testing Chi Square

The Chi square test is a hypothesis testing method that is used to check whether the variables in a population are independent or not. It is used when the test statistic is chi-squared distributed.

One Tailed Hypothesis Testing

One tailed hypothesis testing is done when the rejection region is only in one direction. It is also known as directional hypothesis testing because the effect can be tested in one direction only. This type of testing is further classified into the right tailed test and left tailed test.

Right Tailed Hypothesis Testing

The right tail test is also known as the upper tail test. This test is used to check whether the population parameter is greater than some value. The null and alternative hypotheses for this test are given as follows:

\(H_{0}\): The population parameter is ≤ some value

\(H_{1}\): The population parameter is > some value.

If the test statistic has a greater value than the critical value, then the null hypothesis is rejected.

(Figure: rejection region in the right tail of the distribution)

Left Tailed Hypothesis Testing

The left tail test is also known as the lower tail test. It is used to check whether the population parameter is less than some value. The hypotheses for this hypothesis testing can be written as follows:

\(H_{0}\): The population parameter is ≥ some value

\(H_{1}\): The population parameter is < some value.

The null hypothesis is rejected if the test statistic has a value lesser than the critical value.

(Figure: rejection region in the left tail of the distribution)

Two Tailed Hypothesis Testing

In this hypothesis testing method, the critical region lies on both sides of the sampling distribution. It is also known as a non-directional hypothesis testing method. The two-tailed test is used when it needs to be determined whether the population parameter differs from some value. The hypotheses can be set up as follows:

\(H_{0}\): the population parameter = some value

\(H_{1}\): the population parameter ≠ some value

The null hypothesis is rejected if the absolute value of the test statistic is greater than the critical value, that is, if the test statistic falls in either tail of the distribution.

(Figure: rejection regions in both tails of the distribution)

Hypothesis Testing Steps

Hypothesis testing can be easily performed in five simple steps. The most important step is to correctly set up the hypotheses and identify the right method for hypothesis testing. The basic steps to perform hypothesis testing are as follows:

  • Step 1: Set up the null hypothesis by correctly identifying whether it is the left-tailed, right-tailed, or two-tailed hypothesis testing.
  • Step 2: Set up the alternative hypothesis.
  • Step 3: Choose the correct significance level, \(\alpha\), and find the critical value.
  • Step 4: Calculate the correct test statistic (z, t or \(\chi\)) and p-value.
  • Step 5: Compare the test statistic with the critical value or compare the p-value with \(\alpha\) to arrive at a conclusion. In other words, decide if the null hypothesis is to be rejected or not.

Hypothesis Testing Example

The best way to solve a problem on hypothesis testing is by applying the 5 steps mentioned in the previous section. Suppose a researcher claims that the mean weight of men is greater than 100 kg, where the population standard deviation is 15 kg. A sample of 30 men is chosen, with an average weight of 112.5 kg. Using hypothesis testing, check if there is enough evidence to support the researcher's claim. The confidence level is 95%.

Step 1: This is an example of a right-tailed test. Set up the null hypothesis as \(H_{0}\): \(\mu\) = 100.

Step 2: The alternative hypothesis is given by \(H_{1}\): \(\mu\) > 100.

Step 3: As this is a one-tailed test, \(\alpha\) = 100% - 95% = 5%. This can be used to determine the critical value.

1 - \(\alpha\) = 1 - 0.05 = 0.95

0.95 gives the required area under the curve. Now using a normal distribution table, the area 0.95 is at z = 1.645. A similar process can be followed for a t-test. The only additional requirement is to calculate the degrees of freedom given by n - 1.

Step 4: Calculate the z test statistic. The z test is used because the sample size is 30 (n ≥ 30) and the population standard deviation is known, along with the sample and population means.

z = \(\frac{\overline{x}-\mu}{\frac{\sigma}{\sqrt{n}}}\).

\(\mu\) = 100, \(\overline{x}\) = 112.5, n = 30, \(\sigma\) = 15

z = \(\frac{112.5-100}{\frac{15}{\sqrt{30}}}\) = 4.56

Step 5: Conclusion. As 4.56 > 1.645 thus, the null hypothesis can be rejected.
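The five steps above can be reproduced in a few lines of Python, with the standard library's `NormalDist` standing in for the normal distribution table:

```python
from statistics import NormalDist

mu, xbar, n, sigma = 100, 112.5, 30, 15

# Steps 1-2: H0: mu = 100 vs H1: mu > 100 (right-tailed)
# Step 3: critical value for alpha = 0.05 from the standard normal distribution
critical = NormalDist().inv_cdf(0.95)     # ≈ 1.645, as in the table lookup

# Step 4: z test statistic
z = (xbar - mu) / (sigma / n ** 0.5)

# Step 5: compare and conclude
print(f"z = {z:.2f}, critical value = {critical:.3f}")
print("reject H0" if z > critical else "fail to reject H0")
```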

Hypothesis Testing and Confidence Intervals

Confidence levels form an important part of hypothesis testing. This is because the alpha level can be determined from a given confidence level. Suppose the confidence level is 95%. Subtract it from 100%. This gives 100 - 95 = 5%, or 0.05, which is the alpha value. In a two-tailed test, this area is split between the two tails, so each tail holds 0.05 / 2 = 0.025.

Related Articles:

  • Probability and Statistics
  • Data Handling

Important Notes on Hypothesis Testing

  • Hypothesis testing is a technique that is used to verify whether the results of an experiment are statistically significant.
  • It involves the setting up of a null hypothesis and an alternate hypothesis.
  • There are three types of tests that can be conducted under hypothesis testing - z test, t test, and chi square test.
  • Hypothesis testing can be classified as right tail, left tail, and two tail tests.

Examples on Hypothesis Testing

  • Example 1: The average weight of a dumbbell in a gym is 90 lbs. However, a physical trainer believes that the average weight might be higher. A random sample of 5 dumbbells is taken, with an average weight of 110 lbs and a standard deviation of 18 lbs. Using hypothesis testing, check if the physical trainer's claim can be supported at a 95% confidence level. Solution: As the sample size is less than 30, the t test is used. \(H_{0}\): \(\mu\) = 90, \(H_{1}\): \(\mu\) > 90. \(\overline{x}\) = 110, \(\mu\) = 90, n = 5, s = 18, \(\alpha\) = 0.05. Using the t-distribution table, the critical value is 2.132. t = \(\frac{\overline{x}-\mu}{\frac{s}{\sqrt{n}}}\) = 2.484. As 2.484 > 2.132, the null hypothesis is rejected. Answer: The average weight of the dumbbells may be greater than 90 lbs.
  • Example 2: The average score on a test is 80 with a standard deviation of 10. With a new teaching curriculum introduced, it is believed that this score will change. On randomly testing the scores of 36 students, the mean was found to be 88. At a 0.05 significance level, is there any evidence to support this claim? Solution: This is an example of two-tailed hypothesis testing, and the z test will be used. \(H_{0}\): \(\mu\) = 80, \(H_{1}\): \(\mu\) ≠ 80. \(\overline{x}\) = 88, \(\mu\) = 80, n = 36, \(\sigma\) = 10. The area in each tail is \(\alpha\) / 2 = 0.05 / 2 = 0.025, so the critical value from the normal distribution table is 1.96. z = \(\frac{\overline{x}-\mu}{\frac{\sigma}{\sqrt{n}}}\) = \(\frac{88-80}{\frac{10}{\sqrt{36}}}\) = 4.8. As 4.8 > 1.96, the null hypothesis is rejected. Answer: There is a difference in the scores after the new curriculum was introduced.
  • Example 3: The average score of a class is 90. However, a teacher believes that the average score might be lower. The scores of 6 students were randomly measured. The mean was 82 with a standard deviation of 18. At a 0.05 significance level, use hypothesis testing to check if this claim is true. Solution: The t test will be used. \(H_{0}\): \(\mu\) = 90, \(H_{1}\): \(\mu\) < 90. \(\overline{x}\) = 82, \(\mu\) = 90, n = 6, s = 18. The critical value from the t table is -2.015. t = \(\frac{\overline{x}-\mu}{\frac{s}{\sqrt{n}}}\) = \(\frac{82-90}{\frac{18}{\sqrt{6}}}\) = -1.088. As -1.088 > -2.015, we fail to reject the null hypothesis. Answer: There is not enough evidence to support the claim.
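The three worked examples can be verified numerically; the critical values 2.132, 1.96, and -2.015 are taken from the t and z tables as in the solutions above:

```python
import math

def t_stat(xbar, mu, s, n):
    """One-sample t (or z) statistic: (xbar - mu) / (s / sqrt(n))."""
    return (xbar - mu) / (s / math.sqrt(n))

# Example 1: right-tailed t test, df = 4, critical value 2.132
t1 = t_stat(110, 90, 18, 5)
assert t1 > 2.132                 # reject H0

# Example 2: two-tailed z test, critical value 1.96 (sigma known, n = 36)
z2 = t_stat(88, 80, 10, 36)
assert abs(z2) > 1.96             # reject H0

# Example 3: left-tailed t test, df = 5, critical value -2.015
t3 = t_stat(82, 90, 18, 6)
assert t3 > -2.015                # fail to reject H0

print(round(t1, 2), round(z2, 1), round(t3, 2))
```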


FAQs on Hypothesis Testing

What is Hypothesis Testing?

Hypothesis testing in statistics is a tool that is used to make inferences about the population data. It is also used to check if the results of an experiment are valid.

What is the z Test in Hypothesis Testing?

The z test in hypothesis testing is used to find the z test statistic for normally distributed data . The z test is used when the standard deviation of the population is known and the sample size is greater than or equal to 30.

What is the t Test in Hypothesis Testing?

The t test in hypothesis testing is used when the data follow a Student's t distribution. It is used when the sample size is less than 30 and the standard deviation of the population is not known.

What is the formula for z test in Hypothesis Testing?

The formula for a one sample z test in hypothesis testing is z = \(\frac{\overline{x}-\mu}{\frac{\sigma}{\sqrt{n}}}\) and for two samples is z = \(\frac{(\overline{x_{1}}-\overline{x_{2}})-(\mu_{1}-\mu_{2})}{\sqrt{\frac{\sigma_{1}^{2}}{n_{1}}+\frac{\sigma_{2}^{2}}{n_{2}}}}\).

What is the p Value in Hypothesis Testing?

The p value helps to determine if the test results are statistically significant or not. In hypothesis testing, the null hypothesis can either be rejected or not rejected based on the comparison between the p value and the alpha level.

What is One Tail Hypothesis Testing?

When the rejection region is only on one side of the distribution curve then it is known as one tail hypothesis testing. The right tail test and the left tail test are two types of directional hypothesis testing.

What is the Alpha Level in Two Tail Hypothesis Testing?

To get the alpha level in a two tail hypothesis testing divide \(\alpha\) by 2. This is done as there are two rejection regions in the curve.


Statistics By Jim

Making statistics intuitive

Statistical Hypothesis Testing Overview

By Jim Frost 59 Comments

In this blog post, I explain why you need to use statistical hypothesis testing and help you navigate the essential terminology. Hypothesis testing is a crucial procedure to perform when you want to make inferences about a population using a random sample. These inferences include estimating population properties such as the mean, differences between means, proportions, and the relationships between variables.

This post provides an overview of statistical hypothesis testing. If you need to perform hypothesis tests, consider getting my book, Hypothesis Testing: An Intuitive Guide .

Why You Should Perform Statistical Hypothesis Testing

(Figure: mean drug scores by group. Hypothesis testing determines whether the difference between the means is statistically significant.)

Hypothesis testing is a form of inferential statistics that allows us to draw conclusions about an entire population based on a representative sample. You gain tremendous benefits by working with a sample. In most cases, it is simply impossible to observe the entire population to understand its properties. The only alternative is to collect a random sample and then use statistics to analyze it.

While samples are much more practical and less expensive to work with, there are trade-offs. When you estimate the properties of a population from a sample, the sample statistics are unlikely to equal the actual population value exactly. For instance, your sample mean is unlikely to equal the population mean. The difference between the sample statistic and the population value is the sampling error.

Differences that researchers observe in samples might be due to sampling error rather than representing a true effect at the population level. If sampling error causes the observed difference, the next time someone performs the same experiment the results might be different. Hypothesis testing incorporates estimates of the sampling error to help you make the correct decision. Learn more about Sampling Error .

For example, if you are studying the proportion of defects produced by two manufacturing methods, any difference you observe between the two sample proportions might be sample error rather than a true difference. If the difference does not exist at the population level, you won’t obtain the benefits that you expect based on the sample statistics. That can be a costly mistake!

Let’s cover some basic hypothesis testing terms that you need to know.

Background information : Difference between Descriptive and Inferential Statistics and Populations, Parameters, and Samples in Inferential Statistics

Hypothesis Testing

Hypothesis testing is a statistical analysis that uses sample data to assess two mutually exclusive theories about the properties of a population. Statisticians call these theories the null hypothesis and the alternative hypothesis. A hypothesis test assesses your sample statistic and factors in an estimate of the sampling error to determine which hypothesis the data support.

When you can reject the null hypothesis, the results are statistically significant, and your data support the theory that an effect exists at the population level.

The effect is the difference between the population value and the null hypothesis value. The effect is also known as the population effect or the difference. For example, the mean difference between the health outcome for a treatment group and a control group is the effect.

Typically, you do not know the size of the actual effect. However, you can use a hypothesis test to help you determine whether an effect exists and to estimate its size. Hypothesis tests convert your sample effect into a test statistic, which the test evaluates for statistical significance. Learn more about Test Statistics .

An effect can be statistically significant, but that doesn’t necessarily indicate that it is important in a real-world, practical sense. For more information, read my post about Statistical vs. Practical Significance .

Null Hypothesis

The null hypothesis is one of two mutually exclusive theories about the properties of the population in hypothesis testing. Typically, the null hypothesis states that there is no effect (i.e., the effect size equals zero). The null is often signified by H 0 .

In all hypothesis testing, the researchers are testing an effect of some sort. The effect can be the effectiveness of a new vaccination, the durability of a new product, the proportion of defects in a manufacturing process, and so on. There is some benefit or difference that the researchers hope to identify.

However, it’s possible that there is no effect or no difference between the experimental groups. In statistics, we call this lack of an effect the null hypothesis. Therefore, if you can reject the null, you can favor the alternative hypothesis, which states that the effect exists (doesn’t equal zero) at the population level.

You can think of the null as the default theory that requires sufficiently strong evidence against it before you can reject it.

For example, in a 2-sample t-test, the null often states that the difference between the two means equals zero.

When you can reject the null hypothesis, your results are statistically significant. Learn more about Statistical Significance: Definition & Meaning .

Related post : Understanding the Null Hypothesis in More Detail

Alternative Hypothesis

The alternative hypothesis is the other theory about the properties of the population in hypothesis testing. Typically, the alternative hypothesis states that a population parameter does not equal the null hypothesis value. In other words, there is a non-zero effect. If your sample contains sufficient evidence, you can reject the null and favor the alternative hypothesis. The alternative is often identified with H 1 or H A .

For example, in a 2-sample t-test, the alternative often states that the difference between the two means does not equal zero.

You can specify either a one- or two-tailed alternative hypothesis:

If you perform a two-tailed hypothesis test, the alternative states that the population parameter does not equal the null value. For example, when the alternative hypothesis is H A : μ ≠ 0, the test can detect differences both greater than and less than the null value.

A one-tailed alternative has more power to detect an effect but it can test for a difference in only one direction. For example, H A : μ > 0 can only test for differences that are greater than zero.
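As a minimal sketch using only Python's standard library (the z value of 1.8 is an invented example, not from the post), here is how the two alternatives translate into p-values for the same test statistic:

```python
from statistics import NormalDist

z = 1.8                      # hypothetical observed test statistic
phi = NormalDist().cdf       # standard normal CDF

# Two-tailed: H_A mu != 0 detects differences in either direction.
p_two_tailed = 2 * (1 - phi(abs(z)))

# One-tailed: H_A mu > 0 detects only differences greater than zero,
# but the same z yields a smaller p-value.
p_one_tailed = 1 - phi(z)

print(round(p_two_tailed, 4), round(p_one_tailed, 4))   # → 0.0719 0.0359
```

The one-tailed p-value is half the two-tailed one, which is the extra power in action: the same evidence counts as stronger, but only in the direction you committed to in advance.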

Related posts : Understanding T-tests and One-Tailed and Two-Tailed Hypothesis Tests Explained

P-values

P-values are the probability that you would obtain the effect observed in your sample, or larger, if the null hypothesis is correct. In simpler terms, p-values tell you how strongly your sample data contradict the null. Lower p-values represent stronger evidence against the null. You use P-values in conjunction with the significance level to determine whether your data favor the null or alternative hypothesis.

Related post : Interpreting P-values Correctly

Significance Level (Alpha)

The significance level, also known as alpha or α, is an evidence threshold that you set before the analysis. It equals the probability of rejecting the null hypothesis when it is actually true. For instance, a significance level of 0.05 signifies a 5% risk of deciding that an effect exists when it does not exist.

Use p-values and significance levels together to help you determine which hypothesis the data support. If the p-value is less than your significance level, you can reject the null and conclude that the effect is statistically significant. In other words, the evidence in your sample is strong enough to be able to reject the null hypothesis at the population level.

Related posts : Graphical Approach to Significance Levels and P-values and Conceptual Approach to Understanding Significance Levels

Types of Errors in Hypothesis Testing

Statistical hypothesis tests are not 100% accurate because they use a random sample to draw conclusions about entire populations. There are two types of errors related to drawing an incorrect conclusion.

  • False positives: You reject a null that is true. Statisticians call this a Type I error . The Type I error rate equals your significance level or alpha (α).
  • False negatives: You fail to reject a null that is false. Statisticians call this a Type II error. Generally, you do not know the Type II error rate. However, it is a larger risk when you have a small sample size, noisy data, or a small effect size. The Type II error rate is also known as beta (β).
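A small simulation makes the first bullet concrete (the sample size, sigma, and simulation count below are invented for illustration): when the null hypothesis is actually true, a two-tailed z-test at α = 0.05 rejects it, a Type I error, in roughly 5% of samples.

```python
import random
from statistics import NormalDist, mean

random.seed(1)
ALPHA, N, SIGMA, SIMS = 0.05, 25, 10, 4000
phi = NormalDist().cdf

false_positives = 0
for _ in range(SIMS):
    # The null is TRUE here: the population mean really is 0.
    sample = [random.gauss(0, SIGMA) for _ in range(N)]
    z = mean(sample) / (SIGMA / N ** 0.5)
    p = 2 * (1 - phi(abs(z)))          # two-tailed p-value
    if p <= ALPHA:
        false_positives += 1           # rejected a true null: Type I error

print(false_positives / SIMS)          # close to ALPHA = 0.05
```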

Statistical power is the probability that a hypothesis test correctly infers that a sample effect exists in the population. In other words, the test correctly rejects a false null hypothesis. Consequently, power is inversely related to a Type II error. Power = 1 – β. Learn more about Power in Statistics .
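To sketch the power relationship, here is a hypothetical Monte Carlo estimate (the effect size, sample size, and simulation count are all invented): with a true effect present, the fraction of samples in which the test correctly rejects the false null estimates Power = 1 − β.

```python
import random
from statistics import NormalDist, mean

random.seed(7)
ALPHA, N, SIMS = 0.05, 50, 2000
TRUE_EFFECT = 0.5          # true mean sits 0.5 SDs away from the null value of 0
phi = NormalDist().cdf

rejections = 0
for _ in range(SIMS):
    sample = [random.gauss(TRUE_EFFECT, 1.0) for _ in range(N)]
    z = mean(sample) / (1.0 / N ** 0.5)        # sigma assumed known and equal to 1
    if 2 * (1 - phi(abs(z))) <= ALPHA:
        rejections += 1                        # correctly rejected a false null

power_estimate = rejections / SIMS             # estimates 1 - beta
print(power_estimate)
```

Shrinking the sample size or the effect size in this sketch lowers the estimate, which is the "larger risk" of a Type II error described above.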

Related posts : Types of Errors in Hypothesis Testing and Estimating a Good Sample Size for Your Study Using Power Analysis

Which Type of Hypothesis Test is Right for You?

There are many different types of procedures you can use. The correct choice depends on your research goals and the data you collect. Do you need to understand the mean or the differences between means? Or, perhaps you need to assess proportions. You can even use hypothesis testing to determine whether the relationships between variables are statistically significant.

To choose the proper statistical procedure, you’ll need to assess your study objectives and collect the correct type of data . This background research is necessary before you begin a study.

Related Post : Hypothesis Tests for Continuous, Binary, and Count Data

Statistical tests are crucial when you want to use sample data to make conclusions about a population because these tests account for sample error. Using significance levels and p-values to determine when to reject the null hypothesis improves the probability that you will draw the correct conclusion.

To see an alternative approach to these traditional hypothesis testing methods, learn about bootstrapping in statistics !

If you want to see examples of hypothesis testing in action, I recommend the following posts that I have written:

  • How Effective Are Flu Shots? This example shows how you can use statistics to test proportions.
  • Fatality Rates in Star Trek . This example shows how to use hypothesis testing with categorical data.
  • Busting Myths About the Battle of the Sexes . A fun example based on a Mythbusters episode that assesses continuous data using several different tests.
  • Are Yawns Contagious? Another fun example inspired by a Mythbusters episode.


Reader Interactions

January 14, 2024 at 8:43 am

Hello professor Jim, how are you doing! Pls. What are the properties of a population and their examples? Thanks for your time and understanding.

January 14, 2024 at 12:57 pm

Please read my post about Populations vs. Samples for more information and examples.

Also, please note there is a search bar in the upper-right margin of my website. Use that to search for topics.

July 5, 2023 at 7:05 am

Hello, I have a question as I read your post. You say in p-values section

“P-values are the probability that you would obtain the effect observed in your sample, or larger, if the null hypothesis is correct. In simpler terms, p-values tell you how strongly your sample data contradict the null. Lower p-values represent stronger evidence against the null.”

But according to your definition of effect, the null states that an effect does not exist, correct? So what I assume you want to say is that “P-values are the probability that you would obtain the effect observed in your sample, or larger, if the null hypothesis is **incorrect**.”

July 6, 2023 at 5:18 am

Hi Shrinivas,

The correct definition of p-value is that it is a probability that exists in the context of a true null hypothesis. So, the quotation is correct in stating “if the null hypothesis is correct.”

Essentially, the p-value tells you the likelihood of your observed results (or more extreme) if the null hypothesis is true. It gives you an idea of whether your results are surprising or unusual if there is no effect.

Hence, with sufficiently low p-values, you reject the null hypothesis because it’s telling you that your sample results were unlikely to have occurred if there was no effect in the population.

I hope that helps make it more clear. If not, let me know I’ll attempt to clarify!

May 8, 2023 at 12:47 am

Thanks a lot Ny best regards

May 7, 2023 at 11:15 pm

Hi Jim Can you tell me something about size effect? Thanks

May 8, 2023 at 12:29 am

Here’s a post that I’ve written about Effect Sizes that will hopefully tell you what you need to know. Please read that. Then, if you have any more specific questions about effect sizes, please post them there. Thanks!

January 7, 2023 at 4:19 pm

Hi Jim, I have only read two pages so far but I am really amazed because in few paragraphs you made me clearly understand the concepts of months of courses I received in biostatistics! Thanks so much for this work you have done it helps a lot!

January 10, 2023 at 3:25 pm

Thanks so much!

June 17, 2021 at 1:45 pm

Can you help in the following question: Rocinante36 is priced at ₹7 lakh and has been designed to deliver a mileage of 22 km/litre and a top speed of 140 km/hr. Formulate the null and alternative hypotheses for mileage and top speed to check whether the new models are performing as per the desired design specifications.

April 19, 2021 at 1:51 pm

Its indeed great to read your work statistics.

I have a doubt regarding the one sample t-test. So as per your book on hypothesis testing with reference to page no 45, you have mentioned the difference between “the sample mean and the hypothesised mean is statistically significant”. So as per my understanding it should be quoted like “the difference between the population mean and the hypothesised mean is statistically significant”. The catch here is the hypothesised mean represents the sample mean.

Please help me understand this.

Regards Rajat

April 19, 2021 at 3:46 pm

Thanks for buying my book. I’m so glad it’s been helpful!

The test is performed on the sample but the results apply to the population. Hence, if the difference between the sample mean (observed in your study) and the hypothesized mean is statistically significant, that suggests that population does not equal the hypothesized mean.

For one sample tests, the hypothesized mean is not the sample mean. It is a mean that you want to use for the test value. It usually represents a value that is important to your research. In other words, it’s a value that you pick for some theoretical/practical reasons. You pick it because you want to determine whether the population mean is different from that particular value.

I hope that helps!

November 5, 2020 at 6:24 am

Jim, you are such a magnificent statistician/economist/econometrician/data scientist etc whatever profession. Your work inspires and simplifies the lives of so many researchers around the world. I truly admire you and your work. I will buy a copy of each book you have on statistics or econometrics. Keep doing the good work. Remain ever blessed

November 6, 2020 at 9:47 pm

Hi Renatus,

Thanks so much for you very kind comments. You made my day!! I’m so glad that my website has been helpful. And, thanks so much for supporting my books! 🙂

November 2, 2020 at 9:32 pm

Hi Jim, I hope you are aware of 2019 American Statistical Association’s official statement on Statistical Significance: https://www.tandfonline.com/doi/full/10.1080/00031305.2019.1583913 In case you do not bother reading the full article, may I quote you the core message here: “We conclude, based on our review of the articles in this special issue and the broader literature, that it is time to stop using the term “statistically significant” entirely. Nor should variants such as “significantly different,” “p < 0.05,” and “nonsignificant” survive, whether expressed in words, by asterisks in a table, or in some other way."

With best wishes,

November 3, 2020 at 2:09 am

I’m definitely aware of the debate surrounding how to use p-values most effectively. However, I need to correct you on one point. The link you provide is NOT a statement by the American Statistical Association. It is an editorial by several authors.

There is considerable debate over this issue. There are problems with p-values. However, as the authors state themselves, much of the problem is over people’s mindsets about how to use p-values and their incorrect interpretations about what statistical significance does and does not mean.

If you were to read my website more thoroughly, you’d be aware that I share many of their concerns and I address them in multiple posts. One of the authors’ key points is the need to be thoughtful and conduct thoughtful research and analysis. I emphasize this aspect in multiple posts on this topic. I’ll ask you to read the following three because they all address some of the authors’ concerns and suggestions. But you might run across others to read as well.

Five Tips for Using P-values to Avoid Being Misled How to Interpret P-values Correctly P-values and the Reproducibility of Experimental Results

September 24, 2020 at 11:52 pm

HI Jim, i just want you to know that you made explanation for Statistics so simple! I should say lesser and fewer words that reduce the complexity. All the best! 🙂

September 25, 2020 at 1:03 am

Thanks, Rene! Your kind words mean a lot to me! I’m so glad it has been helpful!

September 23, 2020 at 2:21 am

Honestly, I never understood stats during my entire M.Ed course and was another nightmare for me. But how easily you have explained each concept, I have understood stats way beyond my imagination. Thank you so much for helping ignorant research scholars like us. Looking forward to get hardcopy of your book. Kindly tell is it available through flipkart?

September 24, 2020 at 11:14 pm

I’m so happy to hear that my website has been helpful!

I checked on flipkart and it appears like my books are not available there. I’m never exactly sure where they’re available due to the vagaries of different distribution channels. They are available on Amazon in India.

Introduction to Statistics: An Intuitive Guide (Amazon IN) Hypothesis Testing: An Intuitive Guide (Amazon IN)

July 26, 2020 at 11:57 am

Dear Jim I am a teacher from India . I don’t have any background in statistics, and still I should tell that in a single read I can follow your explanations . I take my entire biostatistics class for botany graduates with your explanations. Thanks a lot. May I know how I can avail your books in India

July 28, 2020 at 12:31 am

Right now my books are only available as ebooks from my website. However, soon I’ll have some exciting news about other ways to obtain it. Stay tuned! I’ll announce it on my email list. If you’re not already on it, you can sign up using the form that is in the right margin of my website.

June 22, 2020 at 2:02 pm

Also can you please let me if this book covers topics like EDA and principal component analysis?

June 22, 2020 at 2:07 pm

This book doesn’t cover principal components analysis. Although, I wouldn’t really classify that as a hypothesis test. In the future, I might write a multivariate analysis book that would cover this and others. But, that’s well down the road.

My Introduction to Statistics covers EDA. That’s the largely graphical look at your data that you often do prior to hypothesis testing. The Introduction book perfectly leads right into the Hypothesis Testing book.

June 22, 2020 at 1:45 pm

Thanks for the detailed explanation. It does clear my doubts. I saw that your book related to hypothesis testing has the topics that I am studying currently. I am looking forward to purchasing it.

Regards, Take Care

June 19, 2020 at 1:03 pm

For this particular article I did not understand a couple of statements and it would great if you could help: 1)”If sample error causes the observed difference, the next time someone performs the same experiment the results might be different.” 2)”If the difference does not exist at the population level, you won’t obtain the benefits that you expect based on the sample statistics.”

I discovered your articles by chance and now I keep coming back to read & understand statistical concepts. These articles are very informative & easy to digest. Thanks for the simplifying things.

June 20, 2020 at 9:53 pm

I’m so happy to hear that you’ve found my website to be helpful!

To answer your questions, keep in mind that a central tenet of inferential statistics is that the random sample a study drew was only one of an infinite number of possible samples it could’ve drawn. Each random sample produces different results. Most results will cluster around the population value assuming they used good methodology. However, random sampling error always exists and makes it so that population estimates from a sample almost never exactly equal the correct population value.

So, imagine that we’re studying a medication and comparing the treatment and control groups. Suppose that the medicine is truly not effective and that the population difference between the treatment and control group is zero (i.e., no difference.) Despite the true difference being zero, most sample estimates will show some degree of either a positive or negative effect thanks to random sampling error. So, just because a study has an observed difference does not mean that a difference exists at the population level. So, on to your questions:

1. If the observed difference is just random error, then it makes sense that if you collected another random sample, the difference could change. It could change from negative to positive, positive to negative, more extreme, less extreme, etc. However, if the difference exists at the population level, most random samples drawn from the population will reflect that difference. If the medicine has an effect, most random samples will reflect that fact and not bounce around on both sides of zero as much.

2. This is closely related to the previous answer. If there is no difference at the population level, but say you approve the medicine because of the observed effects in a sample. Even though your random sample showed an effect (which was really random error), that effect doesn’t exist. So, when you start using it on a larger scale, people won’t benefit from the medicine. That’s why it’s important to separate out what is easily explained by random error versus what is not easily explained by it.

I think reading my post about how hypothesis tests work will help clarify this process. Also, in about 24 hours (as I write this), I’ll be releasing my new ebook about Hypothesis Testing!

May 29, 2020 at 5:23 am

Hi Jim, I really enjoy your blog. Can you please link me on your blog where you discuss about Subgroup analysis and how it is done? I need to use non parametric and parametric statistical methods for my work and also do subgroup analysis in order to identify potential groups of patients that may benefit more from using a treatment than other groups.

May 29, 2020 at 2:12 pm

Hi, I don’t have a specific article about subgroup analysis. However, subgroup analysis is just the dividing up of a larger sample into subgroups and then analyzing those subgroups separately. You can use the various analyses I write about on the subgroups.

Alternatively, you can include the subgroups in regression analysis as an indicator variable and include that variable as a main effect and an interaction effect to see how the relationships vary by subgroup without needing to subdivide your data. I write about that approach in my article about comparing regression lines . This approach is my preferred approach when possible.

April 19, 2020 at 7:58 am

sir is confidence interval is a part of estimation?

April 17, 2020 at 3:36 pm

Sir can u plz briefly explain alternatives of hypothesis testing? I m unable to find the answer

April 18, 2020 at 1:22 am

Assuming you want to draw conclusions about populations by using samples (i.e., inferential statistics ), you can use confidence intervals and bootstrap methods as alternatives to the traditional hypothesis testing methods.

March 9, 2020 at 10:01 pm

Hi JIm, could you please help with activities that can best teach concepts of hypothesis testing through simulation, Also, do you have any question set that would enhance students intuition why learning hypothesis testing as a topic in introductory statistics. Thanks.

March 5, 2020 at 3:48 pm

Hi Jim, I’m studying multiple hypothesis testing & was wondering if you had any material that would be relevant. I’m more trying to understand how testing multiple samples simultaneously affects your results & more on the Bonferroni Correction

March 5, 2020 at 4:05 pm

I write about multiple comparisons (aka post hoc tests) in the ANOVA context . I don’t talk about Bonferroni Corrections specifically but I cover related types of corrections. I’m not sure if that exactly addresses what you want to know but is probably the closest I have already written. I hope it helps!

January 14, 2020 at 9:03 pm

Thank you! Have a great day/evening.

January 13, 2020 at 7:10 pm

Any help would be greatly appreciated. What is the difference between The Hypothesis Test and The Statistical Test of Hypothesis?

January 14, 2020 at 11:02 am

They sound like the same thing to me. Unless this is specialized terminology for a particular field or the author was intending something specific, I’d guess they’re one and the same.

April 1, 2019 at 10:00 am

so these are the only two forms of Hypothesis used in statistical testing?

April 1, 2019 at 10:02 am

Are you referring to the null and alternative hypothesis? If so, yes, that’s those are the standard hypotheses in a statistical hypothesis test.

April 1, 2019 at 9:57 am

year very insightful post, thanks for the write up

October 27, 2018 at 11:09 pm

hi there, am upcoming statistician, out of all blogs that i have read, i have found this one more useful as long as my problem is concerned. thanks so much

October 27, 2018 at 11:14 pm

Hi Stano, you’re very welcome! Thanks for your kind words. They mean a lot! I’m happy to hear that my posts were able to help you. I’m sure you will be a fantastic statistician. Best of luck with your studies!

October 26, 2018 at 11:39 am

Dear Jim, thank you very much for your explanations! I have a question. Can I use t-test to compare two samples in case each of them have right bias?

October 26, 2018 at 12:00 pm

Hi Tetyana,

You’re very welcome!

The term “right bias” is not a standard term. Do you by chance mean right skewed distributions? In other words, if you plot the distribution for each group on a histogram they have longer right tails? These are not the symmetrical bell-shape curves of the normal distribution.

If that’s the case, yes you can as long as you exceed a specific sample size within each group. I include a table that contains these sample size requirements in my post about nonparametric vs parametric analyses .

Bias in statistics refers to cases where an estimate of a value is systematically higher or lower than the true value. If this is the case, you might be able to use t-tests, but you’d need to be sure to understand the nature of the bias so you would understand what the results are really indicating.

I hope this helps!

April 2, 2018 at 7:28 am

Simple and upto the point 👍 Thank you so much.

April 2, 2018 at 11:11 am

Hi Kalpana, thanks! And I’m glad it was helpful!

March 26, 2018 at 8:41 am

Am I correct if I say: Alpha – Probability of wrongly rejection of null hypothesis P-value – Probability of wrongly acceptance of null hypothesis

March 28, 2018 at 3:14 pm

You’re correct about alpha. Alpha is the probability of rejecting the null hypothesis when the null is true.

Unfortunately, your definition of the p-value is a bit off. The p-value has a fairly convoluted definition. It is the probability of obtaining the effect observed in a sample, or more extreme, if the null hypothesis is true. The p-value does NOT indicate the probability that either the null or alternative is true or false. Although, those are very common misinterpretations. To learn more, read my post about how to interpret p-values correctly .

March 2, 2018 at 6:10 pm

I recently started reading your blog and it is very helpful to understand each concept of statistical tests in easy way with some good examples. Also, I recommend to other people go through all these blogs which you posted. Specially for those people who have not statistical background and they are facing to many problems while studying statistical analysis.

Thank you for your such good blogs.

March 3, 2018 at 10:12 pm

Hi Amit, I’m so glad that my blog posts have been helpful for you! It means a lot to me that you took the time to write such a nice comment! Also, thanks for recommending by blog to others! I try really hard to write posts about statistics that are easy to understand.

January 17, 2018 at 7:03 am

I recently started reading your blog and I find it very interesting. I am learning statistics by my own, and I generally do many google search to understand the concepts. So this blog is quite helpful for me, as it have most of the content which I am looking for.

January 17, 2018 at 3:56 pm

Hi Shashank, thank you! And, I’m very glad to hear that my blog is helpful!

January 2, 2018 at 2:28 pm

thank u very much sir.

January 2, 2018 at 2:36 pm

You’re very welcome, Hiral!

November 21, 2017 at 12:43 pm

Thank u so much sir….your posts always helps me to be a #statistician

November 21, 2017 at 2:40 pm

Hi Sachin, you’re very welcome! I’m happy that you find my posts to be helpful!

November 19, 2017 at 8:22 pm

great post as usual, but it would be nice to see an example.

November 19, 2017 at 8:27 pm

Thank you! At the end of this post, I have links to four other posts that show examples of hypothesis tests in action. You’ll find what you’re looking for in those posts!


Hypothesis Testing


A hypothesis test is a statistical inference method used to test the significance of a proposed (hypothesized) relation between population statistics (parameters) and their corresponding sample estimators . In other words, hypothesis tests are used to determine if there is enough evidence in a sample to prove a hypothesis true for the entire population.

The test considers two hypotheses: the null hypothesis , which is a statement meant to be tested, usually something like "there is no effect" with the intention of proving this false, and the alternate hypothesis , which is the statement meant to stand after the test is performed. The two hypotheses must be mutually exclusive ; moreover, in most applications, the two are complementary (one being the negation of the other). The test works by comparing the \(p\)-value to the level of significance (a chosen target). If the \(p\)-value is less than or equal to the level of significance, then the null hypothesis is rejected.

When analyzing data, it is usually only feasible to work with samples of a manageable size rather than the entire population. In some situations the quantities of interest follow a continuous or infinite distribution, so samples are used to judge the accuracy of the chosen test statistic. The method of hypothesis testing gives an advantage over simply guessing which distribution, or which parameters, the data follow.

Definitions and Methodology

Hypothesis Tests and Confidence Intervals

In statistical inference, properties (parameters) of a population are analyzed by sampling data sets. Given assumptions on the distribution, i.e. a statistical model of the data, certain hypotheses can be deduced from the known behavior of the model. These hypotheses must be tested against sampled data from the population.

The null hypothesis \((\)denoted \(H_0)\) is a statement that is assumed to be true. If the null hypothesis is rejected, then there is enough evidence (statistical significance) to accept the alternate hypothesis \((\)denoted \(H_1).\) Before doing any test for significance, both hypotheses must be clearly stated as non-conflicting, i.e. mutually exclusive, statements.

Rejecting the null hypothesis, given that it is true, is called a type I error, and its probability of occurrence is denoted \(\alpha\). Failing to reject the null hypothesis, given that it is false, is called a type II error, and its probability of occurrence is denoted \(\beta\). Also, \(\alpha\) is known as the significance level , and \(1-\beta\) is known as the power of the test.

| | \(H_0\) is true | \(H_0\) is false |
| --- | --- | --- |
| Reject \(H_0\) | Type I error | Correct decision |
| Fail to reject \(H_0\) | Correct decision | Type II error |

The test statistic is the standardized value computed from the sampled data under the assumption that the null hypothesis is true, for a chosen particular test. These tests depend on the statistic to be studied and the distribution it is assumed to follow, e.g. the population mean following a normal distribution.

The \(p\)-value is the probability of observing a test statistic as extreme as the one computed, or more extreme, in the direction of the alternate hypothesis, given that the null hypothesis is true. The critical value is the value of the assumed distribution of the test statistic such that the probability of making a type I error is small.
Methodology: given an estimator \(\hat \theta\) of a population statistic \(\theta\), following a probability distribution \(P(T)\), computed from a sample \(\mathcal{S},\) and given a significance level \(\alpha:\)

  • Define \(H_0\) and \(H_1,\) and compute the test statistic \(t^*.\)
  • \(p\)-value approach (most prevalent): find the \(p\)-value using \(t^*\) (right-tailed). If the \(p\)-value is at most \(\alpha,\) reject \(H_0\). Otherwise, reject \(H_1\).
  • Critical value approach: find the critical value by solving the equation \(P(T\geq t_\alpha)=\alpha\) (right-tailed). If \(t^*>t_\alpha\), reject \(H_0\). Otherwise, reject \(H_1\).

Note: failing to reject \(H_0\) only means inability to accept \(H_1\); it does not mean to accept \(H_0\).
Assume a population's cholesterol levels are normally distributed and various statistics are computed. From a sample of 100 subjects in the population, the sample mean was 214.12 mg/dL (milligrams per deciliter), with a sample standard deviation of 45.71 mg/dL. Perform a hypothesis test, with significance level 0.05, to test if there is enough evidence to conclude that the population mean is larger than 200 mg/dL.

Hypothesis Test: We will perform a hypothesis test using the \(p\)-value approach with significance level \(\alpha=0.05:\)

Define \(H_0\): \(\mu=200\). Define \(H_1\): \(\mu>200\).

Since our values are normally distributed, the test statistic is \(z^*=\frac{\bar X - \mu_0}{\frac{s}{\sqrt{n}}}=\frac{214.12 - 200}{\frac{45.71}{\sqrt{100}}}\approx 3.09\). Using a standard normal distribution, we find that our \(p\)-value is approximately \(0.001\). Since the \(p\)-value is at most \(\alpha=0.05,\) we reject \(H_0\).

Therefore, we can conclude that the test shows sufficient evidence to support the claim that \(\mu\) is larger than \(200\) mg/dL.
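As a check, the numbers in this example can be reproduced with Python's standard library (a sketch, not part of the original solution):

```python
from statistics import NormalDist
from math import sqrt

# Checking the worked z-test: sample of n = 100, right-tailed.
x_bar, mu_0, s, n = 214.12, 200, 45.71, 100
z_star = (x_bar - mu_0) / (s / sqrt(n))   # test statistic
p_value = 1 - NormalDist().cdf(z_star)    # right-tail probability

print(round(z_star, 2))   # 3.09
print(round(p_value, 3))  # 0.001
```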

If the sample size were smaller, the \(t\)-distribution should be used in place of the normal distribution. Also, a question asking whether the mean differs from a value in either direction must be handled with a two-tailed test instead.

Assume a population's cholesterol levels are recorded and various statistics are computed. From a sample of 25 subjects, the sample mean was 214.12 mg/dL (milligrams per deciliter), with a sample standard deviation of 45.71 mg/dL. Perform a hypothesis test, with significance level 0.05, to test if there is enough evidence to conclude that the population mean is not equal to 200 mg/dL.

Hypothesis Test: We will perform a hypothesis test using the \(p\)-value approach with significance level \(\alpha=0.05\) and the \(t\)-distribution with 24 degrees of freedom:

Define \(H_0\): \(\mu=200\). Define \(H_1\): \(\mu\neq 200\).

Using the \(t\)-distribution, the test statistic is \(t^*=\frac{\bar X - \mu_0}{\frac{s}{\sqrt{n}}}=\frac{214.12 - 200}{\frac{45.71}{\sqrt{25}}}\approx 1.54\). Using a \(t\)-distribution with 24 degrees of freedom, we find that our \(p\)-value is approximately \(2(0.068)=0.136\). We have multiplied by two since this is a two-tailed test, i.e. the mean could be either smaller or larger than 200. Since the \(p\)-value is larger than \(\alpha=0.05,\) we fail to reject \(H_0\).

Therefore, the test does not show sufficient evidence to support the claim that \(\mu\) is not equal to \(200\) mg/dL.
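The same check for the \(t\)-based example needs the \(t\)-distribution's tail probability, which the Python standard library does not provide. The sketch below approximates it by numerically integrating the \(t\) density (in practice one would use SciPy's `scipy.stats.t.sf` instead):

```python
from math import gamma, pi, sqrt

def t_pdf(x, df):
    """Density of Student's t-distribution with df degrees of freedom."""
    c = gamma((df + 1) / 2) / (sqrt(df * pi) * gamma(df / 2))
    return c * (1 + x * x / df) ** (-(df + 1) / 2)

def t_upper_tail(t, df, upper=60.0, steps=60000):
    """P(T >= t), approximated with the trapezoid rule; the tail
    beyond `upper` is negligible for moderate df."""
    h = (upper - t) / steps
    area = 0.5 * (t_pdf(t, df) + t_pdf(upper, df))
    for i in range(1, steps):
        area += t_pdf(t + i * h, df)
    return area * h

x_bar, mu_0, s, n = 214.12, 200, 45.71, 25
t_star = (x_bar - mu_0) / (s / sqrt(n))
p_two_tailed = 2 * t_upper_tail(t_star, df=n - 1)

print(round(t_star, 2))        # 1.54
print(round(p_two_tailed, 2))  # 0.14
```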

A two-tailed hypothesis test \((\)with significance level \(\alpha)\) for a population parameter \(\theta\) is equivalent to constructing a confidence interval \((\)with confidence level \(1-\alpha)\) for \(\theta\). If the hypothesized value of \(\theta\) falls inside the confidence interval, then the test fails to reject the null hypothesis \((\)with \(p\)-value greater than \(\alpha).\) Otherwise, if the hypothesized value does not fall in the confidence interval, then the null hypothesis is rejected in favor of the alternate \((\)with \(p\)-value at most \(\alpha).\)
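This equivalence can be illustrated numerically. A minimal sketch, reusing the n = 100 cholesterol sample with a two-sided test and a normal approximation:

```python
from statistics import NormalDist
from math import sqrt

# Two-sided 95% confidence interval for the mean, using the n = 100
# cholesterol sample (normal approximation, illustration only).
x_bar, s, n, alpha = 214.12, 45.71, 100, 0.05
z_crit = NormalDist().inv_cdf(1 - alpha / 2)   # ≈ 1.96
half_width = z_crit * s / sqrt(n)
lo, hi = x_bar - half_width, x_bar + half_width

print(round(lo, 2), round(hi, 2))  # 205.16 223.08
# The hypothesized mean 200 lies outside the interval, so a
# two-tailed test at this level would reject H0: mu = 200.
print(lo <= 200 <= hi)             # False
```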

  • Statistics (Estimation)
  • Normal Distribution
  • Correlation
  • Confidence Intervals



  • Indian J Crit Care Med
  • v.23(Suppl 3); 2019 Sep

An Introduction to Statistics: Understanding Hypothesis Testing and Statistical Errors

Priya Ranganathan

1 Department of Anesthesiology, Critical Care and Pain, Tata Memorial Hospital, Mumbai, Maharashtra, India

2 Department of Surgical Oncology, Tata Memorial Centre, Mumbai, Maharashtra, India

The second article in this series on biostatistics covers the concepts of sample, population, research hypotheses and statistical errors.

How to cite this article

Ranganathan P, Pramesh CS. An Introduction to Statistics: Understanding Hypothesis Testing and Statistical Errors. Indian J Crit Care Med 2019;23(Suppl 3):S230–S231.

Two papers quoted in this issue of the Indian Journal of Critical Care Medicine report the results of studies that aim to prove that a new intervention is better than (superior to) an existing treatment. In the ABLE study, the investigators wanted to show that transfusion of fresh red blood cells would be superior to standard-issue red cells in reducing 90-day mortality in ICU patients. 1 The PROPPR study was designed to prove that transfusion of a lower ratio of plasma and platelets to red cells would be superior to a higher ratio in decreasing 24-hour and 30-day mortality in critically ill patients. 2 These studies are known as superiority studies (as opposed to noninferiority or equivalence studies which will be discussed in a subsequent article).

SAMPLE VERSUS POPULATION

A sample represents a group of participants selected from the entire population. Since studies cannot be carried out on entire populations, researchers choose samples, which are representative of the population. This is similar to walking into a grocery store and examining a few grains of rice or wheat before purchasing an entire bag; we assume that the few grains that we select (the sample) are representative of the entire sack of grains (the population).

The results of the study are then extrapolated to generate inferences about the population. We do this using a process known as hypothesis testing. This means that the results of the study may not always be identical to the results we would expect to find in the population; i.e., there is the possibility that the study results may be erroneous.

HYPOTHESIS TESTING

A clinical trial begins with an assumption or belief, and then proceeds to either prove or disprove this assumption. In statistical terms, this belief or assumption is known as a hypothesis. Counterintuitively, what the researcher believes in (or is trying to prove) is called the “alternate” hypothesis, and the opposite is called the “null” hypothesis; every study has a null hypothesis and an alternate hypothesis. For superiority studies, the alternate hypothesis states that one treatment (usually the new or experimental treatment) is superior to the other; the null hypothesis states that there is no difference between the treatments (the treatments are equal). For example, in the ABLE study, we start by stating the null hypothesis—there is no difference in mortality between groups receiving fresh RBCs and standard-issue RBCs. We then state the alternate hypothesis—there is a difference between groups receiving fresh RBCs and standard-issue RBCs. It is important to note that we have stated that the groups are different, without specifying which group will be better than the other. This is known as a two-tailed hypothesis and it allows us to test for superiority on either side (using a two-sided test). This is because, when we start a study, we are not 100% certain that the new treatment can only be better than the standard treatment—it could be worse, and if so, the study should pick that up as well. One-tailed hypotheses and one-sided statistical testing are used for noninferiority studies, which will be discussed in a subsequent paper in this series.

STATISTICAL ERRORS

There are two possibilities to consider when interpreting the results of a superiority study. The first possibility is that there is truly no difference between the treatments but the study finds that they are different. This is called a Type-1 error or false-positive error or alpha error. This means falsely rejecting the null hypothesis.

The second possibility is that there is a difference between the treatments and the study does not pick up this difference. This is called a Type 2 error or false-negative error or beta error. This means falsely accepting the null hypothesis.

The power of the study is the ability to detect a difference between groups and is the converse of the beta error; i.e., power = 1-beta error. Alpha and beta errors are finalized when the protocol is written and form the basis for sample size calculation for the study. In an ideal world, we would not like any error in the results of our study; however, we would need to do the study in the entire population (infinite sample size) to be able to get a 0% alpha and beta error. These two errors enable us to do studies with realistic sample sizes, with the compromise that there is a small possibility that the results may not always reflect the truth. The basis for this will be discussed in a subsequent paper in this series dealing with sample size calculation.

Conventionally, type 1 or alpha error is set at 5%. This means, that at the end of the study, if there is a difference between groups, we want to be 95% certain that this is a true difference and allow only a 5% probability that this difference has occurred by chance (false positive). Type 2 or beta error is usually set between 10% and 20%; therefore, the power of the study is 90% or 80%. This means that if there is a difference between groups, we want to be 80% (or 90%) certain that the study will detect that difference. For example, in the ABLE study, sample size was calculated with a type 1 error of 5% (two-sided) and power of 90% (type 2 error of 10%) (1).
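The link between alpha, power, and sample size can be sketched with the usual normal-approximation formula for comparing two means. The effect size delta and standard deviation sigma below are hypothetical illustration values, not numbers from the ABLE study:

```python
from statistics import NormalDist
from math import ceil

def sample_size_per_group(delta, sigma, alpha=0.05, power=0.90):
    """Per-group n for a two-sided, two-sample z-test (normal
    approximation): n = 2 * (sigma * (z_alpha/2 + z_beta) / delta)^2."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # ≈ 1.96 for alpha = 0.05
    z_b = NormalDist().inv_cdf(power)          # ≈ 1.28 for power = 0.90
    return ceil(2 * (sigma * (z_a + z_b) / delta) ** 2)

# Hypothetical numbers, for illustration only:
print(sample_size_per_group(delta=5, sigma=20))  # ≈ 337 per group
```

Note how a smaller alpha or a higher power (smaller beta) pushes the required sample size up, which is the compromise described above.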

Table 1 gives a summary of the two types of statistical errors, with an example.

Statistical errors

(a) Types of statistical errors

  Study finds the null hypothesis true:
    - Null hypothesis is actually true: correct results!
    - Null hypothesis is actually false: falsely accepting null hypothesis - Type II error
  Study finds the null hypothesis false:
    - Null hypothesis is actually true: falsely rejecting null hypothesis - Type I error
    - Null hypothesis is actually false: correct results!

(b) Possible statistical errors in the ABLE trial

  Study finds no difference in mortality between groups receiving fresh RBCs and standard-issue RBCs:
    - Truth: there is no difference - correct results!
    - Truth: there is a difference - falsely accepting null hypothesis - Type II error
  Study finds a difference in mortality between groups receiving fresh RBCs and standard-issue RBCs:
    - Truth: there is no difference - falsely rejecting null hypothesis - Type I error
    - Truth: there is a difference - correct results!

In the next article in this series, we will look at the meaning and interpretation of ‘ p ’ value and confidence intervals for hypothesis testing.

Source of support: Nil

Conflict of interest: None

  • Hypothesis Testing: Definition, Uses, Limitations + Examples

busayo.longe

Hypothesis testing is as old as the scientific method and is at the heart of the research process. 

Research exists to validate or disprove assumptions about various phenomena. The process of validation involves testing and it is in this context that we will explore hypothesis testing. 

What is a Hypothesis? 

A hypothesis is a calculated prediction or assumption about a population parameter based on limited evidence. The whole idea behind hypothesis formulation is testing—this means the researcher subjects his or her calculated assumption to a series of evaluations to know whether it is true or false. 

Typically, every research starts with a hypothesis—the investigator makes a claim and experiments to prove that this claim is true or false . For instance, if you predict that students who drink milk before class perform better than those who don’t, then this becomes a hypothesis that can be confirmed or refuted using an experiment.  

Read: What is Empirical Research Study? [Examples & Method]

What are the Types of Hypotheses? 

1. Simple Hypothesis

Also known as a basic hypothesis, a simple hypothesis suggests that an independent variable is responsible for a corresponding dependent variable. In other words, an occurrence of the independent variable inevitably leads to an occurrence of the dependent variable. 

Typically, simple hypotheses are treated as generally true, and they establish a causal relationship between two variables. 

Examples of Simple Hypothesis  

  • Drinking soda and other sugary drinks can cause obesity. 
  • Smoking cigarettes daily leads to lung cancer.

2. Complex Hypothesis

A complex hypothesis is also known as a modal hypothesis. It accounts for the causal relationship between two independent variables and the resulting dependent variables. This means that the combination of the independent variables leads to the occurrence of the dependent variables. 

Examples of Complex Hypotheses  

  • Adults who do not smoke and drink are less likely to develop liver-related conditions.
  • Global warming causes icebergs to melt which in turn causes major changes in weather patterns.

3. Null Hypothesis

As the name suggests, a null hypothesis is formed when a researcher suspects that there’s no relationship between the variables in an observation. In this case, the purpose of the research is to support or reject this assumption. 

Examples of Null Hypothesis

  • There is no significant change in a student’s performance if they drink coffee or tea before classes. 
  • There’s no significant change in the growth of a plant if one uses distilled water only or vitamin-rich water. 
Read: Research Report: Definition, Types + [Writing Guide]

4. Alternative Hypothesis 

To refute a null hypothesis, the researcher has to come up with an opposite assumption—this assumption is known as the alternative hypothesis. This means if the null hypothesis says that A is false, the alternative hypothesis assumes that A is true. 

An alternative hypothesis can be directional or non-directional depending on the direction of the difference. A directional alternative hypothesis specifies the direction of the tested relationship, stating that one variable is predicted to be larger or smaller than the null value while a non-directional hypothesis only validates the existence of a difference without stating its direction. 

Examples of Alternative Hypotheses  

  • Starting your day with a cup of tea instead of a cup of coffee can make you more alert in the morning. 
  • The growth of a plant improves significantly when it receives distilled water instead of vitamin-rich water. 

5. Logical Hypothesis

Logical hypotheses are some of the most common types of calculated assumptions in systematic investigations. It is an attempt to use your reasoning to connect different pieces in research and build a theory using little evidence. In this case, the researcher uses any data available to him, to form a plausible assumption that can be tested. 

Examples of Logical Hypothesis

  • Waking up early helps you to have a more productive day. 
  • Beings from Mars would not be able to breathe the air in the atmosphere of the Earth. 

6. Empirical Hypothesis  

After forming a logical hypothesis, the next step is to create an empirical or working hypothesis. At this stage, your logical hypothesis undergoes systematic testing to prove or disprove the assumption. An empirical hypothesis is subject to several variables that can trigger changes and lead to specific outcomes. 

Examples of Empirical Testing 

  • People who eat more fish run faster than people who eat meat.
  • Women taking vitamin E grow hair faster than those taking vitamin K.

7. Statistical Hypothesis

When forming a statistical hypothesis, the researcher examines the portion of a population of interest and makes a calculated assumption based on the data from this sample. A statistical hypothesis is most common with systematic investigations involving a large target audience. Here, it’s impossible to collect responses from every member of the population so you have to depend on data from your sample and extrapolate the results to the wider population. 

Examples of Statistical Hypothesis  

  • 45% of students in Louisiana have middle-income parents. 
  • 80% of the UK’s population gets a divorce because of irreconcilable differences.

What is Hypothesis Testing? 

Hypothesis testing is an assessment method that allows researchers to determine the plausibility of a hypothesis. It involves testing an assumption about a specific population parameter to know whether it’s true or false. These population parameters include variance, standard deviation, and median. 

Typically, hypothesis testing starts with developing a null hypothesis and then performing several tests that support or reject the null hypothesis. The researcher uses test statistics to compare the association or relationship between two or more variables. 

Explore: Research Bias: Definition, Types + Examples

Researchers also use hypothesis testing to calculate the coefficient of variation and determine if the regression relationship and the correlation coefficient are statistically significant.

How Hypothesis Testing Works

The basis of hypothesis testing is to examine and analyze the null hypothesis and alternative hypothesis to know which one is the most plausible assumption. Since both assumptions are mutually exclusive, only one can be true. In other words, if the null hypothesis holds, the alternative cannot, and vice-versa. 

Interesting: 21 Chrome Extensions for Academic Researchers in 2021

What Are The Stages of Hypothesis Testing?  

To successfully confirm or refute an assumption, the researcher goes through five (5) stages of hypothesis testing; 

  • Determine the null hypothesis
  • Specify the alternative hypothesis
  • Set the significance level
  • Calculate the test statistics and corresponding P-value
  • Draw your conclusion
  • Determine the Null Hypothesis

Like we mentioned earlier, hypothesis testing starts with creating a null hypothesis, which assumes that the effect or relationship under investigation does not exist. For example, the null hypothesis (H0) could suggest that different subgroups in the research population react to a variable in the same way. 

  • Specify the Alternative Hypothesis

Once you know the variables for the null hypothesis, the next step is to determine the alternative hypothesis. The alternative hypothesis counters the null assumption by suggesting the statement or assertion is true. Depending on the purpose of your research, the alternative hypothesis can be one-sided or two-sided. 

Using the example we established earlier, the alternative hypothesis may argue that the different sub-groups react differently to the same variable based on several internal and external factors. 

  • Set the Significance Level

Many researchers set the significance level at 5%. This means they accept a 0.05 probability of rejecting the null hypothesis, and siding with the alternative hypothesis, even when the null hypothesis is actually true. 

Something to note here is that the smaller the significance level, the greater the burden of proof needed to reject the null hypothesis and support the alternative hypothesis.

Explore: What is Data Interpretation? + [Types, Method & Tools]
  • Calculate the Test Statistics and Corresponding P-Value 

Test statistics in hypothesis testing allow you to compare groups on the variables of interest, while the p-value is the probability of obtaining sample statistics at least as extreme as those observed if your null hypothesis is true. In this case, your test statistics can be based on the mean, median, and similar parameters. 

If your p-value is 0.65, for example, it means that if the null hypothesis were true, you would obtain a result at least as extreme as the one observed about 65 times in 100 by chance alone. 

  • Draw Your Conclusions

After conducting a series of tests, you should be able to accept or refute the hypothesis based on feedback and insights from your sample data.  

Applications of Hypothesis Testing in Research

Hypothesis testing isn’t only confined to numbers and calculations; it also has several real-life applications in business, manufacturing, advertising, and medicine. 

In a factory or other manufacturing plants, hypothesis testing is an important part of quality and production control before the final products are approved and sent out to the consumer. 

During ideation and strategy development, C-level executives use hypothesis testing to evaluate their theories and assumptions before any form of implementation. For example, they could leverage hypothesis testing to determine whether or not some new advertising campaign, marketing technique, etc. causes increased sales. 

In addition, hypothesis testing is used during clinical trials to prove the efficacy of a drug or new medical method before its approval for widespread human usage. 

What is an Example of Hypothesis Testing?

An employer claims that her workers are of above-average intelligence. She takes a random sample of 20 of them and gets the following results: 

Mean IQ Scores: 110

Standard Deviation: 15 

Mean Population IQ: 100

Step 1: Using the value of the mean population IQ, we establish the null hypothesis: the population mean IQ is 100.

Step 2: State the alternative hypothesis: the population mean IQ is greater than 100.

Step 3: State the alpha level as 0.05 or 5% 

Step 4: Find the rejection region (given by your alpha level above) from the z-table. An upper-tail area of 0.05 corresponds to a z-score of 1.645.

Step 5: Calculate the test statistic using the formula

Z = (110 − 100) ÷ (15 ÷ √20) = 10 ÷ 3.35 ≈ 2.99 

If the value of the test statistics is higher than the value of the rejection region, then you should reject the null hypothesis. If it is less, then you cannot reject the null. 

In this case, 2.99 > 1.645 so we reject the null. 
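This worked example is easy to verify in Python (carrying full precision gives z ≈ 2.98 rather than 2.99, a rounding artifact; the conclusion is unchanged):

```python
from math import sqrt
from statistics import NormalDist

# One-sample right-tailed z-test for the employer's IQ claim.
x_bar, mu, sd, n, alpha = 110, 100, 15, 20, 0.05
z = (x_bar - mu) / (sd / sqrt(n))
z_crit = NormalDist().inv_cdf(1 - alpha)   # ≈ 1.645

print(round(z, 2))  # 2.98 (the text's 2.99 comes from rounding 15 ÷ √20 first)
print(z > z_crit)   # True -> reject the null hypothesis
```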

Importance/Benefits of Hypothesis Testing 

The most significant benefit of hypothesis testing is that it allows you to evaluate the strength of your claim or assumption before implementing it in your data set. Also, hypothesis testing is the only valid method to prove that something “is or is not”. Other benefits include: 

  • Hypothesis testing provides a reliable framework for making any data decisions for your population of interest. 
  • It helps the researcher to successfully extrapolate data from the sample to the larger population. 
  • Hypothesis testing allows the researcher to determine whether the data from the sample is statistically significant. 
  • Hypothesis testing is one of the most important processes for measuring the validity and reliability of outcomes in any systematic investigation. 
  • It helps to provide links to the underlying theory and specific research questions.

Criticism and Limitations of Hypothesis Testing

Several limitations of hypothesis testing can affect the quality of data you get from this process. Some of these limitations include: 

  • The interpretation of a p-value for observation depends on the stopping rule and definition of multiple comparisons. This makes it difficult to calculate since the stopping rule is subject to numerous interpretations, plus “multiple comparisons” are unavoidably ambiguous. 
  • Conceptual issues often arise in hypothesis testing, especially if the researcher merges Fisher and Neyman-Pearson’s methods which are conceptually distinct. 
  • In an attempt to focus on the statistical significance of the data, the researcher might ignore the estimation and confirmation by repeated experiments.
  • Hypothesis testing can trigger publication bias, especially when it requires statistical significance as a criterion for publication.
  • When used to detect whether a difference exists between groups, hypothesis testing can trigger absurd assumptions that affect the reliability of your observation.



Hypothesis testing explained in 4 parts

Yuzheng Sun, PhD

Data scientists are expected to understand Hypothesis Testing well, but often we do not. This is mainly because our textbooks blend two schools of thought – p-value and significance testing vs. hypothesis testing – inconsistently.

For example, some questions are not obvious unless you have thought through them before:

Are power or beta dependent on the null hypothesis?

Can we accept the null hypothesis? Why?

How does MDE change with alpha holding beta constant?

Why do we use standard error in Hypothesis Testing but not the standard deviation?

Why can’t we be specific about the alternative hypothesis so we can properly model it?

Why is the fundamental tradeoff of the Hypothesis Testing about mistake vs. discovery, not about alpha vs. beta?

Addressing this problem is not easy. The topic of Hypothesis Testing is convoluted. In this article, we will introduce 10 concepts incrementally, aided by visualizations and intuitive explanations. After this article, you will have clear answers to the questions above, understand these concepts on a first-principles level, and be able to explain them well to your stakeholders.

We break this article into four parts.

Set up the question properly using core statistical concepts, and connect them to Hypothesis Testing, while striking a balance between technically correct and simplicity. Specifically, 

We emphasize a clear distinction between the standard deviation and the standard error, and why the latter is used in Hypothesis Testing

We explain fully when can you “accept” a hypothesis, when shall you say “failing to reject” instead of “accept”, and why

Introduce alpha, type I error, and the critical value with the null hypothesis

Introduce beta, type II error, and power with the alternative hypothesis

Introduce minimum detectable effects and the relationship between the factors with power calculations, with a high-level summary and practical recommendations

Part 1 - Hypothesis Testing, the central limit theorem, population, sample, standard deviation, and standard error

In Hypothesis Testing, we begin with a null hypothesis , which generally asserts that there is no effect between our treatment and control groups. Commonly, this is expressed as the difference in means between the treatment and control groups being zero.

The central limit theorem suggests an important property of this difference in means — given a sufficiently large sample size, the underlying distribution of this difference in means will approximate a normal distribution, regardless of the population's original distribution. There are two notes:

1. The distribution of the population for the treatment and control groups can vary, but the observed means (when you observe many samples and calculate many means) are always normally distributed with a large enough sample. Below is a chart, where the n=10 and n=30 correspond to the underlying distribution of the sample means.

Central Limit Theorem

2. Pay attention to “the underlying distribution”. Standard deviation vs. standard error is a potentially confusing concept. Let’s clarify.

Standard deviation vs. Standard error

Let’s declare our null hypothesis as having no treatment effect. Then, to simplify, let’s propose the following normal distribution with a mean of 0 and a standard deviation of 1 as the range of possible outcomes with probabilities associated with this null hypothesis.

Standard Deviation v Standard Error

The language around population, sample, group, and estimators can get confusing. Again, to simplify, let’s forget that the null hypothesis is about the mean estimator, and declare that we can either observe the mean hypothesis once or many times. When we observe it many times, it forms a sample*, and our goal is to make decisions based on this sample.

* For technical folks, the observation is actually about a single sample, many samples are a group, and the difference in groups is the distribution we are talking about as the mean hypothesis. The red curve represents the distribution of the estimator of this difference, and then we can have another sample consisting of many observations of this estimator. In my simplified language, the red curve is the distribution of the estimator, and the blue curve with sample size is the repeated observations of it. If you have a better way to express these concepts without causing confusion, please suggest one.

This probability density function means that if there is one realization from this distribution, the realization can be anywhere on the x-axis, with the relative likelihood given on the y-axis.

If we draw multiple observations, they form a sample. Each observation in this sample follows the property of this underlying distribution – more likely to be close to 0, and equally likely to be on either side, which makes positive and negative deviations cancel out, so the mean of this sample is even more centered around 0.

We use the standard error to represent the error of our “sample mean”.

The standard error = the standard deviation of the observed sample / sqrt(sample size).

For a sample size of 30, the standard error is roughly 0.18. Compared with the underlying distribution, the distribution of the sample mean is much narrower.
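A quick simulation, assuming a standard normal population (sd = 1), confirms both the formula and the narrowing:

```python
import random
from math import sqrt
from statistics import mean, stdev

random.seed(0)

# Standard error of the sample mean for sd = 1: 1 / sqrt(n).
n = 30
theoretical_se = 1 / sqrt(n)
print(round(theoretical_se, 3))   # 0.183

# Empirical check: the spread of many sample means matches the
# theoretical standard error, far narrower than the population's sd = 1.
sample_means = [mean(random.gauss(0, 1) for _ in range(n))
                for _ in range(5000)]
empirical_se = stdev(sample_means)
print(0.17 < empirical_se < 0.20)  # True
```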

Standard Deviation and Standard Error

In Hypothesis Testing, we try to draw some conclusions – is there a treatment effect or not? – based on a sample. So when we talk about alpha and beta, which are the probabilities of type I and type II errors, we are talking about the probabilities based on the plot of sample means and standard error.

Part 2, The null hypothesis: alpha and the critical value

From Part 1, we stated that a null hypothesis is commonly expressed as the difference in means between the treatment and control groups being zero.

Without loss of generality*, let’s assume the underlying distribution of our null hypothesis has mean 0 and standard deviation 1.

Then the sample mean of the null hypothesis is 0 and the standard error is 1/√n, where n is the sample size.

When the sample size is 30, this distribution has a standard error of ≈0.18 and looks like the chart below. 

Null Hypothesis YZ

*: A note for the technical readers: The null hypothesis is about the difference in means, but here, without complicating things, we made the subtle change to just draw the distribution of this “estimator of this difference in means”. Everything below speaks to this “estimator”.

The reason we have the null hypothesis is that we want to make judgments, particularly whether a  treatment effect exists. But in the world of probabilities, any observation, and any sample mean can happen, with different probabilities. So we need a decision rule to help us quantify our risk of making mistakes.

The decision rule is, let’s set a threshold. When the sample mean is above the threshold, we reject the null hypothesis; when the sample mean is below the threshold, we accept the null hypothesis.

Accepting a hypothesis vs. failing to reject a hypothesis

It’s worth noting that you may have heard of “we never accept a hypothesis, we just fail to reject a hypothesis” and be subconsciously confused by it. The deep reason is that modern textbooks do an inconsistent blend of Fisher’s significance testing and Neyman-Pearson’s Hypothesis Testing definitions and ignore important caveats ( ref ). To clarify:

First of all, we can never “prove” a particular hypothesis given any observations, because there are infinitely many true hypotheses (with different probabilities) given an observation. We will visualize it in Part 3.

Second, “accepting” a hypothesis does not mean that you believe in it, but only that you act as if it were true. So technically, there is no problem with “accepting” a hypothesis.

But, third, when we talk about p-values and confidence intervals, “accepting” the null hypothesis is at best confusing. The reason is that “the p-value above the threshold” just means we failed to reject the null hypothesis. In the strict Fisher’s p-value framework, there is no alternative hypothesis. While we have a clear criterion for rejecting the null hypothesis (p < alpha), we don't have a similar clear-cut criterion for "accepting" the null hypothesis based on beta.

So the dangers in calling “accepting a hypothesis” in the p-value setting are:

Many people misinterpret “accepting” the null hypothesis as “proving” the null hypothesis, which is wrong; 

“Accepting the null hypothesis” is not rigorously defined, and doesn’t speak to the purpose of the test, which is about whether or not we reject the null hypothesis. 

In this article, we will stay consistent within the Neyman-Pearson framework , where “accepting” a hypothesis is legal and necessary. Otherwise, we cannot draw any distributions without acting as if some hypothesis was true.

You don’t need to know the name Neyman-Pearson to understand anything, but pay attention to our language, as we choose our words very carefully to avoid mistakes and confusion.

So far, we have constructed a simple world of one hypothesis as the only truth, and a decision rule with two potential outcomes – one of the outcomes is “reject the null hypothesis when it is true” and the other outcome is “accept the null hypothesis when it is true”. The likelihoods of both outcomes come from the distribution where the null hypothesis is true.

Later, when we introduce the alternative hypothesis and MDE, we will gradually walk into the world of infinitely many alternative hypotheses and visualize why we cannot “prove” a hypothesis.

We save the distinction between the p-value/significance framework vs. Hypothesis Testing in another article where you will have the full picture.

Type I error, alpha, and the critical value

We’re able to construct a distribution of the sample mean for this null hypothesis using the standard error. Since we only have the null hypothesis as the truth of our universe, we can only make one type of mistake – falsely rejecting the null hypothesis when it is true. This is the type I error , and the probability is called alpha . Suppose we want alpha to be 5%. We can calculate the threshold required to make it happen. This threshold is called the critical value . Below is the chart we further constructed with our sample of 30.

[Figure: Type I error, alpha, and the critical value]

In this chart, alpha is the blue area under the curve. The critical value is 0.3. If our sample mean is above 0.3, we reject the null hypothesis. We have a 5% chance of making the type I error.

Type I error: Falsely rejecting the null hypothesis when the null hypothesis is true

Alpha: The probability of making a Type I error

Critical value: The threshold to determine whether the null hypothesis is to be rejected or not
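The critical value in the chart can be reproduced numerically. A sketch using Python's standard library, assuming the null sampling distribution from our example (sd 1, n = 30) and a one-sided alpha of 5%:

```python
import math
from statistics import NormalDist

alpha = 0.05
se = 1 / math.sqrt(30)  # standard error of the sample mean, ≈ 0.18

# Critical value: the threshold that puts exactly alpha of the null
# distribution's mass above it (one-sided test)
critical_value = NormalDist(mu=0, sigma=se).inv_cdf(1 - alpha)
print(round(critical_value, 2))  # 0.3
```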

Part 3, The alternative hypothesis: beta and power

You may have noticed in part 2 that we only spoke to Type I error – rejecting the null hypothesis when it is true. What about the Type II error – falsely accepting the null hypothesis when it is not true?

But it is odd to call an acceptance "false" unless we know the truth. So we need an alternative hypothesis, which serves as the alternative truth.

Alternative hypotheses are theoretical constructs

There is an important concept that most textbooks fail to emphasize: you can have infinitely many alternative hypotheses for a given null hypothesis, and we just choose one. None of them is more special or "real" than the others.

Let’s visualize it with an example. Suppose we observed a sample mean of 0.51, what is the true alternative hypothesis?

[Figure: infinitely many alternative hypotheses are consistent with a single observation]

With this visualization, you can see why we have “infinitely many alternative hypotheses” because, given the observation, there is an infinite number of alternative hypotheses (plus the null hypothesis) that can be true, each with different probabilities. Some are more likely than others, but all are possible.

Remember, alternative hypotheses are a theoretical construct. We choose one particular alternative hypothesis to calculate certain probabilities. By now, we should have a better understanding of why we cannot "accept" the null hypothesis given an observation: we can't prove that the null hypothesis is true, we just fail to reject it given the observation and our pre-determined decision rule.

We will fully reconcile this idea of picking one alternative hypothesis out of the world of infinite possibilities when we talk about MDE. The idea of “accept” vs. “fail to reject” is deeper, and we won’t cover it fully in this article. We will do so when we have an article about the p-value and the confidence interval.

Type II error and Beta

For the sake of simplicity and easy comparison, let's choose an alternative hypothesis with a mean of 0.5 and a standard deviation of 1. Again, with a sample size of 30, the standard error ≈0.18. There are now two potential "truths" in our simple universe.

[Figure: Type II error and beta]

Remember from the null hypothesis, we want alpha to be 5% so the corresponding critical value is 0.30. We modify our rule as follows:

If the observation is above 0.30, we reject the null hypothesis and accept the alternative hypothesis ; 

If the observation is below 0.30, we accept the null hypothesis and reject the alternative hypothesis .

[Figure: rejecting the alternative and accepting the null]

With the introduction of the alternative hypothesis, the alternative “(hypothesized) truth”, we can call “accepting the null hypothesis and rejecting the alternative hypothesis” a mistake – the Type II error. We can also calculate the probability of this mistake. This is called beta, which is illustrated by the red area below.

[Figure: null hypothesis vs. alternative hypothesis]

From the visualization, we can see that beta is conditional on the alternative hypothesis and the critical value. Let’s elaborate on these two relationships one by one, very explicitly, as both of them are important.

First, let's visualize how beta changes with the mean of the alternative hypothesis by setting another alternative hypothesis where mean = 1 instead of 0.5.

[Figure: null and alternative hypotheses at sample size 30]

Beta changes from 13.7% to 0.0%. In other words, beta is the probability of falsely rejecting a particular alternative hypothesis when we assume it is true. When we assume a different alternative hypothesis is true, we get a different beta. So strictly speaking, beta only speaks to the probability of falsely rejecting a particular alternative hypothesis when it is true. Nothing else. Only under other conditions does "rejecting the alternative hypothesis" imply "accepting" the null hypothesis or "failing to accept the null hypothesis". We will elaborate further when we talk about the p-value and confidence interval in another article. But what we have covered so far is true, and enough for understanding power.
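To make the dependence on the alternative's mean concrete, here is a small sketch computing beta for the two alternatives above, under the same assumptions as before (sd 1, n = 30, one-sided alpha = 5%):

```python
import math
from statistics import NormalDist

se = 1 / math.sqrt(30)                               # ≈ 0.18
critical_value = NormalDist(sigma=se).inv_cdf(0.95)  # ≈ 0.30

def beta(alt_mean: float) -> float:
    # P(sample mean falls below the critical value | this alternative is true)
    return NormalDist(mu=alt_mean, sigma=se).cdf(critical_value)

print(round(beta(0.5), 3))  # 0.137 — the 13.7% above
print(round(beta(1.0), 3))  # 0.0
```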

Second, there is a relationship between alpha and beta. Namely, given the null hypothesis and the alternative hypothesis, alpha would determine the critical value, and the critical value determines beta. This speaks to the tradeoff between mistake and discovery. 

If we tolerate more alpha, we will have a smaller critical value, and for the same beta, we can detect a smaller alternative hypothesis.

If we tolerate more beta, we can also detect a smaller alternative hypothesis. 

In short, if we tolerate more mistakes (either Type I or Type II), we can detect a smaller true effect. Mistake vs. discovery is the fundamental tradeoff of Hypothesis Testing.

So tolerating more mistakes leads to more chance of discovery. This is the concept of MDE that we will elaborate on in part 4.

Finally, we’re ready to define power. Power is an important and fundamental topic in statistical testing, and we’ll explain the concept in three different ways.

Three ways to understand power

First, the technical definition of power is 1 − β. Given an alternative hypothesis, our null hypothesis, the sample size, and the decision rule (alpha = 0.05), power is the probability that we accept this particular alternative hypothesis. It is the yellow area visualized below.

[Figure: power as the area of the alternative distribution above the critical value]

Second, power is really intuitive in its definition. A real-world example is trying to determine the most popular car manufacturer in the world. If I observe one car and see one brand, my observation is not very powerful. But if I observe a million cars, my observation is very powerful. Powerful tests mean that I have a high chance of detecting a true effect.

Third, to illustrate the two concepts concisely, let’s run a visualization by just changing the sample size from 30 to 100 and see how power increases from 86.3% to almost 100%.

[Figure: sample size increased from 30 to 100]

As the graph shows, power increases with sample size. The reason is that the distributions of both the null hypothesis and the alternative hypothesis become narrower as the sample mean gets more accurate. We are less likely to make either a Type I error (which reduces the critical value) or a Type II error.

Type II error: Failing to reject the null hypothesis when the alternative hypothesis is true

Beta: The probability of making a type II error

Power: The ability of the test to detect a true effect when it’s there
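The sample-size effect can be checked numerically. A sketch under the same assumptions as before (sd 1, one-sided alpha = 5%):

```python
import math
from statistics import NormalDist

def power(alt_mean: float, n: int, alpha: float = 0.05, sd: float = 1.0) -> float:
    se = sd / math.sqrt(n)
    critical_value = NormalDist(sigma=se).inv_cdf(1 - alpha)
    # Probability the sample mean lands above the critical value
    # when this particular alternative is true
    return 1 - NormalDist(mu=alt_mean, sigma=se).cdf(critical_value)

print(round(power(0.5, 30), 3))   # 0.863
print(round(power(0.5, 100), 3))  # 1.0
```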

Part 4, Power calculation: MDE

The relationship between MDE, the alternative hypothesis, and power

Now, we are ready to tackle the most nuanced definition of them all: Minimum detectable effect (MDE). First, let’s make the sample mean of the alternative hypothesis explicit on the graph with a red dotted line.

[Figure: the sample mean of the alternative hypothesis marked with a red dotted line]

What if we keep the same sample size but want power to be 80%? This is where we recall from the previous chapter that "alternative hypotheses are theoretical constructs". We can pick a different alternative hypothesis that corresponds to 80% power. After some calculation, it turns out to be the alternative hypothesis with mean = 0.45 (keeping the standard deviation at 1).

[Figure: the alternative hypothesis with mean 0.45 gives exactly 80% power]

This is where we reconcile the concept of “infinitely many alternative hypotheses” with the concept of minimum detectable delta. Remember that in statistical testing, we want more power. The “ minimum ” in the “ minimum detectable effect”, is the minimum value of the mean of the alternative hypothesis that would give us 80% power. Any alternative hypothesis with a mean to the right of MDE gives us sufficient power.

In other words, there are indeed infinitely many alternative hypotheses to the right of this mean 0.45. The particular alternative hypothesis with a mean of 0.45 gives us the minimum value where power is sufficient. We call it the minimum detectable effect, or MDE.

[Figure: alternatives with means below the MDE do not have enough power]

The complete definition of MDE from scratch

Let’s go through how we derived MDE from the beginning:

We fixed the distribution of sample means of the null hypothesis, and fixed sample size, so we can draw the blue distribution

For our decision rule, we require alpha to be 5%. We derived that the critical value shall be 0.30 to make 5% alpha happen

We fixed the alternative hypothesis to be normally distributed with a standard deviation of 1, so the standard error is 0.18; the mean can be anywhere, as there are infinitely many alternative hypotheses

For our decision rule, we require beta to be 20% or less, so our power is 80% or more. 

We derived that the minimum value of the observed mean of the alternative hypothesis that we can detect with our decision rule is 0.45. Any value above 0.45 would give us sufficient power.
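The derivation above can be written out in a few lines of code (a sketch under the same assumptions: sd 1, one-sided alpha = 5%, target power 80%):

```python
import math
from statistics import NormalDist

def mde(n: int, alpha: float = 0.05, target_power: float = 0.80, sd: float = 1.0) -> float:
    se = sd / math.sqrt(n)
    critical_value = NormalDist(sigma=se).inv_cdf(1 - alpha)
    # The smallest alternative mean that puts `target_power` of its
    # sampling distribution's mass above the critical value
    return critical_value + NormalDist(sigma=se).inv_cdf(target_power)

print(round(mde(30), 2))   # 0.45
print(round(mde(100), 2))  # 0.25
```

The second call anticipates the next section: increasing the sample size shrinks both the critical value and the MDE.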

How MDE changes with sample size

Now, let’s tie everything together by increasing the sample size, holding alpha and beta constant, and see how MDE changes.

[Figure: how MDE changes with sample size]

A narrower distribution of the sample mean, holding alpha constant, gives a smaller critical value: from 0.3 down to 0.16.

Holding beta constant as well, the MDE decreases from 0.45 to 0.25.

This is the other key takeaway: the larger the sample size, the smaller the effect we can detect, and the smaller the MDE.

This is a critical takeaway for statistical testing. It suggests that even companies without large sample sizes can reliably detect treatment effects with A/B testing, provided those effects are large.

[Figure: statistical power curve]

Summary of Hypothesis Testing

Let’s review all the concepts together.

Assuming the null hypothesis is correct:

Alpha: When the null hypothesis is true, the probability of rejecting it

Critical value: The threshold to determine rejecting vs. accepting the null hypothesis

Assuming an alternative hypothesis is correct:

Beta: When the alternative hypothesis is true, the probability of rejecting it

Power: The chance that a real effect will produce significant results

Power calculation:

Minimum detectable effect (MDE): Given sample sizes and distributions, the minimum mean of alternative distribution that would give us the desired alpha and sufficient power (usually alpha = 0.05 and power >= 0.8)

Relationship among the factors, all else equal: Larger sample, more power; Larger sample, smaller MDE

Everything we talk about is under the Neyman-Pearson framework. There is no need to mention the p-value and significance under this framework. Blending the two frameworks is the inconsistency brought by our textbooks. Clarifying the inconsistency and correctly blending them are topics for another day.

Practical recommendations

That's it. But it's only the beginning. In practice, there is a lot of craft in using power well, for example:

Why peeking introduces a behavior bias, and how to use sequential testing to correct it

Why having multiple comparisons affects alpha, and how to use Bonferroni correction

The relationship between sample size, duration of the experiment, and allocation of the experiment

Treating your allocation as a resource for experimentation: understanding when interaction effects are and are not okay, and how to use layers to manage them

Practical considerations for setting an MDE

Also, in the above examples, we fixed the distribution, but in reality, the variance of the distribution plays an important role. There are different ways of calculating the variance and different ways to reduce variance, such as CUPED, or stratified sampling.

Related resources:

How to calculate power with an uneven split of sample size: https://blog.statsig.com/calculating-sample-sizes-for-a-b-tests-7854d56c2646

Real-life applications: https://blog.statsig.com/you-dont-need-large-sample-sizes-to-run-a-b-tests-6044823e9992



hypothesis testing


hypothesis testing, in statistics, a method for testing how accurately a mathematical model based on one set of data predicts the nature of other data sets generated by the same process. Hypothesis testing grew out of quality control, in which whole batches of manufactured items are accepted or rejected based on testing relatively small samples. An initial hypothesis (null hypothesis) might predict, for example, that the widths of a precision part manufactured in batches will conform to a normal distribution with a given mean (see mean, median, and mode). Samples from new batches either confirm or disprove this hypothesis, which is refined based on these results.


Lesson 10 of 24 By Avijeet Biswal

What Is Hypothesis Testing in Statistics? Types and Examples


In today's data-driven world, decisions are based on data all the time. Hypotheses play a crucial role in that process, whether in business decisions, the health sector, academia, or quality improvement. Without hypotheses and hypothesis tests, you risk drawing the wrong conclusions and making bad decisions. In this tutorial, you will look at hypothesis testing in statistics.


What Is Hypothesis Testing in Statistics?

Hypothesis Testing is a type of statistical analysis in which you put your assumptions about a population parameter to the test. It is used to estimate the relationship between two statistical variables.

Let's discuss a few examples of statistical hypotheses from real life:

  • A teacher assumes that 60% of his college's students come from lower-middle-class families.
  • A doctor believes that 3D (Diet, Dose, and Discipline) is 90% effective for diabetic patients.

Now that you know about hypothesis testing, look at the two types of hypothesis testing in statistics.

Hypothesis Testing Formula

Z = (x̅ − μ0) / (σ / √n)

  • Here, x̅ is the sample mean,
  • μ0 is the hypothesized population mean,
  • σ is the population standard deviation,
  • n is the sample size.
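The statistic is a one-liner in code. A minimal sketch (the sample numbers are made up for illustration):

```python
import math

def z_statistic(sample_mean: float, mu0: float, sigma: float, n: int) -> float:
    # Z = (x̄ − μ0) / (σ / √n)
    return (sample_mean - mu0) / (sigma / math.sqrt(n))

# e.g., a sample of 50 observations with mean 102, testing H0: μ = 100, σ = 10
print(round(z_statistic(102, 100, 10, 50), 2))  # 1.41
```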

How Does Hypothesis Testing Work?

An analyst performs hypothesis testing on a statistical sample to present evidence of the plausibility of the null hypothesis. Measurements and analyses are conducted on a random sample of the population to test a theory. Analysts use a random population sample to test two hypotheses: the null and alternative hypotheses.

The null hypothesis is typically an equality hypothesis between population parameters; for example, a null hypothesis may claim that the population mean return equals zero. The alternative hypothesis is essentially the inverse of the null hypothesis (e.g., the population mean return is not equal to zero). As a result, they are mutually exclusive, and exactly one of the two will be correct.


Null Hypothesis and Alternate Hypothesis

The Null Hypothesis is the assumption that the event will not occur. A null hypothesis has no bearing on the study's outcome unless it is rejected.

H0 is the symbol for it, and it is pronounced H-naught.

The Alternate Hypothesis is the logical opposite of the null hypothesis. The acceptance of the alternative hypothesis follows the rejection of the null hypothesis. H1 is the symbol for it.

Let's understand this with an example.

A sanitizer manufacturer claims that its product kills 95 percent of germs on average. 

To put this company's claim to the test, create a null and alternate hypothesis.

H0 (Null Hypothesis): Average = 95%.

Alternative Hypothesis (H1): The average is less than 95%.

Another straightforward example to understand this concept is determining whether or not a coin is fair and balanced. The null hypothesis states that the probability of a show of heads is equal to the likelihood of a show of tails. In contrast, the alternate theory states that the probability of a show of heads and tails would be very different.


Hypothesis Testing Calculation With Examples

Let's consider a hypothesis test for the average height of women in the United States. Suppose our null hypothesis is that the average height is 5'4". We gather a sample of 100 women and determine that their average height is 5'5". The population standard deviation is 2 inches.

To calculate the z-score, we would use the following formula:

z = ( x̅ – μ0 ) / (σ /√n)

z = (5'5" − 5'4") / (2" / √100)

z = 1" / 0.2" = 5

We will reject the null hypothesis, as the z-score of 5 is very large, and conclude that there is evidence to suggest that the average height of women in the US is greater than 5'4".
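Converting both heights to inches (5'4" = 64, 5'5" = 65), the arithmetic can be checked in a couple of lines:

```python
import math

# Null mean 64 in, sample mean 65 in, population sd 2 in, n = 100
z = (65 - 64) / (2 / math.sqrt(100))
print(round(z, 2))  # 5.0
```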

Steps of Hypothesis Testing

Hypothesis testing is a statistical method to determine if there is enough evidence in a sample of data to infer that a certain condition is true for the entire population. Here’s a breakdown of the typical steps involved in hypothesis testing:

Formulate Hypotheses

  • Null Hypothesis (H0): This hypothesis states that there is no effect or difference, and it is the hypothesis you attempt to reject with your test.
  • Alternative Hypothesis (H1 or Ha): This hypothesis is what you might believe to be true or hope to prove true. It is usually considered the opposite of the null hypothesis.

Choose the Significance Level (α)

The significance level, often denoted by alpha (α), is the probability of rejecting the null hypothesis when it is true. Common choices for α are 0.05 (5%), 0.01 (1%), and 0.10 (10%).

Select the Appropriate Test

Choose a statistical test based on the type of data and the hypothesis. Common tests include t-tests, chi-square tests, ANOVA, and regression analysis. The selection depends on data type, distribution, sample size, and whether the hypothesis is one-tailed or two-tailed.

Collect Data

Gather the data that will be analyzed in the test. This data should be representative of the population to infer conclusions accurately.

Calculate the Test Statistic

Based on the collected data and the chosen test, calculate a test statistic that reflects how much the observed data deviates from the null hypothesis.

Determine the p-value

The p-value is the probability of observing test results at least as extreme as the results observed, assuming the null hypothesis is correct. It helps determine the strength of the evidence against the null hypothesis.

Make a Decision

Compare the p-value to the chosen significance level:

  • If the p-value ≤ α: Reject the null hypothesis, suggesting sufficient evidence in the data supports the alternative hypothesis.
  • If the p-value > α: Do not reject the null hypothesis, suggesting insufficient evidence to support the alternative hypothesis.

Report the Results

Present the findings from the hypothesis test, including the test statistic, p-value, and the conclusion about the hypotheses.

Perform Post-hoc Analysis (if necessary)

Depending on the results and the study design, further analysis may be needed to explore the data more deeply or to address multiple comparisons if several hypotheses were tested simultaneously.
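The steps above can be strung together into a minimal one-sample z-test. A sketch with made-up numbers (the data, μ0, and σ below are assumptions for illustration only):

```python
import math
from statistics import NormalDist

alpha = 0.05                 # significance level (step 2)
mu0, sigma = 100.0, 15.0     # H0: population mean is 100, known sd (assumed)
sample_mean, n = 104.0, 60   # collected data (hypothetical sample)

# Test statistic (step 5)
z = (sample_mean - mu0) / (sigma / math.sqrt(n))

# Two-tailed p-value (step 6)
p_value = 2 * (1 - NormalDist().cdf(abs(z)))

# Decision (step 7)
if p_value <= alpha:
    print("Reject H0")
else:
    print("Fail to reject H0")
```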

Types of Hypothesis Testing

Z Test

To determine whether a discovery or relationship is statistically significant, hypothesis testing uses a z-test. It usually checks to see if two means are the same (the null hypothesis). A z-test can be applied only when the population standard deviation is known and the sample size is 30 data points or more.

T Test

A statistical test called a t-test is employed to compare the means of two groups. It is frequently used in hypothesis testing to determine whether two groups differ, or whether a procedure or treatment affects the population of interest.

Chi-Square 

You utilize a Chi-square test for hypothesis testing concerning whether your data is as predicted. To determine if the expected and observed results are well-fitted, the Chi-square test analyzes the differences between categorical variables from a random sample. The test's fundamental premise is that the observed values in your data should be compared to the predicted values that would be present if the null hypothesis were true.
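The chi-square statistic itself is easy to compute by hand. A toy sketch (100 coin flips, expecting a 50/50 split under a fair-coin null):

```python
# Observed vs. expected counts for 100 coin flips under a fair-coin null
observed = [48, 52]
expected = [50, 50]

# Chi-square statistic: sum of (observed − expected)² / expected
chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
print(round(chi2, 2))  # 0.16
```

Comparing this statistic against the chi-square distribution (here with 1 degree of freedom) then yields the p-value.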

Hypothesis Testing and Confidence Intervals

Both confidence intervals and hypothesis tests are inferential techniques that depend on approximating the sampling distribution. Confidence intervals use data from a sample to estimate a population parameter; hypothesis testing uses data from a sample to examine a given hypothesis. To conduct a hypothesis test, we must have a postulated parameter.

Bootstrap distributions and randomization distributions are created using comparable simulation techniques. The observed sample statistic is the focal point of a bootstrap distribution, whereas the null hypothesis value is the focal point of a randomization distribution.

A confidence interval contains a range of feasible estimates of the population parameter. In this lesson, we created only two-tailed confidence intervals, and there is a direct connection between these and two-tailed hypothesis tests: the two typically give the same result. In other words, a hypothesis test at the 0.05 level will virtually always fail to reject the null hypothesis if the 95% confidence interval contains the hypothesized value, and will nearly certainly reject the null hypothesis if the 95% confidence interval does not contain it.

Become a Data Scientist through hands-on learning with hackathons, masterclasses, webinars, and Ask-Me-Anything! Start learning now!

Simple and Composite Hypothesis Testing

Depending on the population distribution, you can classify the statistical hypothesis into two types.

Simple Hypothesis: A simple hypothesis specifies an exact value for the parameter.

Composite Hypothesis: A composite hypothesis specifies a range of values.

A company is claiming that their average sales for this quarter are 1000 units. This is an example of a simple hypothesis.

Suppose the company claims that the sales are in the range of 900 to 1000 units. Then this is a case of a composite hypothesis.

One-Tailed and Two-Tailed Hypothesis Testing

The One-Tailed test, also called a directional test, considers a critical region of data that would result in the null hypothesis being rejected if the test sample falls into it, inevitably meaning the acceptance of the alternate hypothesis.

In a one-tailed test, the critical distribution area is one-sided, meaning the test sample is either greater or lesser than a specific value.

In a Two-Tailed test, the test sample is checked for being greater or less than a range of values, meaning that the critical distribution area is two-sided.

If the sample falls within this critical region, the alternate hypothesis will be accepted, and the null hypothesis will be rejected.

Become a Data Scientist With Real-World Experience

Become a Data Scientist With Real-World Experience

Right Tailed Hypothesis Testing

If the greater-than (>) sign appears in your hypothesis statement, you are using a right-tailed test, also known as an upper test. Or, to put it another way, the disparity is to the right. For instance, you can contrast the battery life before and after a change in production. If you want to know whether the battery life is longer than the original (let's say 90 hours), your hypothesis statements can be the following:

  • Null hypothesis: H0: mean <= 90 (battery life has not increased).
  • Alternate hypothesis: H1: mean > 90 (battery life has increased).

The crucial point here is that the alternate hypothesis (H1), not the null hypothesis, determines whether you have a right-tailed test.
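A minimal sketch of the battery-life example, using hypothetical sample statistics (the sample size, sample mean, and an assumed known population standard deviation are all invented) and SciPy for the normal tail probability:

```python
import math
from scipy import stats

# Hypothetical sample statistics for the new production process
n = 36
sample_mean = 93.5      # hours
sigma = 10.0            # assumed known population standard deviation

# Right-tailed z-test: H0: mu <= 90  vs  H1: mu > 90
z = (sample_mean - 90) / (sigma / math.sqrt(n))
p_value = stats.norm.sf(z)   # area in the right tail

print(round(z, 2))       # 2.1
print(p_value < 0.05)    # True: reject H0, evidence the life exceeds 90 hours
```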

Left Tailed Hypothesis Testing

Alternative hypotheses asserting that the true value of a parameter is lower than the value stated in the null hypothesis are tested with a left-tailed test; they are indicated by the less-than sign (<).

Suppose H0: mean = 50 and H1: mean ≠ 50.

According to H1, the mean can be either greater than or less than 50. This is an example of a two-tailed test.

Similarly, if H0: mean >= 50, then H1: mean < 50.

Here the alternate hypothesis states that the mean is less than 50, so this is a one-tailed (left-tailed) test.

Type 1 and Type 2 Error

A hypothesis test can result in two types of errors.

Type 1 Error: A Type-I error occurs when the sample results lead to rejecting the null hypothesis even though it is actually true.

Type 2 Error: A Type-II error occurs when the null hypothesis is not rejected even though it is actually false.

Suppose a teacher evaluates the examination paper to decide whether a student passes or fails.

H0: Student has passed

H1: Student has failed

A Type I error occurs if the teacher fails the student [rejects H0] although the student scored passing marks [H0 was true].

A Type II error occurs if the teacher passes the student [does not reject H0] although the student did not score passing marks [H1 is true].

Level of Significance

The alpha value is the criterion for determining whether a test statistic is statistically significant. In a statistical test, alpha represents the acceptable probability of a Type I error. Because alpha is a probability, it lies between 0 and 1. In practice, the most commonly used alpha values are 0.01, 0.05, and 0.1, which represent a 1%, 5%, and 10% chance of a Type I error, respectively (i.e., of rejecting the null hypothesis when it is in fact correct).
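One way to see what alpha means in practice is simulation. The sketch below (assuming NumPy and SciPy) repeatedly samples from a population where the null hypothesis is true and counts how often a 5%-level t-test rejects it anyway; the false-rejection rate lands near alpha:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
alpha = 0.05
n_trials = 5000

# Draw samples from a population where H0 is true (the mean really is 0)
# and count how often the test wrongly rejects at the 5% level.
false_rejections = 0
for _ in range(n_trials):
    sample = rng.normal(loc=0, scale=1, size=30)
    _, p = stats.ttest_1samp(sample, popmean=0)
    if p < alpha:
        false_rejections += 1

print(false_rejections / n_trials)  # close to 0.05, the Type I error rate
```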

A p-value is a metric that expresses the likelihood that an observed difference could have occurred by chance. As the p-value decreases the statistical significance of the observed difference increases. If the p-value is too low, you reject the null hypothesis.

Consider an example in which you test whether a new advertising campaign has changed the product's sales. The p-value is the probability of observing a change in sales at least as large as the one you saw, assuming the null hypothesis (that the campaign had no effect on sales) is true. If the p-value is 0.30, then even if the campaign truly had no effect, there would still be a 30% chance of seeing a difference this large. If the p-value is 0.03, there would be only a 3% chance of seeing such a difference by chance alone. As you can see, the lower the p-value, the stronger the evidence against the null hypothesis, and hence the stronger the case that the new advertising campaign caused an increase or decrease in sales.
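The advertising-campaign example might be tested as below. The daily sales figures are invented purely for illustration, and a two-sample t-test is one reasonable choice of method:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Hypothetical daily sales before and after the advertising campaign
before = rng.normal(loc=200, scale=20, size=40)
after = rng.normal(loc=212, scale=20, size=40)

# H0: the campaign made no difference; H1: mean sales changed
t_stat, p_value = stats.ttest_ind(after, before)

if p_value < 0.05:
    print("Reject H0: the change in sales is statistically significant")
else:
    print("Fail to reject H0: the difference could plausibly be chance")
```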


Why Is Hypothesis Testing Important in Research Methodology?

Hypothesis testing is crucial in research methodology for several reasons:

  • Provides evidence-based conclusions: It allows researchers to make objective conclusions based on empirical data, providing evidence to support or refute their research hypotheses.
  • Supports decision-making: It helps make informed decisions, such as accepting or rejecting a new treatment, implementing policy changes, or adopting new practices.
  • Adds rigor and validity: It adds scientific rigor to research using statistical methods to analyze data, ensuring that conclusions are based on sound statistical evidence.
  • Contributes to the advancement of knowledge: By testing hypotheses, researchers contribute to the growth of knowledge in their respective fields by confirming existing theories or discovering new patterns and relationships.

When Did Hypothesis Testing Begin?

Hypothesis testing as a formalized process began in the early 20th century, primarily through the work of statisticians such as Ronald A. Fisher, Jerzy Neyman, and Egon Pearson. The development of hypothesis testing is closely tied to the evolution of statistical methods during this period.

  • Ronald A. Fisher (1920s): Fisher was one of the key figures in developing the foundation for modern statistical science. In the 1920s, he introduced the concept of the null hypothesis in his book "Statistical Methods for Research Workers" (1925). Fisher also developed significance testing to examine the likelihood of observing the collected data if the null hypothesis were true. He introduced p-values to determine the significance of the observed results.
  • Neyman-Pearson Framework (1930s): Jerzy Neyman and Egon Pearson built on Fisher’s work and formalized the process of hypothesis testing even further. In the 1930s, they introduced the concepts of Type I and Type II errors and developed a decision-making framework widely used in hypothesis testing today. Their approach emphasized the balance between these errors and introduced the concepts of the power of a test and the alternative hypothesis.

The dialogue between Fisher's and Neyman-Pearson's approaches shaped the methods and philosophy of statistical hypothesis testing used today. Fisher emphasized the evidential interpretation of the p-value. At the same time, Neyman and Pearson advocated for a decision-theoretical approach in which hypotheses are either accepted or rejected based on pre-determined significance levels and power considerations.

The application and methodology of hypothesis testing have since become a cornerstone of statistical analysis across various scientific disciplines, marking a significant statistical development.

Limitations of Hypothesis Testing

Hypothesis testing has some limitations that researchers should be aware of:

  • It cannot prove or establish the truth: Hypothesis testing provides evidence to support or reject a hypothesis, but it cannot confirm the absolute truth of the research question.
  • Results are sample-specific: Hypothesis testing is based on analyzing a sample from a population, and the conclusions drawn are specific to that particular sample.
  • Possible errors: During hypothesis testing, there is a chance of committing type I error (rejecting a true null hypothesis) or type II error (failing to reject a false null hypothesis).
  • Assumptions and requirements: Different tests have specific assumptions and requirements that must be met to accurately interpret results.


After reading this tutorial, you should have a much better understanding of hypothesis testing, one of the most important concepts in the field of Data Science. The majority of hypotheses are based on speculation about observed behavior, natural phenomena, or established theories.


1. What is hypothesis testing in statistics with example?

Hypothesis testing is a statistical method used to determine whether there is enough evidence in sample data to draw conclusions about a population. It involves formulating two competing hypotheses, the null hypothesis (H0) and the alternative hypothesis (Ha), and then collecting data to assess the evidence. An example: testing whether a new drug improves patient recovery (Ha) compared with the standard treatment (H0), based on collected patient data.

2. What is H0 and H1 in statistics?

In statistics, H0 and H1 represent the null and alternative hypotheses. The null hypothesis, H0, is the default assumption that no effect or difference exists between groups or conditions. The alternative hypothesis, H1, is the competing claim suggesting an effect or a difference. Statistical tests determine whether to reject the null hypothesis in favor of the alternative hypothesis based on the data.

3. What is a simple hypothesis with an example?

A simple hypothesis is a specific statement predicting a single relationship between two variables. It posits a direct and uncomplicated outcome. For example, a simple hypothesis might state, "Increased sunlight exposure increases the growth rate of sunflowers." Here, the hypothesis suggests a direct relationship between the amount of sunlight (independent variable) and the growth rate of sunflowers (dependent variable), with no additional variables considered.

4. What are the 2 types of hypothesis testing?

  • One-tailed (or one-sided) test: Tests for the significance of an effect in only one direction, either positive or negative.
  • Two-tailed (or two-sided) test: Tests for the significance of an effect in both directions, allowing for the possibility of a positive or negative effect.

The choice between one-tailed and two-tailed tests depends on the specific research question and the directionality of the expected effect.

5. What are the 3 major types of hypothesis?

The three major types of hypotheses are:

  • Null Hypothesis (H0): Represents the default assumption, stating that there is no significant effect or relationship in the data.
  • Alternative Hypothesis (Ha): Contradicts the null hypothesis and proposes a specific effect or relationship that researchers want to investigate.
  • Nondirectional Hypothesis: An alternative hypothesis that doesn't specify the direction of the effect, leaving it open for both positive and negative possibilities.


About the Author

Avijeet Biswal

Avijeet is a Senior Research Analyst at Simplilearn. Passionate about Data Analytics, Machine Learning, and Deep Learning, Avijeet is also interested in politics, cricket, and football.



What is Hypothesis Testing? Types and Methods

  • Soumyaa Rawat
  • Jul 23, 2021


Hypothesis Testing  

Hypothesis testing is the act of testing a hypothesis or a supposition in relation to a statistical parameter. Analysts implement hypothesis testing in order to test if a hypothesis is plausible or not. 

In data science and statistics , hypothesis testing is an important step as it involves the verification of an assumption that could help develop a statistical parameter. For instance, a researcher establishes a hypothesis assuming that the average of all odd numbers is an even number. 

In order to find the plausibility of this hypothesis, the researcher will have to test the hypothesis using hypothesis testing methods. Unlike a hypothesis that is ‘supposed’ to stand true on the basis of little or no evidence, hypothesis testing is required to have plausible evidence in order to establish that a statistical hypothesis is true. 

This is where statistics plays an important role. A number of components are involved in the process. But before understanding the process involved in hypothesis testing in research methodology, we shall first understand the types of hypotheses that are involved. Let us get started!

Types of Hypotheses

In data sampling, different types of hypotheses are involved in determining whether the tested samples support a hypothesis or not. In this segment, we shall cover the different types of hypotheses and the role they play in hypothesis testing.

Alternative Hypothesis

Alternative Hypothesis (H1) or the research hypothesis states that there is a relationship between two variables (where one variable affects the other). The alternative hypothesis is the main driving force for hypothesis testing. 

It implies that the two variables are related to each other and the relationship that exists between them is not due to chance or coincidence. 

When the process of hypothesis testing is carried out, the alternative hypothesis is the main subject of the testing process. The analyst intends to test the alternative hypothesis and verifies its plausibility.

Null Hypothesis

The Null Hypothesis (H0) aims to nullify the alternative hypothesis by implying that there exists no relation between two variables in statistics. It states that the effect of one variable on the other is solely due to chance and no empirical cause lies behind it. 

The null hypothesis is established alongside the alternative hypothesis and is recognized as important as the latter. In hypothesis testing, the null hypothesis has a major role to play as it influences the testing against the alternative hypothesis. 


Non-Directional Hypothesis

The Non-directional hypothesis states that the relation between two variables has no direction. 

Simply put, it asserts that there exists a relation between two variables, but does not recognize the direction of effect, whether variable A affects variable B or vice versa. 

Directional Hypothesis

The Directional hypothesis, on the other hand, asserts the direction of effect of the relationship that exists between two variables. 

Herein, the hypothesis clearly states that variable A affects variable B, or vice versa. 

Statistical Hypothesis

A statistical hypothesis is a hypothesis that can be verified to be plausible on the basis of statistics. 

By using data sampling and statistical knowledge, one can determine the plausibility of a statistical hypothesis and find out if it stands true or not. 


Performing Hypothesis Testing  

Now that we have understood the types of hypotheses and the role they play in hypothesis testing, let us now move on to understand the process in a better manner. 

In hypothesis testing, a researcher is first required to establish two hypotheses - alternative hypothesis and null hypothesis in order to begin with the procedure. 

To establish these two hypotheses, one is required to study data samples, find a plausible pattern among the samples, and pen down a statistical hypothesis that they wish to test. 

A random sample can be drawn from the population to begin hypothesis testing. Of the two hypotheses, alternative and null, only one can be supported by the data, and the presence of both is required for the process to work.

At the end of the hypothesis testing procedure, one of the hypotheses will be rejected and the other will be supported. Even so, no hypothesis can ever be verified with 100% certainty.


Therefore, a hypothesis can only be supported based on the statistical samples and verified data. Here is a step-by-step guide for hypothesis testing.

Establish the hypotheses

First things first, one is required to establish two hypotheses - alternative and null, that will set the foundation for hypothesis testing. 

These hypotheses initiate the testing process that involves the researcher working on data samples in order to either support the alternative hypothesis or the null hypothesis. 

Generate a testing plan

Once the hypotheses have been formulated, it is now time to generate a testing plan. A testing plan or an analysis plan involves the accumulation of data samples, determining which statistic is to be considered and laying out the sample size. 

All these factors are very important while one is working on hypothesis testing.

Analyze data samples

As soon as a testing plan is ready, it is time to move on to the analysis part. Analysis of data samples involves configuring statistical values of samples, drawing them together, and deriving a pattern out of these samples. 

While analyzing the data samples, a researcher needs to determine a set of things -

Significance Level - The significance level is the threshold probability, chosen in advance, of rejecting the null hypothesis when it is actually true.

Testing Method - The testing method involves a type of sampling-distribution and a test statistic that leads to hypothesis testing. There are a number of testing methods that can assist in the analysis of data samples. 

Test statistic - A test statistic is a numerical summary of a data set that is used to perform the hypothesis test.

P-value - The p-value is the probability of obtaining a sample statistic at least as extreme as the observed test statistic, assuming the null hypothesis is true; it indicates how plausible the null hypothesis is in light of the data.

Infer the results

The analysis of data samples leads to the inference of results that establishes whether the alternative hypothesis stands true or not. When the P-value is less than the significance level, the null hypothesis is rejected and the alternative hypothesis turns out to be plausible. 
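The four steps above can be sketched end to end. The drug-trial setting, the sample values, and the choice of a one-sample t-test here are all illustrative assumptions:

```python
import numpy as np
from scipy import stats

# Step 1: establish the hypotheses (hypothetical drug-trial setting)
#   H0: mean recovery time = 30 days
#   H1: mean recovery time != 30 days
mu_0 = 30

# Step 2: generate a testing plan -- one-sample t-test, alpha = 0.05, n = 25
alpha = 0.05
rng = np.random.default_rng(3)
sample = rng.normal(loc=27, scale=5, size=25)  # hypothetical recovery times

# Step 3: analyze the data samples -- compute the test statistic and p-value
t_stat, p_value = stats.ttest_1samp(sample, popmean=mu_0)

# Step 4: infer the results
if p_value < alpha:
    conclusion = "reject H0"
else:
    conclusion = "fail to reject H0"
print(conclusion)
```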

Methods of Hypothesis Testing

As we have already looked into different aspects of hypothesis testing, we shall now look into the different methods of hypothesis testing. All in all, there are 2 most common types of hypothesis testing methods. They are as follows -

Frequentist Hypothesis Testing

The frequentist approach, the traditional approach to hypothesis testing, makes its assumptions and draws its conclusions by considering only the current data.

The supposed truths and assumptions are based on the current data and a set of 2 hypotheses are formulated. A very popular subtype of the frequentist approach is the Null Hypothesis Significance Testing (NHST). 

The NHST approach (involving the null and alternative hypothesis) has been one of the most sought-after methods of hypothesis testing in the field of statistics ever since its inception in the mid-1950s. 

Bayesian Hypothesis Testing

A more modern and less conventional method, Bayesian hypothesis testing evaluates a particular hypothesis using both past data, known as the prior probability, and current data, which together determine the plausibility of the hypothesis.

The result obtained indicates the posterior probability of the hypothesis. In this method, the researcher relies on the prior and posterior probabilities to conduct the hypothesis test at hand.

On the basis of this prior probability, the Bayesian approach tests whether a hypothesis is true or false. The Bayes factor, a major component of this method, is the ratio of the likelihood of the data under the null hypothesis to its likelihood under the alternative hypothesis.

The Bayes factor is the indicator of the plausibility of either of the two hypotheses that are established for hypothesis testing.  
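As a toy illustration of a Bayes factor, the sketch below compares H0 (a coin is fair) against H1 (the coin's bias is uniformly distributed). The flip counts are invented, and the closed-form marginal likelihood 1/(n+1) holds only for this flat prior:

```python
from scipy import stats

# Hypothetical data: 62 heads in 100 coin flips
k, n = 62, 100

# Marginal likelihood under H0 (bias fixed at 0.5)
m0 = stats.binom.pmf(k, n, 0.5)

# Marginal likelihood under H1 (bias uniform on [0, 1]):
# integrating binom.pmf over a flat prior gives exactly 1 / (n + 1)
m1 = 1.0 / (n + 1)

bayes_factor_01 = m0 / m1   # evidence for H0 relative to H1
print(bayes_factor_01 < 1)  # True: here the data favor H1 (a biased coin)
```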


To conclude, hypothesis testing, a way to verify the plausibility of a supposed assumption can be done through different methods - the Bayesian approach or the Frequentist approach. 

While the Bayesian approach incorporates the prior probability of the data, the frequentist approach relies on the observed data alone, without any prior. The elements involved in hypothesis testing include the significance level, the p-value, the test statistic, and the method of hypothesis testing.


A significant way to determine whether a hypothesis stands true or not is to verify the data samples and identify the plausible hypothesis among the null hypothesis and alternative hypothesis. 



What is Hypothesis Testing?

Hypothesis testing in statistics refers to analyzing an assumption about a population parameter. It is used to evaluate such an assumption statistically: using sample data, hypothesis testing assesses how plausible the assumption is for the entire population from which the sample is taken.

Any hypothetical statement we make may or may not be valid, and it is then our responsibility to provide evidence for its possibility. To approach any hypothesis, we follow these four simple steps that test its validity.

First, we formulate two hypothetical statements such that only one of them is true. By doing so, we can check the validity of our own hypothesis.

The next step is to formulate the statistical analysis to be followed based upon the data points.

Then we analyze the given data using our methodology.

The final step is to interpret the result and judge whether the null hypothesis is to be rejected or retained.

Let’s look at several hypothesis testing examples:

It is observed that the average recovery time for a knee-surgery patient is 8 weeks. A physician believes that after successful knee surgery, if the patient goes for physical therapy twice a week rather than three times a week, the recovery period will be longer. Formulate the hypotheses for this statement.

David is a ten-year-old who swims a 25-yard freestyle in a mean time of 16.43 seconds. David's father bought goggles for his son, believing they would help him reduce his time. He then recorded a total of fifteen 25-yard freestyle swims for David, and the average time came out to be 16 seconds. Formulate the hypotheses.

A tire company claims their A-segment of tires have a running life of 50,000 miles before they need to be replaced, and previous studies show a standard deviation of 8,000 miles. After surveying a total of 28 tires, the mean run time came to be 46,500 miles with a standard deviation of 9800 miles. Is the claim made by the tire company consistent with the given data? Conduct hypothesis testing. 
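One reasonable way to work the tire example is a left-tailed one-sample t-test from the summary statistics, using the sample standard deviation (a z-test with the prior-study standard deviation of 8,000 miles would be an alternative approach):

```python
import math
from scipy import stats

# Summary statistics from the tire survey in the example above
n = 28
sample_mean = 46_500
sample_sd = 9_800
claimed_mean = 50_000

# Left-tailed test: H0: mu >= 50,000  vs  H1: mu < 50,000
t_stat = (sample_mean - claimed_mean) / (sample_sd / math.sqrt(n))
p_value = stats.t.cdf(t_stat, df=n - 1)   # area in the left tail

print(round(t_stat, 2))    # about -1.89
print(p_value < 0.05)      # True: the data cast doubt on the claim
```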

All of the hypothesis testing examples are from real-life situations, which leads us to believe that hypothesis testing is a very practical topic indeed. It is an integral part of a researcher's study and is used in every research methodology in one way or another. 

Inferential statistics deals largely with hypothesis testing. The research hypothesis states that there is a relationship between the independent variable and the dependent variable, whereas the null hypothesis denies any such relationship. Our job as researchers or students is to check whether there is any relationship between the two.

Hypothesis Testing in Research Methodology

Now that we are clear about what hypothesis testing is, let's look at its use in research methodology. Hypothesis testing is at the centre of research projects.

What is Hypothesis Testing and Why is it Important in Research Methodology?

Often, after formulating research statements, the validity of those statements needs to be verified. Hypothesis testing offers a statistical approach for the researcher to assess the theoretical assumptions he/she made. It can be understood as quantitative results for a qualitative problem.


Hypothesis testing provides various techniques to test the hypothesis statement depending upon the variable and the data points. It finds its use in almost every field of research while answering statements such as whether this new medicine will work, a new testing method is appropriate, or if the outcomes of a random experiment are probable or not.

Procedure of Hypothesis Testing

To find the validity of any statement, we have to strictly follow the stepwise procedure of hypothesis testing. After stating the initial hypothesis, we have to re-write them in the form of a null and alternate hypothesis. The alternate hypothesis predicts a relationship between the variables, whereas the null hypothesis predicts no relationship between the variables.

After writing them as H0 (null hypothesis) and Ha (alternate hypothesis), only one of the statements can be true. For example, taking the hypothesis that, on average, men are taller than women, we write the statements as:

H0: On average, men are not taller than women.

Ha: On average, men are taller than women.

Our next aim is to collect sample data, what we call sampling, in a way so that we can test our hypothesis. Your data should come from the concerned population for which you want to make a hypothesis. 

What is the p-value in hypothesis testing? The p-value is the probability of obtaining results at least as extreme as those observed, assuming the null hypothesis is true.

You will obtain your p-value after choosing the hypothesis testing method, which will be the guiding factor in rejecting the hypothesis. Usually, the p-value cutoff for rejecting the null hypothesis is 0.05. So anything below that, you will reject the null hypothesis. 

A low p-value means the between-group variance is large relative to the within-group variance, with little overlap between groups, making it unlikely that the difference arose by chance. A high p-value suggests high within-group variance and low between-group variance, so any observed difference is plausibly due to chance alone.

What is statistical hypothesis testing?

During a statistical survey or piece of research, a hypothesis must be set and defined; this is called a statistical hypothesis. It is, in fact, an assumption about a population parameter, though the assumption is by no means guaranteed to be correct. Hypothesis testing refers to the formal, predetermined procedures statisticians use to decide whether hypotheses should be accepted or rejected. The process of choosing between hypotheses about a probability distribution based on observed data is known as hypothesis testing, and it is a fundamental and crucial issue in statistics.

Why do I Need to Test it? Why not just prove an alternate one?

The quick answer is that you must as a scientist; it is part of the scientific process. Science employs a variety of methods to test or reject theories, ensuring that any new hypothesis is free of errors. One protection to ensure your research is not incorrect is to include both a null and an alternate hypothesis. The scientific community considers not incorporating the null hypothesis in your research to be poor practice. You are almost certainly setting yourself up for failure if you set out to prove another theory without first examining it. At the very least, your experiment will not be considered seriously.

Types of Hypothesis Testing

There are several types of hypothesis testing, and they are used based on the data provided. Depending on the sample size and the data given, we choose among different hypothesis testing methodologies. Here starts the use of hypothesis testing tools in research methodology.

Normality- This type of testing is used for normal distribution in a population sample. If the data points are grouped around the mean, the probability of them being above or below the mean is equally likely. Its shape resembles a bell curve that is equally distributed on either side of the mean.

T-test- This test is used when the sample size in a normally distributed population is comparatively small, and the standard deviation is unknown. Usually, if the sample size drops below 30, we use a T-test to find the confidence intervals of the population. 

Chi-Square Test- The Chi-Square test is used to test the population variance against the known or assumed value of the population variance. It is also a better choice to test the goodness of fit of a distribution of data. The two most common Chi-Square tests are the Chi-Square test of independence and the chi-square test of variance.

ANOVA- Analysis of Variance or ANOVA compares the data sets of two different populations or samples. It is similar in its use to the t-test or the Z-test, but it allows us to compare more than two sample means. ANOVA allows us to test the significance between an independent variable and a dependent variable, namely X and Y, respectively.

Z-test- It is a statistical measure to test whether the means of two population samples are different when their variance is known. For a z-test, the population is assumed to be normally distributed. A z-test is better suited to large sample sizes (greater than 30), due to the central limit theorem: as the sample size increases, the distribution of the sample mean approaches a normal distribution.
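A small helper can make the z-test/t-test choice described above explicit. The function name and the n > 30 threshold below are illustrative conventions, not a standard API:

```python
import math
from scipy import stats

def mean_test_pvalue(sample_mean, mu_0, n, sd, sd_is_population=False):
    """Two-tailed p-value for H0: mu = mu_0.

    Uses a z-test when the population standard deviation is known
    (or the sample is large), and a t-test otherwise.
    """
    se = sd / math.sqrt(n)
    stat = (sample_mean - mu_0) / se
    if sd_is_population or n > 30:
        return 2 * stats.norm.sf(abs(stat))      # z-test
    return 2 * stats.t.sf(abs(stat), df=n - 1)   # t-test

# Small sample, unknown sigma -> t-test
p_small = mean_test_pvalue(52, 50, n=15, sd=6)
# Large sample -> the z-test is a good approximation
p_large = mean_test_pvalue(52, 50, n=200, sd=6)
print(p_small > p_large)  # True: the same effect is less conclusive with n=15
```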


FAQs on Hypothesis Testing

1. Mention the types of hypothesis Tests.

A hypothesis test involves two hypotheses:

Null hypothesis: denoted H₀.

Alternative hypothesis: denoted H₁ or Hₐ.

2. What are the two errors that can be found while performing the null Hypothesis test?

While performing a null hypothesis test, two types of errors can occur:

Type-1: The Type-1 error, denoted α, is also known as the significance level. It is the rejection of a true null hypothesis; an error of commission.

Type-2: The Type-2 error is denoted β, and (1 − β) is known as the power of the test. It occurs when a false null hypothesis is not rejected; an error of omission.

3. What is the p-value in hypothesis testing?

During hypothesis testing in statistics, the p-value indicates the probability of obtaining results at least as extreme as those observed, assuming the null hypothesis is true. A smaller p-value provides stronger evidence in favor of the alternate hypothesis. The p-value serves as a rejection point: it is the smallest level of significance at which the null hypothesis would be rejected. The p-value is often found using p-value tables, based on the deviation between the observed value and the chosen reference value.

It may also be calculated mathematically, by integrating the area under the probability curve that lies at least as far from the reference value as the observed value, relative to the total area under the curve. The p-value determines the evidence for rejecting the null hypothesis in hypothesis testing.

4. What is a null hypothesis?

The null hypothesis in statistics says that there is no certain difference between the population. It serves as a conjecture proposing no difference, whereas the alternate hypothesis says there is a difference. When we perform hypothesis testing, we have to state the null hypothesis and alternative hypotheses such that only one of them is ever true. 

By determining the p-value, we calculate whether the null hypothesis is to be rejected or not. If the difference between groups is low, it is merely by chance, and the null hypothesis, which states that there is no difference among groups, is true. Therefore, we have no evidence to reject the null hypothesis.


Business Insights

Harvard Business School Online's Business Insights Blog provides the career insights you need to achieve your goals and gain confidence in your business skills.


A Beginner’s Guide to Hypothesis Testing in Business


  • 30 Mar 2021

Becoming a more data-driven decision-maker can bring several benefits to your organization, enabling you to identify new opportunities to pursue and threats to abate. Rather than allowing subjective thinking to guide your business strategy, backing your decisions with data can empower your company to become more innovative and, ultimately, profitable.

If you’re new to data-driven decision-making, you might be wondering how data translates into business strategy. The answer lies in generating a hypothesis and verifying or rejecting it based on what various forms of data tell you.

Below is a look at hypothesis testing and the role it plays in helping businesses become more data-driven.


What Is Hypothesis Testing?

To understand what hypothesis testing is, it’s important first to understand what a hypothesis is.

A hypothesis or hypothesis statement seeks to explain why something has happened, or what might happen, under certain conditions. It can also be used to understand how different variables relate to each other. Hypotheses are often written as if-then statements; for example, “If this happens, then this will happen.”

Hypothesis testing , then, is a statistical means of testing an assumption stated in a hypothesis. While the specific methodology leveraged depends on the nature of the hypothesis and data available, hypothesis testing typically uses sample data to extrapolate insights about a larger population.

Hypothesis Testing in Business

When it comes to data-driven decision-making, there's a certain amount of risk that can mislead a professional. This could be due to flawed thinking or observations, incomplete or inaccurate data, or the presence of unknown variables. The danger is that, if major strategic decisions are made based on flawed insights, they can lead to wasted resources, missed opportunities, and catastrophic outcomes.

The real value of hypothesis testing in business is that it allows professionals to test their theories and assumptions before putting them into action. This essentially allows an organization to verify its analysis is correct before committing resources to implement a broader strategy.

As one example, consider a company that wishes to launch a new marketing campaign to revitalize sales during a slow period. Doing so could be an incredibly expensive endeavor, depending on the campaign’s size and complexity. The company, therefore, may wish to test the campaign on a smaller scale to understand how it will perform.

In this example, the hypothesis that’s being tested would fall along the lines of: “If the company launches a new marketing campaign, then it will translate into an increase in sales.” It may even be possible to quantify how much of a lift in sales the company expects to see from the effort. Pending the results of the pilot campaign, the business would then know whether it makes sense to roll it out more broadly.

Related: 9 Fundamental Data Science Skills for Business Professionals

Key Considerations for Hypothesis Testing

1. Alternative Hypothesis and Null Hypothesis

In hypothesis testing, the hypothesis that’s being tested is known as the alternative hypothesis . Often, it’s expressed as a correlation or statistical relationship between variables. The null hypothesis , on the other hand, is a statement that’s meant to show there’s no statistical relationship between the variables being tested. It’s typically the exact opposite of whatever is stated in the alternative hypothesis.

For example, consider a company’s leadership team that historically and reliably sees $12 million in monthly revenue. They want to understand if reducing the price of their services will attract more customers and, in turn, increase revenue.

In this case, the alternative hypothesis may take the form of a statement such as: “If we reduce the price of our flagship service by five percent, then we’ll see an increase in sales and realize revenues greater than $12 million in the next month.”

The null hypothesis, on the other hand, would indicate that revenues wouldn’t increase from the base of $12 million, or might even decrease.


2. Significance Level and P-Value

Statistically speaking, if you were to run the same scenario 100 times, you’d likely receive somewhat different results each time. If you were to plot these results in a distribution plot, you’d see the most likely outcome is at the tallest point in the graph, with less likely outcomes falling to the right and left of that point.


With this in mind, imagine you’ve completed your hypothesis test and have your results, which indicate there may be a correlation between the variables you were testing. To understand your results' significance, you’ll need to identify a p-value for the test, which helps note how confident you are in the test results.

In statistics, the p-value depicts the probability that, assuming the null hypothesis is correct, you might still observe results that are at least as extreme as the results of your hypothesis test. The smaller the p-value, the more likely the alternative hypothesis is correct, and the greater the significance of your results.

3. One-Sided vs. Two-Sided Testing

When it’s time to test your hypothesis, it’s important to leverage the correct testing method. The two most common hypothesis testing methods are one-sided and two-sided tests , or one-tailed and two-tailed tests, respectively.

Typically, you’d leverage a one-sided test when you have a strong conviction about the direction of change you expect to see due to your hypothesis test. You’d leverage a two-sided test when you’re less confident in the direction of change.
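As an illustrative sketch (the sales figures below are made up), scipy's `ttest_1samp` supports both approaches through its `alternative` parameter. With a positive t-statistic, the one-sided p-value is half the two-sided one, which is why a one-sided test detects an expected direction of change more easily:

```python
import numpy as np
from scipy import stats

# Hypothetical monthly sales figures; we ask whether the mean exceeds 100.
sales = np.array([104, 108, 99, 112, 106, 101, 110, 103, 107, 105])

# Two-sided: is the mean different from 100 in either direction?
t2, p2 = stats.ttest_1samp(sales, popmean=100, alternative='two-sided')

# One-sided: is the mean greater than 100?
t1, p1 = stats.ttest_1samp(sales, popmean=100, alternative='greater')

print(p1 < p2)  # True: the directional test yields the smaller p-value here
```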


4. Sampling

To perform hypothesis testing in the first place, you need to collect a sample of data to be analyzed. Depending on the question you’re seeking to answer or investigate, you might collect samples through surveys, observational studies, or experiments.

A survey involves asking a series of questions to a random population sample and recording self-reported responses.

Observational studies involve a researcher observing a sample population and collecting data as it occurs naturally, without intervention.

Finally, an experiment involves dividing a sample into multiple groups, one of which acts as the control group. For each non-control group, the variable being studied is manipulated to determine how the data collected differs from that of the control group.


Learn How to Perform Hypothesis Testing

Hypothesis testing is a complex process involving different moving pieces that can allow an organization to effectively leverage its data and inform strategic decisions.

If you’re interested in better understanding hypothesis testing and the role it can play within your organization, one option is to complete a course that focuses on the process. Doing so can lay the statistical and analytical foundation you need to succeed.

Do you want to learn more about hypothesis testing? Explore Business Analytics —one of our online business essentials courses —and download our Beginner’s Guide to Data & Analytics .



Understanding Hypothesis Testing

Hypothesis testing involves formulating assumptions about population parameters based on sample statistics and rigorously evaluating these assumptions against empirical evidence. This article sheds light on the significance of hypothesis testing and the critical steps involved in the process.

What is Hypothesis Testing?

A hypothesis is an assumption or idea, specifically a statistical claim about an unknown population parameter. For example, a judge assumes a person is innocent and verifies this by reviewing evidence and hearing testimony before reaching a verdict.

Hypothesis testing is a statistical method used to make a statistical decision from experimental data. At its core, it is about evaluating an assumption we make about a population parameter: it weighs two mutually exclusive statements about the population to determine which is best supported by the sample data.

To test the validity of the claim or assumption about the population parameter:

  • A sample is drawn from the population and analyzed.
  • The results of the analysis are used to decide whether the claim is true or not.
Example: You claim that the average height in the class is 30, or that boys are taller than girls. These are assumptions, and we need a statistical way to prove or disprove them; we need a mathematical conclusion that what we are assuming is true.

Defining Hypotheses

  • Null hypothesis (H 0 ): In statistics, the null hypothesis is a general statement or default position that there is no relationship between two measured cases or no difference among groups. In other words, it is a basic assumption made based on knowledge of the problem. Example: A company's mean production is 50 units per day, i.e. H 0 : [Tex]\mu [/Tex] = 50.
  • Alternative hypothesis (H 1 ): The alternative hypothesis is the hypothesis used in hypothesis testing that is contrary to the null hypothesis. Example: A company's mean production is not equal to 50 units per day, i.e. H 1 : [Tex]\mu [/Tex] [Tex]\ne [/Tex] 50.

Key Terms of Hypothesis Testing

  • Level of significance : The threshold at which we reject the null hypothesis. Since 100% certainty in accepting a hypothesis is not possible, we select a level of significance, normally denoted [Tex]\alpha[/Tex] and usually 0.05 or 5%, meaning the result should be reproducible with 95% confidence across samples.
  • P-value: The p-value, or calculated probability, is the probability of finding the observed (or more extreme) results when the null hypothesis (H0) of a given problem is true. If the p-value is less than the chosen significance level, you reject the null hypothesis, i.e. your sample supports the alternative hypothesis.
  • Test statistic: The test statistic is a numerical value calculated from sample data during a hypothesis test, used to determine whether to reject the null hypothesis. It is compared to a critical value, or converted to a p-value, to judge the statistical significance of the observed results.
  • Critical value : The critical value in statistics is a threshold or cutoff point used to determine whether to reject the null hypothesis in a hypothesis test.
  • Degrees of freedom: Degrees of freedom reflect the variability or freedom one has in estimating a parameter. They are related to the sample size and determine the shape of the sampling distribution.

Why do we use Hypothesis Testing?

Hypothesis testing is an important procedure in statistics. It evaluates two mutually exclusive statements about a population to determine which statement is best supported by the sample data. When we say that findings are statistically significant, it is thanks to hypothesis testing.

One-Tailed and Two-Tailed Test

A one-tailed test focuses on one direction, either greater than or less than a specified value. We use a one-tailed test when there is a clear directional expectation based on prior knowledge or theory. The critical region is located on only one side of the distribution curve. If the sample statistic falls into this critical region, the null hypothesis is rejected in favor of the alternative hypothesis.

One-Tailed Test

There are two types of one-tailed test:

  • Left-Tailed (Left-Sided) Test: The alternative hypothesis asserts that the true parameter value is less than the value in the null hypothesis. Example: H 0 ​: [Tex]\mu \geq 50 [/Tex] and H 1 : [Tex]\mu < 50 [/Tex]
  • Right-Tailed (Right-Sided) Test : The alternative hypothesis asserts that the true parameter value is greater than the value in the null hypothesis. Example: H 0 : [Tex]\mu \leq50 [/Tex] and H 1 : [Tex]\mu > 50 [/Tex]

Two-Tailed Test

A two-tailed test considers both directions, greater than and less than a specified value. We use a two-tailed test when there is no specific directional expectation and we want to detect any significant difference.

Example: H 0 : [Tex]\mu = [/Tex] 50 and H 1 : [Tex]\mu \neq 50 [/Tex]


What are Type 1 and Type 2 errors in Hypothesis Testing?

In hypothesis testing, Type I and Type II errors are two possible errors that researchers can make when drawing conclusions about a population based on a sample of data. These errors are associated with the decisions made regarding the null hypothesis and the alternative hypothesis.

  • Type I error: Rejecting the null hypothesis when it is actually true. The Type I error rate is denoted by alpha ( [Tex]\alpha [/Tex] ).
  • Type II error: Failing to reject the null hypothesis when it is false. The Type II error rate is denoted by beta ( [Tex]\beta [/Tex] ).


| Decision | Null Hypothesis is True | Null Hypothesis is False |
|---|---|---|
| Null Hypothesis is True (Accept) | Correct Decision | Type II Error (False Negative) |
| Alternative Hypothesis is True (Reject) | Type I Error (False Positive) | Correct Decision |
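The meaning of the Type I error rate can be checked with a small simulation (illustrative code, not part of the original article): when the null hypothesis is true, a test run at α = 0.05 should reject it in roughly 5% of repeated experiments.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
alpha = 0.05
n_experiments = 2000
false_positives = 0

# The null hypothesis is TRUE here: both samples come from the same
# distribution, so every rejection is a Type I error.
for _ in range(n_experiments):
    a = rng.normal(loc=0, scale=1, size=30)
    b = rng.normal(loc=0, scale=1, size=30)
    _, p = stats.ttest_ind(a, b)
    if p <= alpha:
        false_positives += 1

print(false_positives / n_experiments)  # close to alpha, i.e. about 0.05
```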

How does Hypothesis Testing work?

Step 1: Define Null and Alternative Hypotheses

State the null hypothesis ( [Tex]H_0 [/Tex] ), representing no effect, and the alternative hypothesis ( [Tex]H_1 [/Tex] ​), suggesting an effect or difference.

We first identify the problem about which we want to make an assumption, keeping in mind that the null and alternative hypotheses must contradict one another, and assuming normally distributed data.

Step 2 – Choose significance level

Select a significance level ( [Tex]\alpha [/Tex] ), typically 0.05, to determine the threshold for rejecting the null hypothesis. It lends validity to our hypothesis test, ensuring that we have sufficient evidence to back up our claims. We usually fix the significance level before running the test, and then compare the p-value against it.

Step 3 – Collect and Analyze data.

Gather relevant data through observation or experimentation. Analyze the data using appropriate statistical methods to obtain a test statistic.

Step 4: Calculate Test Statistic

In this step, the data are evaluated and we compute a score based on the characteristics of the data. The choice of the test statistic depends on the type of hypothesis test being conducted.

There are various hypothesis tests, each appropriate for a different goal. The test statistic could come from a Z-test , Chi-square , T-test , and so on.

  • Z-test : If the population mean and standard deviation are known, the Z-statistic is commonly used.
  • t-test : If the population standard deviation is unknown and the sample size is small, the t-test statistic is more appropriate.
  • Chi-square test : Used for categorical data or for testing independence in contingency tables.
  • F-test : Often used in analysis of variance (ANOVA) to compare variances or test the equality of means across multiple groups.

Since the example below uses a small dataset, the t-test is the more appropriate choice for testing our hypothesis.

T-statistic is a measure of the difference between the means of two groups relative to the variability within each group. It is calculated as the difference between the sample means divided by the standard error of the difference. It is also known as the t-value or t-score.
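A short sketch of this two-group comparison with made-up measurements; `scipy.stats.ttest_ind` computes exactly this statistic (difference of sample means divided by the standard error of the difference):

```python
import numpy as np
from scipy import stats

# Hypothetical measurements from two independent groups
group_a = np.array([12.1, 11.8, 12.5, 12.0, 11.9, 12.3])
group_b = np.array([11.4, 11.0, 11.6, 11.2, 11.5, 11.3])

# t = (mean_a - mean_b) / standard error of the difference
t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(t_stat > 0)  # True: group_a has the larger sample mean
```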

Step 5: Comparing Test Statistic

In this stage, we decide whether to reject or fail to reject the null hypothesis. There are two ways to make this decision.

Method A: Using Critical Values

Comparing the test statistic with the tabulated critical value (using absolute values for a two-tailed test), we have:

  • If Test Statistic > Critical Value: Reject the null hypothesis.
  • If Test Statistic ≤ Critical Value: Fail to reject the null hypothesis.

Note: Critical values are predetermined threshold values used to make a decision in hypothesis testing. To determine critical values, we typically refer to a statistical distribution table , such as the normal distribution or t-distribution table, depending on the test being used.
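For instance (a minimal sketch using scipy's inverse-CDF `ppf` functions), the two-tailed critical values at α = 0.05 can be looked up directly instead of from a printed table:

```python
from scipy import stats

alpha = 0.05

# Two-tailed critical value from the standard normal distribution
z_crit = stats.norm.ppf(1 - alpha / 2)
print(round(z_crit, 2))  # 1.96

# Two-tailed critical value from the t-distribution with 9 degrees of freedom
t_crit = stats.t.ppf(1 - alpha / 2, df=9)
print(round(t_crit, 2))  # 2.26
```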

Method B: Using P-values

We can also come to a conclusion using the p-value:

  • If the p-value is less than or equal to the significance level, i.e. ( [Tex]p\leq\alpha [/Tex] ), you reject the null hypothesis. This indicates that the observed results are unlikely to have occurred by chance alone, providing evidence in favor of the alternative hypothesis.
  • If the p-value is greater than the significance level, i.e. ( [Tex]p > \alpha[/Tex] ), you fail to reject the null hypothesis. This suggests that the observed results are consistent with what would be expected under the null hypothesis.

Note : The p-value is the probability of obtaining a test statistic as extreme as, or more extreme than, the one observed in the sample, assuming the null hypothesis is true. To determine the p-value, we typically refer to a statistical distribution table , such as the normal distribution or t-distribution table, depending on the test being used.

Step 6: Interpret the Results

At last, we can conclude our experiment using method A or B.

Calculating test statistic

To validate our hypothesis about a population parameter, we use statistical functions . We use the z-score, p-value, and level of significance (alpha) to build evidence for our hypothesis on normally distributed data .

1. Z-statistics:

When population means and standard deviations are known.

[Tex]z = \frac{\bar{x} - \mu}{\frac{\sigma}{\sqrt{n}}}[/Tex]

  • [Tex]\bar{x} [/Tex] is the sample mean,
  • μ represents the population mean,
  • σ is the population standard deviation,
  • and n is the size of the sample.

2. T-Statistics

The t-test is typically used when the sample size is small (n < 30) and the population standard deviation is unknown.

t-statistic calculation is given by:

[Tex]t=\frac{x̄-μ}{s/\sqrt{n}} [/Tex]

  • t = t-score,
  • x̄ = sample mean
  • μ = population mean,
  • s = standard deviation of the sample,
  • n = sample size

3. Chi-Square Test

The Chi-square test for independence is used for categorical data (not normally distributed):

[Tex]\chi^2 = \sum \frac{(O_{ij} - E_{ij})^2}{E_{ij}}[/Tex]

  • [Tex]O_{ij}[/Tex] is the observed frequency in cell [Tex]{ij} [/Tex]
  • i, j are the row and column indices respectively.
  • [Tex]E_{ij}[/Tex] is the expected frequency in cell [Tex]{ij}[/Tex] , calculated as : [Tex]\frac{{\text{{Row total}} \times \text{{Column total}}}}{{\text{{Total observations}}}}[/Tex]
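A minimal sketch with a made-up 2×2 contingency table; `scipy.stats.chi2_contingency` computes the expected frequencies exactly as described above (note that it applies Yates' continuity correction by default for 2×2 tables):

```python
import numpy as np
from scipy import stats

# Hypothetical contingency table: rows = group, columns = outcome
observed = np.array([[30, 20],
                     [20, 30]])

chi2, p, dof, expected = stats.chi2_contingency(observed)

# Each expected count = row total * column total / grand total
print(expected[0, 0])  # 50 * 50 / 100 = 25.0
print(dof)             # (2 - 1) * (2 - 1) = 1
```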

Real life Examples of Hypothesis Testing

Let’s examine hypothesis testing using two real life situations,

Case A: Does a New Drug Affect Blood Pressure?

Imagine a pharmaceutical company has developed a new drug that they believe can effectively lower blood pressure in patients with hypertension. Before bringing the drug to market, they need to conduct a study to assess its impact on blood pressure.

  • Before Treatment: 120, 122, 118, 130, 125, 128, 115, 121, 123, 119
  • After Treatment: 115, 120, 112, 128, 122, 125, 110, 117, 119, 114

Step 1 : Define the Hypothesis

  • Null Hypothesis : (H 0 )The new drug has no effect on blood pressure.
  • Alternate Hypothesis : (H 1 )The new drug has an effect on blood pressure.

Step 2: Define the Significance Level

Let's set the significance level at 0.05: we will reject the null hypothesis if the evidence suggests less than a 5% chance of observing the results due to random variation alone.

Step 3 : Compute the test statistic

Using a paired t-test, analyze the data to obtain a test statistic and a p-value.

The test statistic (here, the t-statistic) is calculated from the differences between blood pressure measurements before and after treatment.

t = m / (s / √n)

  • m = mean of the differences, i.e. the mean of d i = X after,i − X before,i
  • s = standard deviation of the differences d i
  • n = sample size

Here, m = -3.9, s ≈ 1.37, and n = 10.

Using the paired t-test formula, we calculate the t-statistic = -9.

Step 4: Find the p-value

With a calculated t-statistic of -9 and degrees of freedom df = 9, you can find the p-value using statistical software or a t-distribution table.

Thus, p-value = 8.538051223166285e-06.

Step 5: Result

  • If the p-value is less than or equal to 0.05, the researchers reject the null hypothesis.
  • If the p-value is greater than 0.05, they fail to reject the null hypothesis.

Conclusion: Since the p-value (8.538051223166285e-06) is less than the significance level (0.05), the researchers reject the null hypothesis. There is statistically significant evidence that the average blood pressure before and after treatment with the new drug is different.

Python Implementation of Case A

Let's implement hypothesis testing in Python, testing whether a new drug affects blood pressure. For this example, we will use a paired t-test from the scipy.stats library.

SciPy is a scientific computing library in Python that is widely used for mathematical and statistical computations.

We will now implement our first real-life problem in Python:

import numpy as np
from scipy import stats

# Data
before_treatment = np.array([120, 122, 118, 130, 125, 128, 115, 121, 123, 119])
after_treatment = np.array([115, 120, 112, 128, 122, 125, 110, 117, 119, 114])

# Step 1: Null and Alternate Hypotheses
# Null Hypothesis: The new drug has no effect on blood pressure.
# Alternate Hypothesis: The new drug has an effect on blood pressure.
null_hypothesis = "The new drug has no effect on blood pressure."
alternate_hypothesis = "The new drug has an effect on blood pressure."

# Step 2: Significance Level
alpha = 0.05

# Step 3: Paired T-test
t_statistic, p_value = stats.ttest_rel(after_treatment, before_treatment)

# Step 4: Calculate T-statistic manually
m = np.mean(after_treatment - before_treatment)
s = np.std(after_treatment - before_treatment, ddof=1)  # ddof=1 for sample standard deviation
n = len(before_treatment)
t_statistic_manual = m / (s / np.sqrt(n))

# Step 5: Decision
if p_value <= alpha:
    decision = "Reject"
else:
    decision = "Fail to reject"

# Conclusion
if decision == "Reject":
    conclusion = ("There is statistically significant evidence that the average blood pressure "
                  "before and after treatment with the new drug is different.")
else:
    conclusion = ("There is insufficient evidence to claim a significant difference in average "
                  "blood pressure before and after treatment with the new drug.")

# Display results
print("T-statistic (from scipy):", t_statistic)
print("P-value (from scipy):", p_value)
print("T-statistic (calculated manually):", t_statistic_manual)
print(f"Decision: {decision} the null hypothesis at alpha={alpha}.")
print("Conclusion:", conclusion)

T-statistic (from scipy): -9.0
P-value (from scipy): 8.538051223166285e-06
T-statistic (calculated manually): -9.0
Decision: Reject the null hypothesis at alpha=0.05.
Conclusion: There is statistically significant evidence that the average blood pressure before and after treatment with the new drug is different.

In the above example, given the T-statistic of approximately -9 and an extremely small p-value, the results indicate a strong case to reject the null hypothesis at a significance level of 0.05. 

  • The results suggest that the new drug, treatment, or intervention has a significant effect on lowering blood pressure.
  • The negative T-statistic indicates that the mean blood pressure after treatment is significantly lower than the assumed population mean before treatment.

Case B : Cholesterol level in a population

Data: A sample of 25 individuals is taken, and their cholesterol levels are measured.

Cholesterol Levels (mg/dL): 205, 198, 210, 190, 215, 205, 200, 192, 198, 205, 198, 202, 208, 200, 205, 198, 205, 210, 192, 205, 198, 205, 210, 192, 205.

Population Mean (claimed): 200 mg/dL

Population Standard Deviation (σ): 5 mg/dL (given for this problem)

Step 1: Define the Hypothesis

  • Null Hypothesis (H 0 ): The average cholesterol level in a population is 200 mg/dL.
  • Alternate Hypothesis (H 1 ): The average cholesterol level in a population is different from 200 mg/dL.

As the direction of deviation is not given , we assume a two-tailed test, and based on a normal distribution table, the critical values for a significance level of 0.05 (two-tailed) can be calculated through the z-table and are approximately -1.96 and 1.96.

The test statistic is calculated using the z formula: Z = [Tex](202.04 - 200) / (5 \div \sqrt{25}) [/Tex] (the sample mean is 202.04), giving Z ≈ 2.04.

Step 4: Result

Since the absolute value of the test statistic (2.04) is greater than the critical value (1.96), we reject the null hypothesis and conclude that there is statistically significant evidence that the average cholesterol level in the population is different from 200 mg/dL.

Python Implementation of Case B

import math

import numpy as np
import scipy.stats as stats

# Given data
sample_data = np.array(
    [205, 198, 210, 190, 215, 205, 200, 192, 198, 205,
     198, 202, 208, 200, 205, 198, 205, 210, 192, 205,
     198, 205, 210, 192, 205])
population_std_dev = 5
population_mean = 200
sample_size = len(sample_data)

# Step 1: Define the Hypotheses
# Null Hypothesis (H0): The average cholesterol level in a population is 200 mg/dL.
# Alternate Hypothesis (H1): The average cholesterol level in a population is different from 200 mg/dL.

# Step 2: Define the Significance Level
alpha = 0.05  # Two-tailed test

# Critical values for a significance level of 0.05 (two-tailed)
critical_value_left = stats.norm.ppf(alpha / 2)
critical_value_right = -critical_value_left

# Step 3: Compute the test statistic
sample_mean = sample_data.mean()
z_score = (sample_mean - population_mean) / (population_std_dev / math.sqrt(sample_size))

# Step 4: Result
# Check if the absolute value of the test statistic exceeds the critical value
if abs(z_score) > critical_value_right:
    print("Reject the null hypothesis.")
    print("There is statistically significant evidence that the average cholesterol "
          "level in the population is different from 200 mg/dL.")
else:
    print("Fail to reject the null hypothesis.")
    print("There is not enough evidence to conclude that the average cholesterol "
          "level in the population is different from 200 mg/dL.")

Reject the null hypothesis. There is statistically significant evidence that the average cholesterol level in the population is different from 200 mg/dL.

Limitations of Hypothesis Testing

  • Although a useful technique, hypothesis testing does not offer a comprehensive grasp of the topic being studied. Without fully reflecting the intricacy or whole context of the phenomena, it concentrates on certain hypotheses and statistical significance.
  • The accuracy of hypothesis testing results is contingent on the quality of available data and the appropriateness of statistical methods used. Inaccurate data or poorly formulated hypotheses can lead to incorrect conclusions.
  • Relying solely on hypothesis testing may cause analysts to overlook significant patterns or relationships in the data that are not captured by the specific hypotheses being tested. This limitation underscores the importance of complementing hypothesis testing with other analytical approaches.

Hypothesis testing stands as a cornerstone in statistical analysis, enabling data scientists to navigate uncertainties and draw credible inferences from sample data. By systematically defining null and alternative hypotheses, choosing significance levels, and leveraging statistical tests, researchers can assess the validity of their assumptions. The article also elucidates the critical distinction between Type I and Type II errors, providing a comprehensive understanding of the nuanced decision-making process inherent in hypothesis testing. The real-life example of testing a new drug’s effect on blood pressure using a paired T-test showcases the practical application of these principles, underscoring the importance of statistical rigor in data-driven decision-making.

Frequently Asked Questions (FAQs)

1. What are the 3 types of hypothesis tests?

There are three types of hypothesis tests: right-tailed, left-tailed, and two-tailed. Right-tailed tests assess if a parameter is greater, left-tailed if lesser. Two-tailed tests check for non-directional differences, greater or lesser.

2. What are the 4 components of hypothesis testing?

  • Null Hypothesis ( [Tex]H_o [/Tex] ): No effect or difference exists.
  • Alternative Hypothesis ( [Tex]H_1 [/Tex] ): An effect or difference exists.
  • Significance Level ( [Tex]\alpha [/Tex] ): The risk of rejecting the null hypothesis when it is true (Type I error).
  • Test Statistic: A numerical value representing the observed evidence against the null hypothesis.

3. What is hypothesis testing in ML?

A statistical method to evaluate the performance and validity of machine learning models. It tests specific hypotheses about model behavior, such as whether features influence predictions or whether a model generalizes well to unseen data.

4. What is the difference between Pytest and Hypothesis in Python?

Pytest is a general-purpose testing framework for Python code, while Hypothesis is a property-based testing framework for Python that generates test cases based on specified properties of the code.


Thank you for visiting nature.com. You are using a browser version with limited support for CSS. To obtain the best experience, we recommend you use a more up to date browser (or turn off compatibility mode in Internet Explorer). In the meantime, to ensure continued support, we are displaying the site without styles and JavaScript.

  • Open access
  • Published: 10 August 2024

A new look at the CO 2 haven hypothesis using gravity model European Union and China

  • Somayeh Avazdahandeh 1  

Scientific Reports volume  14 , Article number:  18610 ( 2024 ) Cite this article


  • Environmental economics
  • Environmental sciences
  • Environmental social sciences

The pollution haven hypothesis (PHH) is defined as follows: a reduction in trade costs results in production of pollution-intensive goods shifting towards countries with more lenient environmental laws. Previous studies examined this hypothesis in the form of Kuznets' environmental hypothesis, testing the effect of foreign direct investment (FDI) on carbon emissions. This study instead investigates the PHH from a new perspective, using Newton's gravity model to test it. The basis of the PHH is the difference in the environmental standards of two trading partners. One indicator used to measure the severity of a country's environmental laws is carbon emission intensity: the stricter the country's laws, the lower the index value. To test the hypothesis, empirical data from China and OECD countries are used, with China serving as the pollution haven for the countries of the Organisation for Economic Co-operation and Development (OECD). I found that the environmental laws of host and guest countries have different effects on FDI. In addition, transportation costs have a negative effect on the FDI flow. Finally, the results confirm the hypothesis under the gravity model.


Introduction.

The discussion of the link between environment and trade started in the 1970s. The debate became serious in the 1990s, when trade openness was expanded by organizations such as the North American Free Trade Agreement (NAFTA). Copeland and Taylor 1 first introduced the PHH in the context of North–South trade under NAFTA. Theirs was the first article to link the severity of environmental rules and trade patterns with the level of pollution in a country 2 . They proved in their first and second propositions that the higher-income country chooses stricter environmental protection and specializes in relatively clean commodities 1 . These two propositions are, in effect, the pollution haven hypothesis. As stated by the PHH, the movement of dirty industries from advanced to developing countries happens by way of the trade of commodities and foreign direct investment (FDI) 2 . Two factors form the basis of the pollution haven hypothesis: the first is foreign investment, and the second is environmental laws. The first factor, foreign direct investment (FDI), is an integral part of an open and effective international economic process and an important catalyst for development 3 . FDI flow is an increase in the book value of the net worth of investments in one country held by investors of another country, where the investments are under the managerial control of the investors 4 . Developing and emerging economies have increasingly come to see FDI as a factor in economic development, income growth, and employment 3 . The mutual connection between trade and FDI is an important feature of globalization. Empirical study shows that, until the mid-1980s, international trade generated direct investment; after that era, the cause-and-effect relationship reversed, and direct investment has had a large impact on international trade 3 .
In today's economies, trade plays an increasingly important role in shaping the economic and social performance and prospects of countries around the world, especially developing countries 5 .

The second factor is environmental laws (EL). Generally, international trade has two consequences for the environment. First, trade can improve environmental quality by exporting clean technologies from developed countries to developing countries 6 , 7 . Second, international relations can increase pollution. More developed countries pay more attention to environmental standards: as economic growth rises, people demand a higher quality of environment. The opposite holds in less developed countries, which trade environmental quality for higher economic growth. According to the environmental Kuznets curve, developed countries are located after the turning point, while less developed countries are located before it. In other words, more developed countries have stricter environmental laws than less developed countries. To give a definition: environmental laws are a series of policies or standards adopted by the government to protect the environment 8 . Environmental regulations have effectively restricted the damage enterprises cause to the environment and play an important role in protecting it 8 , 9 .

Paying more or less attention to environmental standards recalls the pollution haven hypothesis. The hypothesis, which emerged in the 1990s, pivots on the relocation of polluting manufacturing from developed countries with strict environmental laws to developing countries with lax rules 10 . It holds that lenient environmental rules in developing countries encourage investment in emission-intensive industries from developed countries, especially in the context of a growing number of countries committing to carbon neutrality before 2050 11 .

From the PHH standpoint, stringent environmental rules in developed countries lead polluting industries to relocate from developed to developing countries and cause pollution to rise in developing countries 2 . According to the definition of the PHH, what causes the movement of FDI between two countries is the severity of environmental laws; in other words, their environmental regulations determine the FDI flow between them. As mentioned earlier, the PHH corresponds to Copeland and Taylor 1 's first and second propositions. Therefore, the purpose of this article is to model the pollution haven hypothesis accurately. By carefully investigating this hypothesis and previous studies (Table 1 ), I found two important gaps: first, none of the previous studies has included the two main factors of the hypothesis (FDI and EL) in its modeling simultaneously. Second, they assumed that FDI has an effect on EL; that is, they treated FDI as an independent variable, whereas FDI should be treated as the dependent variable. This research will therefore give future researchers a more accurate view of the PHH. They can examine the effects of various social, economic, environmental, and political variables in this new model. The framework presented here allows researchers to better understand the distinction between the environmental Kuznets hypothesis and the pollution haven hypothesis.

So far, many researchers have investigated the pollution haven hypothesis, using different indicators for its first factor. Some applied FDI directly, for example Usama and Tang 12 ; Solarin et al. 13 ; Benzerrouk et al. 7 ; Shijie et al. 14 ; Temurlenk and Lögün 15 ; Yilanci et al. 16 ; Ali Nagvi et al. 17 ; Chirilus and Costea 18 ; Campos‑Romero et al. 19 ; Liu et al. 20 ; Soto and Edeh 21 ; Ozcelik et al. 22 . Others used polluting goods and activities as proxies, among them Shen et al. 23 ; Sadik-Zada and Ferrari 24 ; Zhang and Wang 25 ; Bhat and Tantr 10 ; Moise 26 ; Hamaguchi 27 . In all of these studies, the first PHH factor is treated as an independent variable. These studies are summarized in Table 1 .

In light of previous research, the innovation of this study lies in its methodology, which is based on Newton's gravity model. I present the pollution haven hypothesis in terms of Newton's gravity model, which is widely used in trade research. There are always two partners in trade: one country is the importer and the other the exporter. The pollution haven hypothesis is also a trade phenomenon: the importer becomes a pollution haven for the exporting country. In fact, the host country becomes the trading partner's haven for investment in polluting industries.

I selected the host country based on the pollution emission and foreign direct investment data in 2020 (Figs. 1 , 2 ).

figure 1

( a ) Ten countries with the most CO 2 (kt) emissions in 2020, ( b ) ten countries with the most share of CO 2 emissions (percentage) in 2020.

figure 2

( a ) Ten countries with the highest foreign direct investment (FDI) net inflows in 2020, ( b ) ten countries with the highest share of foreign direct investment (FDI) inflows in 2020.

Countries have different levels of CO 2 emissions based on their activities. As Fig.  1 a shows, China had the highest CO 2 emissions in 2020 (about 13 million kilotons). China's share of the world's total emissions is more than 29% (Fig.  1 b). The United States is next with a share of 12%; according to Fig.  1 b, China's share is roughly two and a half times that of the United States.

Foreign direct investment is an important indicator for identifying a pollution haven. I reviewed the FDI of the world's countries in 2020; the results are shown in Fig.  2 a,b. Figure  2 a indicates that China had the highest FDI net inflows in 2020, at 2.5 thousand billion (billions of United States dollars). According to Fig.  2 b, China's FDI share in 2020 was 21%.

Following the introduction, the results section is discussed in detail. The research findings fall into three categories: (1) the validity of the pollution haven hypothesis; (2) the effect of control variables on foreign direct investment; (3) the effect of the main independent variables on foreign direct investment. Generally, the results showed that the severity of environmental laws has different effects on FDI flow. Since the direction of FDI flow is from OECD countries to China, increasing the severity of the guest countries' environmental regulations will increase FDI. On the other hand, stricter environmental laws in China (the host) reduce the flow of FDI.

Results and discussion

This section presents and discusses the main findings of the empirical analysis. In this research, I investigated the pollution haven hypothesis using a gravity model approach. The variables collected were CO 2 emissions (World Bank), GDP (World Bank), trade costs (World Trade Organization), FDI inflows from OECD countries to China (Organisation for Economic Co-operation and Development data), urbanization, represented by the number of individuals living in cities (World Bank), trade openness, calculated as (X + M)/GDP, where X is exports and M is imports (World Bank), and share of manufacturing (World Bank). Table 2 indicates the measurement unit of each variable and its source.
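The trade-openness formula above is straightforward to compute; the figures below are hypothetical, chosen only to show the calculation:

```python
def trade_openness(exports: float, imports: float, gdp: float) -> float:
    """Trade openness = (X + M) / GDP."""
    return (exports + imports) / gdp

# Hypothetical figures, all in billions of USD for one country-year
to = trade_openness(exports=2590.0, imports=2060.0, gdp=14720.0)
# roughly 0.32: total trade is about a third of GDP
```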

Because the data have two dimensions (cross-section and period), I used the F-Limer test to determine whether a panel specification is appropriate. The null hypothesis of the pooled model was rejected and the panel-data model was accepted, so I used panel regression. The Hausman test was then used to determine the type of effects (random or fixed); the result indicated random effects for both cross-section and period. Next, I checked the stationarity of the variables to prevent spurious regression. The Levin, Lin and Chu test, applied to the four variables, showed that they are stationary in levels (with intercept and trend).

Finally, I estimated the model using panel data. The regression results are shown in Table 3 . FDI ijt is the dependent variable; ER it , ER jt , TC 2 ijt , \({\text{lnUR}}_{\text{jt}}\) , \({\text{lnTO}}_{\text{jt}}\) and \({\text{lnShM}}_{\text{jt}}\) are the independent variables. Table 3 indicates that all coefficients are significant at the 5% level and R 2 is 0.79, meaning the independent variables explain 79% of the variation in the dependent variable.

The coefficient for \({lnER}_{it}\) is − 0.54. The negative sign shows that as \({ER}_{it}\) increases, FDI ijt decreases: when the guest countries' (OECD) environmental regulations tighten, the FDI flow to the host country (China) decreases. The coefficient for \({lnER}_{jt}\) is 0.90; the positive sign indicates that if \({ER}_{jt}\) increases, FDI ijt also increases. The results on the effect of environmental laws on trade are in line with previous studies. For example, in Shen et al. 23 's study, the coefficient sign for environmental regulation was positive. Sadik-Zada and Ferrari 24 indicated that environmental policy stringency has a positive effect on carbon trade. Bhat and Tantr 10 concluded that environmental policy has a positive effect on pollution-intensive exports. The coefficient for transportation costs was − 0.11: as transportation costs increase, the FDI flow from i (guest country) to j (host country) decreases. The sign of the \({\text{lnTC}}_{\text{ijt}}^{2}\) coefficient is consistent with the following studies: Nuroğlu and Kunst 28 ; Wang et al. 29 ; Golovko and Sahin 30 ; Wani and Yasmin 31 . Among the control variables, the urbanization coefficient was not significant. The coefficients for TO and ShM were positive and significant: as trade openness increases, FDI also increases. Benzerrouk et al. 7 indicated that an increase in trade and FDI increases the developed countries' polluting projects destined for developing countries. Moise 26 showed that trade openness statistically significantly increases CO2 emissions. In addition, the ShM coefficient was estimated to be positive: if the share of manufacturing in GNP increases, foreign direct investment increases. Shijie et al. 14 concluded that the positive effect of FDI on the environment of dominant industrial agglomeration first increases and then decreases. Sawhney and Rastogi 32 indicated that the increase in trade liberalization, the growth of American industries, and FDI has increased pollution emissions in India.

Because the model is specified in logarithms, the coefficients express elasticities. The \({LnER}_{it}\) and \({ln ER}_{jt}\) coefficients were − 0.54 and 0.90, respectively. That is, if the environmental laws of the guest country are tightened by 1%, FDI flow from the guest country to the host country will decrease by 54%. In addition, if the environmental laws of the host country are relaxed by 1%, FDI flow from the guest country to the host country will increase by 90%. The \({lnTC}_{ijt}^{2}\) coefficient was − 0.11, indicating that with a 1% increase in transportation costs, the FDI flow from the guest country to the host country decreases by 11%. The coefficients for the control variables \({lnTO}_{jt}\) and \({lnShM}_{jt}\) were 0.77 and 0.51, respectively: if trade openness and the share of manufacturing increase by 1%, foreign direct investment will increase by 77% and 51%. Thus, increasing trade openness reveals that lax environmental enforcement in developing countries attracts investment in emission-intensive industries from developed countries.
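The elasticity interpretation of a log-log coefficient can be checked numerically: a p% change in a regressor multiplies the dependent variable by (1 + p/100) raised to the coefficient. The sketch below applies this to the reported − 0.54 coefficient for a 1% change:

```python
def loglog_effect(coef: float, pct_change: float) -> float:
    """Multiplicative effect on the dependent variable when one regressor
    in a log-log model changes by pct_change percent, all else equal."""
    return (1.0 + pct_change / 100.0) ** coef

# Effect on FDI of a 1% tightening of ER_it, using the paper's coefficient
ratio = loglog_effect(-0.54, 1.0)
```

For small changes the exact multiplicative effect is close to the linear approximation of the coefficient times the percentage change.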

This paper fills a research gap by assessing the pollution haven hypothesis based on its original assumptions. In previous studies, the hypothesis was tested in the form of the Kuznets curve, whereas the concept of a pollution haven is a flow of foreign investment between the haven seeker and the haven giver, the main driver of which is the difference in environmental laws. When we talk about flows, the gravity model is the natural choice, as with power flows, trade flows, labor flows, and FDI flows (see Fig.  4 in the method section). Trade has permitted countries with higher emission intensities to export goods or investment to countries with lower emission intensities, which may increase worldwide carbon emissions 11 . In this research, we examined foreign direct investment between OECD countries and China; in fact, we investigated the effect of environmental laws on FDI in the form of the pollution haven hypothesis. The indicator chosen for environmental regulations was emission intensity. Figure  3 shows the carbon emission intensity of guest (OECD) and host (China) countries in 2016–2020 (kt/10 billion $). Figure  3 (1–9) is for OECD countries and Fig.  3 (10) is for China. As Fig.  3 indicates, emission intensity decreased in all selected countries over 2016–2020.

figure 3

Carbon emission intensity of guest (OECD) and host (China) countries in 2016–2020. (1–9) is for OECD countries. (10) is also for China. Carbon emission intensity in kilotons per 10 billion dollars.

The emission intensity is in the range 782–5439 for OECD countries, while it is in the range 5510–6308 kilotons per 10 billion dollars for China. The maximum emission intensity of the OECD countries is lower than the minimum emission intensity of China, meaning that the environmental rules of the guest countries are stricter than China's. The pollution haven hypothesis thus suggests that weak environmental enforcement in developing countries attracts investment in emission-intensive industries from developed countries 11 . The purpose of this paper is to model the pollution haven hypothesis in the form of a gravity model; for this purpose, the effect of the environmental laws of host and guest countries on FDI is investigated. The results showed that the severity of environmental laws has different effects on FDI flow. Since the direction of FDI flow is from OECD countries to China, increasing the severity of the guest countries' environmental regulations will increase FDI, while stricter environmental laws in China (the host) reduce the flow of FDI. After presenting the results and comparing them with previous studies, the results should be tested for robustness. The main empirical findings are robust to two different multicollinearity checks: (i) cross-correlation across variables and (ii) the variance inflation factor (VIF) of each variable 33 . VIF is a measure of the amount of multicollinearity in a regression model; multicollinearity exists when there is correlation between multiple independent variables.
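Carbon emission intensity as used here (kilotons of CO 2 per 10 billion dollars of GDP) can be computed directly; the figures below are hypothetical:

```python
def emission_intensity(co2_kt: float, gdp_usd: float) -> float:
    """Carbon emission intensity in kilotons of CO2 per 10 billion USD of GDP."""
    return co2_kt / (gdp_usd / 1e10)

# Hypothetical: 10,000,000 kt of CO2 against a 14.7 trillion USD GDP
ei = emission_intensity(10_000_000, 14.7e12)  # ≈ 6800 kt per 10 billion USD
```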

The variance inflation factor is calculated as follows:

\({VIF}_{i}=\frac{1}{1-{R}_{i}^{2}}\)

where \({R}_{i}^{2}\) is the variance explained when the predictor of interest (indexed by i) is regressed on the remaining predictors. VIF values cannot be less than 1.0 and cannot be negative; a VIF of 1.0 represents the ideal situation of no correlation with the other predictors, and can only occur when \({R}_{i}^{2}\) equals 0, which implies that the given predictor has no linear relationship with the other predictors in the model. Tolerance is simply the reciprocal of VIF and is thus computed as

\({Tolerance}_{i}=\frac{1}{{VIF}_{i}}\)

Whereas large values of VIF are undesirable, large tolerances are preferable to small ones; it also follows that the maximum value of tolerance is 1.0 34 . As shown in Table 4 , the mean variance inflation factor (VIF) in our model equals 2.34, and the maximum VIF among the explanatory variables is 4.62, which lies within the acceptable range.

In this study, the pollution haven hypothesis was investigated from a new perspective; the hypothesis was modeled based on its theoretical foundations. Copeland and Taylor 1 first introduced the PHH, in the first article to link the severity of environmental rules and trade patterns with the level of pollution in a country. According to Copeland and Taylor 1 's article, two factors, foreign direct investment and environmental laws, constitute this hypothesis: the environmental laws of countries determine the attraction of foreign direct investment. Therefore, FDI between two countries was considered a function of their environmental laws, and the commercial gravity model was used to achieve the research objectives. The innovation of this research is, first, that none of the previous authors included the two main factors of the hypothesis (FDI and EL) in their modeling simultaneously; second, that they assumed FDI has an effect on EL, treating FDI as an independent variable, whereas FDI should be treated as the dependent variable. This research therefore gives future researchers a more accurate view of the PHH. The results showed that if the environmental laws of the guest country are tightened by 1%, FDI flow from the guest country to the host country will decrease by 54%. In addition, if the environmental laws of the host country are relaxed by 1%, FDI flow from the guest country to the host country will increase by 90%. The \({lnTC}_{ijt}^{2}\) coefficient was − 0.11, indicating that with a 1% increase in transportation costs, the FDI flow from the guest country to the host country decreases by 11%. The coefficients for the control variables \({lnTO}_{jt}\) and \({lnShM}_{jt}\) were 0.77 and 0.51, respectively: if trade openness and the share of manufacturing increase by 1%, foreign direct investment will increase by 77% and 51%.
After presenting the results and comparing them with previous studies, the results should be tested for robustness. The mean VIF in our model equals 2.34, and the maximum VIF among the explanatory variables is 4.62, which lies within the acceptable range.

Based on the results of the research, several suggestions are offered. Rigid environmental laws could reduce FDI; on the other hand, FDI is a key factor for economic growth and development. Hence, FDI can be redirected from polluting sectors to clean sectors such as services, labor-intensive industries, and renewable energy, and green technology investment should be encouraged. The manufacturing sector is the largest contributor to global emissions when direct and indirect emissions are included. The key transformations needed to move industry toward environmentally friendly goals include electrifying industry, transforming production processes, using new fuels, accelerating material efficiency, scaling up energy efficiency everywhere, and promoting circular material flows. Trade openness, like industry growth, increases FDI. Furthermore, to reduce the impact of trade openness and economic growth on environmental sustainability, it is important to build environmentally friendly production systems that foster green technology knowledge across all economic sectors, and receiving countries should improve their absorption capacity. For future researchers: the regression model chosen here is linear, so future work could apply other regression methods such as spatial regression, since one of the variables of the gravity model is the distance between countries, and obtain more accurate results. The dependent variable, FDI, is also affected by key qualitative factors such as government structure and investment management, whose effects can be investigated in future studies.

This section describes the empirical model, which is borrowed from the common literature on the gravity model. In economics, the gravity model forecasts bilateral trade flows based on the sizes of the economies (usually measured by GDP) and the distance between the two locations 33 , as in Eq. ( 3 ):

\({G}_{ij}=\frac{{s}_{i}\,{s}_{j}}{{d}_{ij}}\)

In the above equation, the gravitational power \(G\) (the amount of trade between regions) is positively proportional to the sizes of the regions (\({s}_{i}\) and \({s}_{j}\)) and negatively proportional to the distance \({d}_{ij}\) between region i and region j. In this research, the components of the gravity model differ from those of previous research; in fact, that difference is the innovation of this study. Figure  4 compares this model with previous gravity models.

figure 4

Newton's gravity model, Trade’s gravity model, FDI’s gravity model.

Two important factors in the pollution haven hypothesis are foreign direct investment between two countries and the severity of the countries' environmental laws. Differences in attention to environmental quality, coupled with trade liberalization, may lead to the creation of pollution havens, with polluting activity relocating to areas with weak regulation 35 , 36 . If we treat gravity as FDI, what causes the attraction of foreign direct investment between two countries is the strictness of the countries in implementing environmental regulations. Eq. ( 3 ) is therefore modified as follows:

where i is country i (OECD), j is country j (China) and t is time (2016–2020). FDI represents the foreign direct investment flow between countries. ER is the severity of environmental laws, CO 2 is pollution emissions, and UR, TO and ShM are urbanization, trade openness and share of manufacturing. TC denotes the trade costs between countries and GDP is gross domestic product. From Eq. ( 2 ), the data are converted into log terms following standard econometric practice, and the equation is established as follows (Eq.  6 ):

Many studies used FDI in their models, for example Usama and Tang 12 ; Solarin et al. 13 ; Benzerrouk et al. 7 ; Temurlenk and Lögün 15 ; Yilanci et al. 16 ; Ali Nagvi et al. 17 . However, they considered FDI an independent variable, whereas in this study it is the dependent variable. In Eq. ( 6 ), \(ER\) is the pollution emission intensity, calculated in Eq. ( 5 ). Previous studies have used several indicators to measure environmental regulations. For example, Guo et al. 37 took the pollutant discharge fee and total investment in pollution control to represent environmental rules. Sun et al. 38 applied the number of polluting enterprises. Nie et al. 39 used ISO14001 environmental management system certification. Xie et al. 40 used the formation (or failure of formation) of fixed assets as a measure of environmental regulation. Sadik-Zada and Ferrari 24 used the environmental policy stringency index as a proxy for the PHH.

I measure these rules with a pollution intensity index, as in Cole and Elliott 41 's study: the proportion of pollution emissions in industrial output value can serve as a proxy for environmental regulations 8 , 41 . In some studies, for example Shen et al. 23 and Bhat and Tantr 10 , emission intensity has been used to measure the severity of environmental regulations. TC denotes transportation costs, a proxy for the distance between countries; using the distance variable itself in the model would cause collinearity. This study includes annual data for 35 OECD countries (Fig.  5 ) and China over 2016–2020. These countries and this period were chosen for three reasons: first, China has been the largest emitter of greenhouse gases in recent years; second, China has been the largest importer of foreign investment in recent years; third, OECD countries were selected because exact information on their FDI inflows to China was available.

figure 5

Selected countries from among the OECD countries that make foreign direct investment in China.

Data availability

The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.

Copeland, B. R. & Taylor, M. S. North-South trade and the environment. Q. J. Econ. 109 , 755–787 (1994).


Gill, F. L., Viswanathan, K. K. & Abdul Karim, M. Z. The critical review of the pollution haven hypothesis. Int. J. Energy Econ. Policy 8 (1), 167–174 (2018).


OECD (Organization for Economic Co- Operation and Development). Foreign direct investment for development maximizing benefits, minimizing costs overview. https://www.oecd.org/investment/investmentfordevelopment/1959815.pdf (2002).

IMF (International Monetary Fund). Foreign direct investment in the world economy. file:///C:/Users/Hpe/Downloads/9781557754998-ch07.pdf (2000).

United Nations. Developing countries in international trade and development index. United Nations conference on trade and development. https://unctad.org/system/files/official-document/ditctab20051_en.pdf (2005).

Wang, Q. & Zhang, F. Free trade and renewable energy: A cross-income levels empirical investigation using two trade openness measures. Renew. Energy 168 , 1027–1039 (2021).

Benzerrouk, Z., Abid, M. & Sekrafi, H. Pollution haven or halo effect? A comparative analysis of developing and developed countries. Energy Rep. 7 , 4862–4871. https://doi.org/10.1016/j.egyr.2021.07.076 (2021).

Meng, F., Xu, Y. & Zhao, G. Environmental regulations, green innovation and intelligent upgrading of manufacturing enterprises: Evidence from China. Sci. Rep. 10 , 14485. https://doi.org/10.1038/s41598-020-71423-x (2020).


Jahanshahi, A. A. & Brem, A. Antecedents of corporate environmental commitments: The role of customers. Int. J. Environ. Res. Public Health 15 , 1191 (2018).

Bhat, V. & Tantr, M. L. Pollution haven hypothesis and the bilateral trade between India and China. J. Curr. Chin. Affairs 1 , 1–26. https://doi.org/10.1177/18681026231188450 (2023).

Meng, J. et al. The narrowing gap in developed and developing country emission intensities reduces global trade’s carbon leakage. Nat. Commun. 14 , 3775. https://doi.org/10.1038/s41467-023-39449-7 (2023).

Usama, A. M. & Tang, C. F. Investigating the validity of pollution haven hypothesis in the gulf cooperation council (GCC) countries. Energy Policy 60 , 813–819. https://doi.org/10.1016/j.enpol.2013.05.055 (2013).

Solarin, S. A., Al-Mulali, U., Musah, I. & Ozturk, I. Investigating the pollution haven hypothesis in Ghana: An empirical investigation. Energy 124 , 706–719. https://doi.org/10.1016/j.energy.2017.02.089 (2017).

Shijie, L., Hou, D., Jin, W. & Shahid, R. Impact of industrial agglomeration on environmental pollution from perspective of foreign direct investment—a panel threshold analysis for Chinese provinces. Environ. Sci. Pollut. Res. 28 (41), 58592–58605. https://doi.org/10.1007/s11356-021-14823-4 (2021).

Temurlenk, M. S. & Lögün, A. An analysis of the pollution haven hypothesis in the context of Turkey: A nonlinear approach. Econ. Bus. Rev. 8 (22), 5–23. https://doi.org/10.18559/ebr.2022.1.2 (2022).

Yilanci, V., Cutcu, I., Cayir, B. & Saglam, M. S. Pollution haven or pollution halo in the fishing footprint: Evidence from Indonesia. Mar. Pollut. Bull. 188 , 114626. https://doi.org/10.1016/j.marpolbul.2023.114626 (2023).


Ali Nagvi, S. A. et al. Environmental sustainability and biomass energy consumption through the lens of pollution Haven hypothesis and renewable energy-environmental Kuznets curve. Renew. Energy 212 , 621–631. https://doi.org/10.1016/j.renene.2023.04.127 (2023).

Chirilus, A. & Costea, A. The effect of FDI on environmental degradation in Romania: Testing the pollution haven hypothesis. Sustainability 15 , 10733. https://doi.org/10.3390/su151310733 (2023).

Campos-Romero, H., Mourao, P. R. & Rodil-Marzabal, O. Is there a pollution haven in European Union global value chain participation? Environment. Dev. Sustain. https://doi.org/10.1007/s10668-023-03563-9 (2023).

Liu, P., Rahman, Z. U., Joźwik, B. & Doğan, M. Determining the environmental effect of Chinese FDI on the Belt and Road countries CO2 emissions: an EKC-based assessment in the context of pollution haven and halo hypotheses. Environ. Sci. Europe 36 (48), 1–12. https://doi.org/10.1186/s12302-024-00866-0 (2024).

Soto, G. H. & Edeh, J. Assessing the foreign direct investment-load capacity factor relationship in Spain: Can FDI contribute to environmental quality?. Environ. Dev. Sustain. https://doi.org/10.1007/s10668-024-04680-9 (2024).

Ozcelik, O. et al. Testing the validity of pollution haven and pollution halo hypotheses in BRICMT countries by Fourier Bootstrap AARDL method and Fourier Bootstrap Toda-Yamamoto causality approach. Air Qual. Atmos. Health. https://doi.org/10.1007/s11869-024-01522-5 (2024).

Shen, J., Wang, S., Liu, W. & Chu, J. Does migration of pollution-intensive industries impact environmental efficiency? Evidence supporting “Pollution Haven Hypothesis”. J. Environ. Manag. 242 , 142–152. https://doi.org/10.1016/j.jenvman.2019.04.072 (2019).

Sadik-Zada, E. R. & Ferrari, M. Environmental policy stringency, technical progress and pollution haven hypothesis. Sustainability 12 , 3880. https://doi.org/10.3390/su12093880 (2020).

Zhang, K. & Wang, X. Pollution haven hypothesis of global CO2, SO2, NOx, evidence from 43 economies and 56 sectors. Int. J. Environ. Res. Public Health 18 , 6552. https://doi.org/10.3390/ijerph18126552 (2021).

Article   PubMed   PubMed Central   Google Scholar  

Moise, M. L. Examining the agriculture-induced environment curve hypothesis and pollution haven hypothesis in Rwanda: The role of renewable energy. Moise Carbon Res. 2 (50), 1–14. https://doi.org/10.1007/s44246-023-00076-y (2023).

Article   ADS   Google Scholar  

Hamaguchi, Y. A water pollution haven hypothesis in a dynamic agglomeration model for fisheries resource management. Environ. Dev. Sustain. https://doi.org/10.1007/s10668-024-04788-y (2024).

Nuroğlu, E. & Kunst, R. M. Competing specifications of the gravity equation: A three-way model, bilateral interaction effects, or a dynamic gravity model with time-varying country effects?. Empirical Econ. 46 (2), 733–741. https://doi.org/10.1007/s00181-013-0696-3 (2013).

Wang, Z. et al. Pollution haven hypothesis of domestic trade in China: A perspective of SO2 emissions. Sci. Total Environ. 663 , 198–205. https://doi.org/10.1016/j.scitotenv.2019.01.287 (2019).

Article   ADS   PubMed   Google Scholar  

Golovko, A. & Sahin, H. Analysis of international trade integration of Eurasian countries: Gravity model approach. Euras. Econ. Rev. 11 (3), 519–548. https://doi.org/10.1007/s40822-021-00168-3 (2021).

Wani, S. H. & Yasmin, E. India’s trade with South and Central Asia: An application of institution-based augmented gravity model. Future Bus J. 9 , 77. https://doi.org/10.1186/s43093-023-00257-6 (2023).

Sawhney, A. & Rastogi, R. Is India specialising in polluting industries? Evidence from US-India bilateral trade. World Econ. 38 (2), 360–378 (2015).

Cantore, N & Cheng, C. F. C. International trade of environmental goods in gravity models. J. Environ. Manag. 223 , 1047–1060 (2018).

Denis, D. J.. Multiple linear regression. Applied univariate, bivariate, and multivariate statistics, 286–315. https://doi.org/10.1002/9781119583004.ch (2021).

Copeland, B. R. & Taylor, M. S. Trade, growth, and the environment. J. Econ. Lit. 42 , 7–71 (2004).

Taylor, M. S. Unbundling the pollution haven hypothesis. Adv. Econ. Anal. Policy 4 (2), 1–26 (Reprinted in The Economics of Pollution Havens, Don Fullerton (Ed.), Edward Elgar Publishing, 2006) (2004).

Guo, W., Dai, H. & Liu, X. Impact of different types of environmental regulation on employment scale: An analysis based on perspective of provincial heterogeneity. Environ. Sci. Pollut. Res. https://doi.org/10.1007/s11356-020-10428-5 (2020).

Sun, W., Yang, Q., Ni, Q. & Kim, Y. The impact of environmental regulation on employment: An empirical study of China’s Two Control Zone policy. Environ. Sci. Pollut. Res. https://doi.org/10.1007/s11356-019-05840-5 (2019).

Nie, G.-Q., Zhu, Y.-F. & Wu, W.-P. Impact of voluntary environmental regulation on green technological innovation: Evidence from Chinese manufacturing enterprises. Front. Energy Res. 10 , 889037. https://doi.org/10.3389/fenrg.2022.889037 (2022).

Xie, R., Yuan, Y. & Huang, J. Different types of environmental regulations and heterogeneous influence on “green” productivity: Evidence from China. Ecol. Econ. 132 , 104–112. https://doi.org/10.1016/j.ecolecon.2016.10.019 (2017).

Cole, M. A. & Elliott, R. J. Do environmental regulations influence trade patterns? Testing old and new trade theories. World Econ. 26 , 1163–1186 (2003).


Author information

Authors and affiliations

Department of Agricultural Economics, Tarbiat Modares University, Tehran, Iran

Somayeh Avazdahandeh


Contributions

Somayeh Avazdahandeh: conceived and designed the analysis; collected the data; contributed data and analysis tools; performed the analysis; wrote the paper.

Corresponding author

Correspondence to Somayeh Avazdahandeh .

Ethics declarations

Competing interests

The author declares no competing interests.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/ .


About this article

Cite this article

Avazdahandeh, S. A new look at the CO2 haven hypothesis using gravity model European Union and China. Sci. Rep. 14, 18610 (2024). https://doi.org/10.1038/s41598-024-69611-0


Received: 30 January 2024

Accepted: 07 August 2024

Published: 10 August 2024

DOI: https://doi.org/10.1038/s41598-024-69611-0





HYPOTHESIS AND THEORY article

From diagnosis to dialogue – reconsidering the DSM as a conversation piece in mental health care: a hypothesis and theory

Lars Veldmeijer*

  • 1 Department of Psychiatry, Utrecht University Medical Center, Utrecht, Netherlands
  • 2 Digital Innovation in Health, NHL Stenden University of Applied Sciences, Leeuwarden, Netherlands
  • 3 Department of Research and Innovation, KieN VIP Mental Health Care Services, Leeuwarden, Netherlands
  • 4 Department of Child and Family Welfare, University of Groningen, Groningen, Netherlands

The Diagnostic and Statistical Manual of Mental Disorders, abbreviated as the DSM, is one of mental health care’s most commonly used classification systems. While the DSM has been successful in establishing a shared language for researching and communicating about mental distress, it has its limitations as an empirical compass. In the transformation of mental health care towards a system that is centered around shared decision-making, person-centered care, and personal recovery, the DSM is problematic as it promotes the disengagement of people with mental distress and is primarily a tool developed for professionals to communicate about patients instead of with patients. However, the mental health care system is set up in such a way that we cannot do without the DSM for the time being. In this paper, we aimed to describe the position and role the DSM may have in a mental health care system that is evolving from a medical paradigm to a more self-contained profession in which there is increased accommodation of other perspectives. First, our analysis highlights the DSM’s potential as a boundary object in clinical practice that could support a shared language between patients and professionals. By using the DSM as a conversation piece, patients and professionals can co-create a language that accommodates diverse perspectives. Second, we delve into why people with lived experience should be involved in co-designing spectra of distress. We propose an iterative design and test approach for designing DSM spectra of distress in co-creation with people with lived experience to prevent the development of ‘average solutions’ for ‘ordinary people’. We conclude that transforming mental health care by reconsidering the DSM as a boundary object and conversation piece between activity systems could be a step in the right direction, shifting the power balance towards shared ownership in a participation era that fosters dialogue instead of diagnosis.

1 Introduction

The Diagnostic and Statistical Manual of Mental Disorders (DSM) has great authority in practice. The manual, released by the American Psychiatric Association (APA), provides a common language and a classification system for clinicians to communicate about people’s experiences of mental distress and for researchers to study social phenomena that include mental distress and its subsequent treatments. Before the DSM was developed, a plethora of mental health-related documents circulated in the United States ( 1 ). In response to the confusion that arose from this diversity of documents, the APA Committee on Nomenclature and Statistics standardized these into one manual, the DSM-I ( 2 ). In this first edition of the manual, released in 1952, mental distress was understood as a reaction to stress caused by psychological and interpersonal factors in the person’s life ( 3 ). Although the DSM-I had limited impact on practice ( 4 ), it did set the stage for increasingly standardized categorization of mental disorders ( 5 ).

The DSM-II was released in 1968. In this second iteration, mental disorders were understood as the patient’s attempts to control overwhelming anxiety with unconscious, intrapsychic conflicts ( 3 ). In this edition, the developers attempted to describe the symptoms of disorders and define their etiologies. They had chosen to base them predominantly on psychodynamic psychiatry but also included the biological focus of Kraepelin’s system of classification ( 5 , 6 ). During the development of the DSM-III, the task force added the goal to improve the reliability — the likelihood that different professionals arrive at the same diagnosis — of psychiatric diagnosis, which now became an important feature of the design process. The developers abandoned the psychodynamic view and shifted the focus to atheoretical descriptions, aiming to specify objective criteria for diagnosing mental disorders ( 3 ). Although it was explicitly stated in DSM-III that there was no underlying assumption that the categories were validated entities ( 7 ), the categorical approach still assumed each pattern of symptoms in a category reflected an underlying pathology. The definition of ‘mental illness’ was thereby altered from what one did or was (“you react anxious/you are anxious”) to something one had (“you have anxiety”). This resulted in descriptive, criteria-based classifications that reflected a perceived need for standardization of psychiatric diagnoses ( 5 , 6 ). The DSM-III was released in 1980 and had a big impact on practice ( 6 ) as it inaugurated an attempt to “re-medicalize” American psychiatry ( 5 ).

In hindsight, it is not surprising that after the release of the DSM-III, the funding for psychopharmacological research skyrocketed ( 8 ). At the same time, the debate on the relationship between etiology and description in psychiatric diagnosis continued ( 9 ). As sociologist Andrew Scull ( 10 ) showed, the election of President Reagan prompted a shift towards a focus on biology. His successor, President Bush, claimed that the 1990s were ‘the decade of the brain,’ which fueled a sharp increase in funding for research on genetics and neuroscience ( 10 ). Despite the public push for biological research, the DSM-IV aimed to arrive at a purely atheoretical description of psychiatric diagnostic criteria and was released in 1994 ( 11 ). The task force conducted multi-center field trials to relate diagnoses to clinical practice to improve reliability, which remained a goal of the design process ( 12 ). While the DSM-IV aimed to be atheoretical, researchers argued that the underlying ontologies were easily deducible from their content: psychological and social causality were eliminated and replaced implicitly with biological causality ( 13 ). In the DSM-5, validity — whether a coherent syndrome is being measured and whether it is what it is assumed to be — took center stage ( 10 ). The definition of mental disorder in the DSM-5 was thereby conceptualized as:

“… a syndrome characterized by clinically significant disturbance in an individual’s cognition, emotion regulation, or behavior that reflects a dysfunction in the psychological, biological, or developmental processes underlying mental functioning.” ( 14 ).

With the release of the DSM-5, the debate surrounding the conceptualization of mental distress started all over again, but this can be best seen as re-energizing longstanding debates around the utility and validity of APA nosology ( 15 ). Three important design goals from the DSM-III until current editions can be observed: providing an international language on mental distress, developing a reliable classification system, and creating a valid classification system.

1.1 The limitations of the DSM as an empirical compass

The extent to which these three design goals were attained is only partial. The development of an international language has been accomplished, as the DSM (as well as the International Classification of Diseases) is now widely employed across most Western countries. Although merely based on consensus, the DSM enables — to an extent — professionals and researchers to quantify the prevalence of certain behaviors and find one or more classifications that best suit these observed behaviors. To date, the expectation that diagnostic criteria would be empirically validated through research has not been fulfilled ( 10 , 16 , 17 ). As stated by the authors of the fourth edition ( 11 ), the disorders listed in the DSM are “valuable heuristic constructs” that serve a purpose in research and practice. However, it was already emphasized in the DSM-IV guidebook that they do not precisely depict nature as it is, being characterized as not “well-defined entities” ( 18 ). Furthermore, while the fifth edition refers to “syndromes,” it is again described that “there is no assumption that each category of mental disorder is a completely discrete entity with absolute boundaries dividing it from other mental disorders or from no mental disorder” ( 14 ). Consequently, there are no laboratory tests or biological markers to set the boundary between ‘normal’ and ‘pathological,’ so research cannot confirm or reject the presumed pathologies underlying the DSM classifications, leaving the validity goal of the design unattained. Moreover, the reliability of the current major edition (the DSM-5) still raises concerns ( 19 ).

By focusing conceptually on mental distress as an individual experience, the DSM task forces have neglected the role of social context, potentially restricting a comprehensive clinical understanding of mental distress ( 20 ). There is mounting evidence and increased attention, however, that the social environment, including its determinants and factors, is crucial for the onset, course, and outcome of mental distress ( 21 – 27 ). Moreover, exposure to factors such as early life adversity, poverty, unemployment, trauma, and minority group position is strongly associated with the onset of mental distress ( 28 , 29 ). It is also established that the range of ontological perspectives — what mental distress is and how it exists — is far broader than what is typically covered in prevailing scientific and educational discussions ( 30 ). These diverse perspectives are also evident in the epistemic pluralism among theoretical models on mental health problems ( 31 ).

1.2 The DSM is problematic in the transformation of mental health care

In the context of contemporary transformations in mental health care, the role of the DSM as an empirical instrument becomes even more problematic. In recent years, significant shifts have been witnessed in mental health care services, with a growing focus on promoting mental well-being, preventive measures, and person-centered and rights-based approaches ( 32 ). In contrast to the 1950s definition of health in which health was seen as the absence of disease, health today is defined as “the ability to adapt and to self-manage” ( 33 ), also known as ‘positive health.’ Furthermore, the recovery movement ( 34 ), person-centered care ( 35 ), and the integration of professionals’ lived experiences ( 36 ) all contributed to a more person-centered mental health care that promotes shared decision-making as a fundamental principle in practice in which no one perspective holds the wisdom. Shared decision-making is “an approach where clinicians and patients share the best available evidence when faced with the task of making decisions, and where patients are supported to consider options, to achieve informed preferences” ( 37 ). To realize and enable a more balanced relationship between professional and patient in shared decision-making, the interplay of healthcare professionals’ and patients’ skills, the support for a patient, and a good relationship between professional and patient are important to facilitate patients’ autonomy ( 38 ). Thus, mental health care professionals in the 21st century should collaborate, embrace ideography, and maximize effects mediated by therapeutic relationships and the healing effects of ritualized care interactions ( 39 ).

The DSM and its designed classifications, as well as their use in the community, can hinder a person-centered approach in which meaning is collaboratively derived for mental health issues, where a balanced relationship is needed, and where decisions are made together. We can demonstrate this with a brief example involving the ADHD classification and its criteria, highlighting how its design tends to marginalize individuals with mental distress, reducing their behavior to objectification from the clinician’s viewpoint. The ADHD classification delineates an ideal self that highly esteems disengagement from one’s feelings and needs, irrespective of contextual factors ( 40 ). This inclination is apparent in the criteria, including criterion 1a concerning inattention: “often avoids, dislikes, or is reluctant to engage in tasks that require sustained mental effort”. This indicates that disliking something is viewed as a symptom rather than a personal preference ( 40 ). Due to a lack of attention to the person’s meaning, a behavior that may be a preference of the individual can become a symptom of a disease. Another instance can be observed in criterion 2c: “often runs about or climbs in situations where it is inappropriate.” Although such behavior might be deemed inappropriate in certain contexts, many individuals derive enjoyment from running and climbing. In this way, ‘normal’ human behavior can be pathologized because there is no room for the meaning of the individual.

A parallel disengagement is evident in the DSM’s viewpoint on individuals with mental distress ( 40 ), as the diagnostic process appears to necessitate no interaction with an individual; instead, it fosters disengagement rather than engagement. For example, according to the DSM-5, when a child is “engaged in especially interesting activities,” the clinician is warned that the ‘symptoms’ may not manifest. Although it appears most fitting to assist the child by exploring their interests, clinicians are instead encouraged to seek situations the child finds uninteresting and assess whether the child can concentrate ( 40 ). If the child cannot concentrate, a ‘diagnosis’ might be made, and intervention can be initiated. This highlights that the design of the DSM promotes professionals to locate individual disorders in a person at face value without considering contextual factors, personal preferences, or other idiosyncrasies in a person’s present or history ( 41 ). It is also apparent that the term ‘symptom’ in the DSM implies an underlying entity as its cause, obscuring that it is a subjective criterion based on human assessment and interpretation ( 42 ). These factors make it difficult for the DSM in its current form to have a place in person-centered mental health care that promotes shared decision-making.

1.3 The problem and hypotheses

Diagnostic manuals like the DSM function similarly to standard operating procedures: they streamline decision-making and assist professionals in making approximate diagnoses when valid and specific measures are lacking or not readily accessible ( 43 ). However, the DSM is often (mis)used as a manual providing explanations for mental distress. This hinders a personalized approach that prioritizes the patient’s needs. Furthermore, this approach does not align with the principles of shared decision-making, as the best available evidence indicates that classifications are not explanations for mental distress. Also, disengagement is promoted in the design of the DSM, which is problematic in the person-centered transformation of mental health care in which a range of perspectives and human-centered interventions are needed. This paper aims to describe the position and role the DSM may have in a mental health care system that is evolving from a medical paradigm to a more self-contained profession in which there is increased accommodation of other perspectives. For this hypothesis and theory paper, we have formulated the following hypotheses:

(1) Reconsidering the DSM as a boundary object that can be used as a conversation piece allows for other perspectives on what is known about mental distress and aligns with the requirements of person-centered mental health care needed for shared decision-making;

(2) Embracing design approaches in redesigning the DSM to a conversation piece that uses spectra of mental distress instead of classifications will stimulate the integration of diverse perspectives and voices in reshaping mental health care.

2 Co-creation of a real common language

The DSM originally aimed to develop a common language, and it has achieved that to some extent, but it now primarily serves as a common language among professionals. This does not align with the person-centered transformation in mental health care, where multiple perspectives come into play ( 32 , 44 ). In this section, we will address our first hypothesis: reconsidering the DSM as a boundary object that can be used as a conversation piece allows for other perspectives on what is known about mental distress and aligns with the requirements of person-centered mental health care needed for shared decision-making. First, we will examine several unintended consequences of classifications. After that, we propose considering the DSM as a boundary object to arrive at a real common language in which the perspective of people with lived experience is promoted. This perspective views the DSM as a conversation piece: a shared subject to which meaning can be attributed from various perspectives, on the premise that no single perspective is omniscient.

2.1 Validation, stigma, and making up people

Classifications influence what we see or do not see, what is valorized, and what is silenced ( 45 ). DSM classifications and the process of getting them can provide validation and relief for some service users, while for others, it can be stigmatizing and distressing ( 46 , 47 ). The stigma people encounter can be worse than the mental problems themselves ( 48 ). The classification of people’s behaviors is not simply a passive reflection of pre-existing characteristics but is influenced by social and cultural factors. The evolution of neurasthenia serves as a fascinating illustration of the notable ontological changes in the design of the DSM, constantly reflecting and constructing reality. Initially, neurasthenia was considered a widespread mental disorder with presumed somatic roots, but it was subsequently discarded from use, only to resurface several decades later as a culture-bound manifestation of individual mental distress ( 49 ). Consequently, certain mental disorders, as depicted in the DSM, may not have existed in the same way as before the classifications were designed. This has been called ‘making up people’, which entails the argument that different kinds of human beings and human acts come into being hand in hand with our invention of the categories labeling them ( 50 ). Furthermore, it is important to consider that whether behavior is deemed dysfunctional or functional is always influenced by the prevailing norms and traditions within a specific society at a given time. Therefore, the individual meaning of the patient in its context is always more important than general descriptions and criteria of functional and dysfunctional behavior (e.g., the ADHD climbing example).

Individuals might perceive themselves differently and develop emotions and behaviors partly due to the classifications imposed upon them. Over time, this can result in alterations to the classification itself, a phenomenon referred to as the classificatory looping effect ( 51 ). Moreover, when alterations are made to the world that align with the system’s depiction of reality, ‘blindness’ can occur ( 45 ). To illustrate, let’s consider an altered scenario of Bowker and Star ( 45 ) in which all mental distress is categorized solely based on physiological factors. In this context, medical frameworks for observation and treatment are designed to recognize physical manifestations of distress, such as symptoms, and the available treatments are limited to physical interventions, such as psychotropic medications. Consequently, in such a design, mental distress may solely be a consequence of a chemical imbalance in the brain, making it nearly inconceivable to consider alternative conceptualizations or solutions. Thus, task forces responsible for designing mental disorder classifications should be acutely aware that they actively contribute to the co-creation of reality with the classifications they construct upon reality ( 49 ).

2.2 Reification and disorderism

Another unintended consequence is the reification of classifications. Reification involves turning a broad and potentially diverse range of human experiences into a fixed and well-defined category. Take, for example, the case of the classification of ADHD and its reification mechanisms (i.e., language choice, logical fallacies, genetic reductionism, and textual silence) ( 42 ). Teachers sometimes promote the classification of ADHD as they believe it acknowledges a prior feeling that something is the matter with a pupil. The classification is then seen as a plausible explanation for the emergence of specific behaviors, academic underperformance, or deviations from the expected norm within a peer group ( 52 , 53 ). At first glance, this may seem harmless. However, it reinforces the notion that a complex and multifaceted set of contextual behaviors, experiences, and psychological phenomena are instead a discrete, objective entity residing in the individual. This is associated with presuppositions in the DSM that are not explicitly articulated, such as attributing a mental disorder to the individual rather than the system, resulting in healthcare that is organized around the individual instead of organized around the system ( 54 ).

In this way, DSM classifications can decontextualize mental distress, leading to ‘disorderism’. Disorderism is defined as the systemic decontextualization of mental distress by framing it in terms of individual disorders ( 55 ). The processes by which people are increasingly diagnosed and treated as having distinct treatable individual disorders, exemplified by the overdiagnosis of ADHD in children and adolescents ( 56 ), while at the same time, the services of psychiatry shape more areas of life, has been called the ‘psychiatrization of society’ ( 57 ). The psychiatrization of society encompasses a pervasive influence whereby the reification and disorderism extend beyond clinical settings and infiltrate various facets of daily life. It is a double-edged sword that fosters increased awareness of mental health issues and seeks to reduce stigma, but at the same time, raises concerns about the overemphasis on medical models, potentially neglecting the broader social, cultural, and environmental factors that contribute to individual well-being as well as population salutogenesis ( 58 ).

2.3 The DSM as a boundary object between activity systems in clinical practice

Rather than a scientific and professional tool for classification, the DSM can be reconsidered as a boundary object. When stakeholders with different objectives and needs have to work together constructively without making concessions, like patients and professionals in person-centered mental health care, objects can play a bridging role. Star and Griesemer ( 59 ) introduced the term boundary objects for this purpose.

“Boundary objects are objects that are plastic enough to adapt to the local needs and constraints of the different parties using them, yet robust enough to maintain a common identity in different locations. They are weakly structured in common use and become strongly structured in use in individual locations. They can be abstract or concrete. They have different meanings in different social worlds, but their structure is common enough to more than one world to make them recognizable, a means of translation.” ( 59 ).

Before exploring the benefits of a boundary object perspective for the DSM, it is important to note that it remains questionable whether the DSM in its current form can help establish a shared understanding or provide diagnostic, prognostic, or therapeutic value ( 60 – 63 ). To make the DSM more suitable for accommodating different perspectives and types of knowledge, the DSM task force can focus its redesign on leaving the discrete disease entities — which classifications imply — behind by creating spectra. This way of thinking has already found its way into the DSM-5, in which mental distress as a spectrum was introduced in the areas of autism, substance use, and, very nearly, personality disorders; following these reconceptualizations, a psychosis spectrum was also proposed ( 43 ), but this proposition was eventually not adopted in the manual. As mental distress can be caused by an extensive range of factors and mechanisms that result from interactions in networks of behaviors and patterns that have complex dynamics that unfold over time ( 64 ), spectra of mental distress may be more suitable for conversations about an individual’s narrative and needs in clinical practice, as each experience of mental distress is unique and contextual.

If the DSM is reconsidered as a boundary object that is intended to provide a shared language for interpreting mental distress while addressing the unintended consequences of classifications, it is also essential to consider where this language now primarily manifests itself, how it relates to shared decision-making, and the significant role it plays for patients in the treatment process. In recent decades, the DSM has positioned itself primarily as a professional tool for clinical judgment (see Figure 1 ). In this way, professionals have more or less acquired a monopoly on the language of classifications and the associated behaviors and complaints described in the DSM. It provides professionals with a tool to pursue their professional objectives and lends legitimacy to their professional steps with patients, resulting in a lack of the equality needed to examine different perspectives side by side. However, with shared decision-making, patients are expected to be engaged and to help determine the course of treatment; the language surrounding classifications and symptoms does not currently allow this to happen sufficiently.


Figure 1 DSM as a professional tool, adapted from Figure 1, ‘Design of a Digital Comic Creator (It’s Me) to Facilitate Social Skills Training for Children With Autism Spectrum Disorder: Design Research Approach’, by Terlouw et al., CC-BY ( 65 ).

This is where boundary objects come into play. The focused shaping of boundary objects can ensure a more equal role for different stakeholders ( 65 – 67 ). Boundary objects can also trigger perspective-making and perspective-taking through a reflective dialogical learning mechanism ( 68 – 70 ), which ensures a better shared understanding of all perspectives. Boundary objects and their dialogical learning mechanisms also align well with co-design ( 71 ). If we consider the DSM a boundary object, it positions itself between the activity systems of professionals, patients, and other people close to the patient ( Figure 2 ). The boundary between activity systems represents not only the cultural differences and potential challenges in actions and interactions but also the significant value in establishing communication and collaboration ( 71 ). All sides can give meaning to the DSM language from their perspective. By effectively considering the DSM as a boundary object, the DSM serves as a conversation piece: a product that elicits and provides room for questions and comments from other people, literally one that encourages conversation ( 72 ). As a conversation piece rather than a determinative classification system, it can contribute to mapping the meaning of complaints, behaviors, signs, and patterns for different invested parties. It also provides space for the patient’s contextual factors, subjective experience, needs, and life events, which are essential to giving constructive meaning to mental distress. This allows for interpretative flexibility; professionals can structure their work, while patients can give meaning to their subjective experience of mental distress.


Figure 2 DSM as a boundary object, adapted from Figure 1, ‘Design of a Digital Comic Creator (It’s Me) to Facilitate Social Skills Training for Children With Autism Spectrum Disorder: Design Research Approach’, by Terlouw et al., CC-BY ( 65 ).

As the DSM as a boundary object enables interpretative flexibility, it could then be used to enact conversations and develop a shared understanding in partnership between the patient and the professional; patients are no longer ‘diagnosed’ with a disorder from a professional point of view. It is important to note that the conceptual history of understanding the diagnostic process as essentially dialogical, rather than as a merely technical-quantitative procedure, began in the early 1900s. For example, in ‘General Psychopathology,’ released in 1913, Karl Jaspers presented a phenomenological and comprehensive perspective for psychiatry, with suggestions on how to understand psychopathological phenomena as experienced by the patient through empathic understanding, allowing the clinician to grasp the patient’s worldview and existential meanings ( 73 ). A century after its first publication, academics continue to leverage Jaspers’ ideas to critique modern operationalist epistemology ( 74 ). Following the notion of the diagnostic process as a dialogical one, the reconsideration of the DSM as a boundary object could accommodate the patient’s idiographic experience and the professional’s knowledge about mental distress by using these potential spectra as conversation pieces, shifting the power balance in clinical practice towards co-creation and dialogue. The spectra can then be explained as umbrella terms that indicate a collection of frequently occurring patterns and signs and that can function as a starting point for a co-creative inquiry that promotes dialogue, aligning more with current empirical evidence of lived experience than using classifications as diagnoses.

Considering the advantages and strengths boundary objects bring to a mental health care system centered around shared decision-making and co-creation, the DSM could be a boundary object that is interpreted from various perspectives. Take, for example, altered perceptions, a characteristic commonly seen in people who receive a psychosis-related classification in clinical care. For some, these perceptions have person-specific meaning ( 75 , 76 ). By using the DSM as a boundary object and as a conversation piece, the patient and professional can give meaning by using the spectra in the manual as a starting point for a common language instead of using a classification to explain the distress. This requires a phenomenological and idiographic approach that considers person-specific meaning and idiosyncrasies. Consequently, diagnostic practices should be iterative to align with dynamic circumstances, with the individual’s narrative taking center stage in co-creation between professional and patient ( 41 , 49 ), as this reconsidered role fosters the engagement instead of the disengagement of patients. Additionally, the potential role of the DSM as a boundary object and conversation piece may also have a positive effect on societal and scientific levels, specifically on how mental distress is perceived and conceptualized. It can ‘systemically contextualize’ mental distress, which could counteract disorderism and the psychiatrization of society and, in the end, hopefully contribute to population salutogenesis.

3 Co-design of DSM spectra of mental distress

If the DSM is reconsidered as a conversation piece in which spectra of mental distress replace classifications, these spectra must be co-designed to accommodate diverse stakeholder perspectives and various types of knowledge side by side in clinical practice. Therefore, developers and designers need to embrace lived experience in the co-development of these spectra of mental distress to ensure patients’ engagement in clinical practice, as the patient effectively becomes a stakeholder of the DSM. This requires a different approach and procedure than DSM task forces used in past iterations. In this section, we will address our second hypothesis: embracing design approaches in redesigning the DSM into a conversation piece that uses spectra of mental distress instead of classifications will stimulate the integration of diverse perspectives and voices in reshaping mental health care. While we touch on the what (spectra of mental distress), we mainly focus on the how (the procedure that could be followed to arrive at it). First, we will discuss the importance of lived experience leadership in design and research. Second, we argue that in the conceptual co-design of DSM spectra, lived experience leadership can be a way forward. Third, we take the stance that a designerly way of thinking and doing can shift the premature over-commitment of past task forces towards iterative exploration. In the concluding paragraph, we propose a design procedure that embraces engagement and iteration as core values for developing robust and flexible spectra of mental distress that are meaningful for service users and professionals.

3.1 Lived experience leadership and initiatives in design and research

First, let us briefly examine the evolution of lived experience in design and science over time to provide context for why engaging people with lived experience in the design of spectra of mental distress is important for innovation. Since the 1960s, people with lived experience have tried to let their voices be heard, initially to no avail, and their civil rights movement for psychiatric reform was labeled ‘anti-psychiatry’ ( 77 ). Around the turn of the millennium, lived experience received increased recognition and eventually became an important pillar of knowledge that informed practice and continues to do so at various levels of mental health care ( 34 , 36 , 78 – 81 ). While there is currently growing attention to the perspective of lived experience in, for example, mental health research ( 79 , 80 , 82 , 83 ) and mental health care design and innovation ( 84 – 90 ), overall, involvement remains too low in the majority of research and design projects ( 88 , 91 , 92 ). While there has been a significant increase in the annual publication of articles claiming to employ collaborative methods with people with lived experience, these studies often use vague terms to suggest a higher engagement level than is actually the case ( 93 ). This has led to initiatives such as that of The Lancet Psychiatry to facilitate transparent reporting of lived experience work ( 93 , 94 ).

Although the involvement of people with lived experience, and its reporting, needs attention in order to prevent tokenism and co-optation ( 89 ), some excellent user-driven initiatives have resulted in innovative design and research that improved mental health care and exemplify why their engagement should be mandatory. The Co-Design Living Labs program is one such initiative: an adaptive and embedded approach in which people with lived experience of mental distress drive mental health research from design to translation ( 95 ). In this community-based approach, people with lived experience, their caregivers, family members, and support networks collaboratively drive research with university researchers, which is highly innovative considering the generally low engagement of people with lived experience in mental health research. Another example is the development of person-specific tapering medication, initiated by people with lived experience of withdrawal symptoms. Because of the lack of a systematic and professional response to severe and persistent withdrawal, people with lived experience began to devise practical methods to safely discontinue medications on their own. This resulted in the accumulation of experience-based knowledge about withdrawal, ultimately leading to the co-creation of what is now known as tapering strips ( 81 ). The development of these tapering strips shows that people with lived experience have novel experience-based ideas for design and research that can result in human-centered innovation. Both examples underline the importance of human-centered design in which people with lived experience and knowledge are taken seriously, and why the participation era requires that individuals with lived experience are decision-makers from the project’s start in order to produce novel perspectives for innovative design and research ( 88 , 93 ).

3.2 The conceptual co-design of DSM spectra of mental distress and the potential of integrating lived experiences

Engaging people with lived experience of mental distress in redesigning the DSM towards a spectrum-based guideline is of special importance, albeit a more conceptual design task compared to the earlier examples. What mental distress is remains a fundamental philosophical and ontological question that should be addressed in partnership, as it sits at the core of how mental health care is organized. To allow novel ontologies to reach their full potential and act as drivers of a landscape of promising innovative scientific and clinical approaches, investment is required in development and elaboration ( 30 ). This, together with the epistemic pluralism among theoretical models of mental health problems ( 31 ), makes it evident that there is currently no single coherent, accepted explanation of or consensus on what mental distress is and how it exists. Without a clear etiological understanding, the most logical first step should be to involve people with lived experience of mental distress in the redevelopment of the DSM. Accounts from people with lived experience of mental distress are directly relevant to the design of the DSM, as they provide a more comprehensive and accurate understanding of mental distress and its treatment ( 96 ). Moreover, the DSM’s conceptualization as a major determinative classification system could stand at the core of psychiatry’s “identity crisis”, in which checklists of symptoms replaced thoughtful diagnoses while, despite decades of brain research, no biomarker has been established for any disorder defined in the DSM ( 10 , 97 ).

Design approaches can help DSM task forces prioritize integrating lived experiences to co-create a framework that can accommodate a range of perspectives, making it viable as a conversation piece. As DSM classifications do not reflect reality ( 98 ), listening to people with firsthand experience is necessary. The CHIME framework, a conceptual framework of people’s experiences of recovery, shows, for example, a clear need to diagnose not solely on the basis of symptoms but also in light of people’s stages in their journey of personal recovery ( 80 ). Further, bottom-up research shows that the lived experience perspective of psychosis can look very different from conventional psychiatric conceptualizations ( 82 ). This is also the case for the lived experience of depression ( 99 ). Design approaches can ensure that such much-needed perspectives and voices are incorporated in developing meaningful innovations ( 88 ), which brings us back to the design of the DSM. Although the DSM aims to conceptualize the reality of mental distress, engaging people with experiences of living with mental distress has never been prioritized by the DSM task force as an important epistemic resource. This is evidenced by the historically low engagement of people with lived experience and their contexts. For example, although “individuals with mental disorders and families of individuals with mental disorders” participated in providing feedback in the DSM-5 revision process ( 14 ), when and how they were involved, what feedback they gave, and how this was incorporated are not described. According to the Involvement Matrix ( 100 ), a matrix that can be used to assess the contribution of patients in research and design, giving feedback can be classified under the ‘listener’ or ‘co-thinker’ roles, both of which are low-involvement roles. Moreover, a review of the members of the DSM task forces and working groups listed in the introductions of the DSMs shows that patients have never been part of the DSM task force and thus have never been part of the decision-making process ( 96 ). Human-centered design is difficult to achieve when people with lived experience are not involved from preparation to implementation but are only asked to give feedback on expert consensus ( 88 ).

In the participation era, using a design approach in mental health care without engaging important stakeholders can be problematic. For example, it is evident that the involvement of people with lived experience changes the nature of an intervention dramatically, as people’s unique first-hand experiences, insights about mental states, and individual meanings and needs that surface in design activities often differ from what general scientific and web-based resources suggest ( 101 , 102 ). Further, clear differences are reported between what designers, researchers, and clinicians on the one hand, and service users on the other, consider meaningful interventions ( 102 , 103 ). Thus, the meaningful engagement of people with lived experience in design processes consistently exposes gaps between general research and the interests and lives of service users ( 104 ). This makes the participation of people with lived experience in developing innovative concepts, and thus in the conceptual design of DSM spectra of mental distress, essential, because their absence in design processes may lead to ineffective outcomes ( 102 ). This design perspective may explain some of the negative effects of the DSM. The classifications were intended as empirical constructs reflecting reality, yet phenomena such as reification and the classificatory looping effect emerged ( 42 , 51 ). From a design perspective, the emergence of these effects may have a simpler explanation than previously presumed: premature over-commitment in the DSM’s design processes without input from individuals with firsthand experience.

3.3 Shifting the premature over-commitment to iterative exploration

The centrist development approach used to design the DSM implicitly frames people with mental distress as ‘ordinary people,’ resulting in ‘average solutions’ because their experiences are decontextualized and lumped together at a group level, eventually leading to general descriptions intended for universal application. Instead, a more human-centered iterative design process in which people with lived experience play an important role, preferably as decision-makers, can promote the design of spectra of mental distress that leave room for idiosyncrasies corresponding with people’s living environments at the individual level. This can potentially ensure that the spectra are actually helpful for shared decision-making between patients and professionals and resonate in person-centered mental health care. A design approach is well suited to this aim because design processes do not search for a singular ‘truth’ but rather explore the multiple ‘truths’ that may be relevant in different contexts ( 105 ). This can add value to conceptualizing spectra of mental distress, which are known to involve characteristics that overlap between people but also a unique phenomenology and contextual foundation for each individual; in the case of mental distress, there literally are multiple truths, depending on whom and what you ask, and in what time and place. Furthermore, design approaches enable exploration and discovery ( 106 ). Designers consistently draw cues from the environment and introduce new variables into that same environment to eventually discover what does and does not work ( 107 ). This explorative attitude also ensures the discovery of unique insights, such as people’s experiential knowledge and contexts. Therefore, from a design perspective, predetermining solutions might be ineffective for arriving at DSM innovation. This is, for example, aptly described by Owens et al. ( 101 ):

“… the iterative nature of the participatory process meant that, although a preliminary programme for the whole workshop series was drawn up at the outset, plans had to be revised in response to the findings from each session. The whole process required flexibility, a constantly open mind and a willingness to embrace the unexpected”.

These insights illustrate the core of design that can guide the development of future DSM iterations: design enables the task force to learn about mental health problems without an omniscient perspective by iteratively developing and testing conceptualizations in the environment in partnership with the target group. As participatory design studies consistently demonstrate, solutions cannot be predetermined solely on the basis of research and resources. The involvement of individuals with lived experience and their contexts invariably uncovers crucial serendipitous insights that challenge prevailing perspectives on the problem. This can expose important misconceptions, such as the tendency to underestimate the complexity of human experience and to decontextualize it from its environment.

3.4 Insights that could inform a procedure for co-designing spectra of mental distress

People with lived experience need to be highly involved in developing meaningful spectra of mental distress to guide conversations in clinical practice. Now that we have a comprehensive understanding of what design approaches can offer the development procedure of a lived experience-informed DSM, we highlight these insights in this section.

3.4.1 Balance academic research with lived experience insights

In the design procedure of a future DSM, academic research can be used to learn about people’s experiences of mental distress, but never as the sole source for the development of spectra of mental distress. Designers and researchers in mental health care therefore need to place people with lived experience at the heart of design processes as partners and arrive at unique insights together, without an omniscient perspective. The aim should not be to design general descriptions but to design spectra that are flexible enough to adapt to the local needs and constraints of the various parties using them, yet robust enough to maintain a common identity across different locations. This allows the DSM to have different meanings in different social worlds, while at the same time its structure remains common enough for more than one world to recognize it.

3.4.2 Prevent premature overcommitment in the design process

Conceptualizations of spectra of mental distress must not be predetermined, and there should be no over-commitment to concepts in the early phases of the project. Thus, the task force should avoid viewing mental distress too narrowly too early in the process. This enables the evolution of lived experience-based spectra through an iterative process of designing and testing. The starting point should be an open representation of mental distress, from which the task force and people with lived experience together discover how it could best be conceptualized and what language should be used. This allows room for exploring and discovering what works and what aligns with patients’ needs and experiences in their living environments and with professionals’ needs in their work environments.

3.4.3 Designing and testing is also a form of research

Researchers and designers should realize that designing and testing conceptualizations in partnership with people with lived experience also yields unique knowledge that can guide development: designing and testing the developed concepts is itself a form of research. For example, exploring whether a certain designed spectrum resonates as a conversation piece between patients and professionals in clinical practice provides qualitative insights that cannot be predicted beforehand. In this way, science and design can complement each other in the innovation of the DSM: science benefits from a design approach, while design benefits from scientific methods ( 108 ). Flexible navigation between design and science helps ensure that the developed DSM is meaningful as a conversation piece in clinical practice.

3.4.4 Good design comes before effective science

Good design comes before effective science, as innovations are useless if not used, even if they are validated by science ( 85 ). Although the development of the DSM is often described as a scientific process, our analysis indicates that it is more accurately described as a design process. As a design process, it requires a methodologically sound design approach that is suitable for involving patients and people with lived experience. Co-design is a strong candidate for this purpose, as a systematic review showed that this approach had the highest level of participant involvement in mental health care innovation ( 89 ). Although people with lived experience have never before been involved as decision-makers, this should be the aim of the design process for a novel DSM in the participation era. This promotes lived experience leadership in design and, ultimately, contributes to more effective science.

3.4.5 Avoid tokenism and co-optation

When involving people with lived experience as decision-makers in redesigning the DSM, tokenism and co-optation must be avoided and power imbalances addressed. The first step the task force can take is to use the Involvement Matrix ( 100 ) together with people with lived experience to systematically and transparently plan, reflect on, and report everyone’s contribution to the design process. This has not been prioritized in past DSM revisions. In the end, transparency and honesty about collaboration can support the empowerment of people with firsthand perspectives and shift the power imbalance towards co-creation for more human-centered mental health care. This is needed, as the involvement of people with lived experience in design and research processes is currently too low and obscured by vague terminology and poor reporting.

4 Discussion and conclusion

In this hypothesis and theory paper, we have argued that the current role of the DSM, as an operating manual for professionals, can be reconsidered as a boundary object and conversation piece for patients and professionals in clinical practice that stimulates dialogue about mental distress. In this discussion, we will address five themes. First, while we argued that research acknowledges the absence of empirical support for biological causation, we believe characterizing the DSM as entirely non-empirical may be incorrect. Second, we discuss our perspective on balancing between a too-narrow medical perspective and a too-broadly individualized perspective. Third, we discuss why mental health care also needs novel methods of inquiry if the DSM is reconsidered as a conversation piece. Fourth, we discuss that while we are confident that design approaches can be fruitful for redesigning the DSM, some challenges regarding tokenism and co-optation must be addressed. We conclude by examining various methodological challenges and offering recommendations for the co-design process of the DSM.

4.1 Redesigning instead of discarding the DSM

The DSM is too deeply entrenched in mental health care to simply discard. The DSM is embedded not only in mental health care but also in society. For instance, a DSM classification is necessary in the Netherlands to obtain reimbursement for mental health care, qualify for additional test time in education, or receive subsidized assisted living. Moreover, it is ingrained in research and healthcare funding, making it unproductive and somewhat dangerous to discard without an alternative, as doing so may jeopardize access to care and affect insurance coverage for the treatment and services that people with mental distress need. Therefore, we posited that instead of discarding the DSM, its role should be reconsidered in a mental health care system centered around shared decision-making and co-creation to eliminate pervasive effects such as the disengagement of patients, reification, disorderism, and the psychiatrization of society. However, the DSM categories are not entirely a priori constructions, as is sometimes claimed; the psychiatric symptom space and diagnostic categories took shape in the late nineteenth century through decades of observation ( 109 ).

While this adds important nuance to the idea that the design of the DSM is entirely non-empirical, it does not invalidate the argument that the DSM design is grounded in a potentially false ontology ( 64 ). Though absence of evidence is not evidence of absence, and the biological context plays some role, research shows that various other dimensions of life (including the social, historical, relational, and environmental) also influence mental distress, yet these are significantly underemphasized in its current design. We believe we have shown that this manifests itself most prominently in the various highly arbitrary classification designs, which can confuse both professional and patient and appear limited in providing meaningful guidance for clinical practice, design, and research. That is why we have proposed redesigning the next iteration of the DSM to focus primarily on formulating a set of spectra of distress. Reconsidering the DSM leverages one of its biggest strengths: the DSM is not bound by an analytic procedure but rather is guided by scientific debate ( 17 ). Further, developments and amendments to psychiatric classification systems have always reflected wider social and cultural developments ( 110 ). The recognition, implementation, and impact of the DSM in Western countries can even be seen as a reason not to focus on developing alternative models but rather to redesign the DSM so that it conceptually aligns with the social developments, scientific findings, and needs of people in the 21st century, as it is already deeply embedded in systems. Given that DSM classifications are now recognized as inaccurate depictions of the reality of mental distress ( 98 ) and that, at the same time, mental health care is shifting towards person-centeredness and shared decision-making, we believe the proposals in this article are not radical but rather the most meaningful way forward to accommodate diverse perspectives.

4.2 Balancing between a too-narrow medical categorization and a too-broadly individualized approach

From a classical psychopathological perspective, integrating the lived experiences of those with mental distress into the redevelopment of the DSM as a boundary object presents certain conceptual challenges. For example, uncritically overemphasizing individual experiences might lead to an underappreciation of psychopathological manifestations such as altered perceptions. Conversely, excluding people with lived experience from the DSM’s design processes has resulted in its own conceptual and epistemic issues, such as undervaluing the idiographic, contextual, and phenomenological aspects of individual mental distress. Therefore, we argue that the balance between these differing but crucial perspectives should emerge from a co-design procedure for a revised DSM. Determining this balance before obtaining results from such a process would be premature and arbitrary and would contradict our recommendation to prevent over-commitment in the early stages of the design process. As people with lived experience were never previously involved, it is impossible to predict the outcomes of a co-design procedure or to hypothesize beforehand about a clear distinction between these perspectives in the DSM’s conceptual development. As seen in past iterations, prematurely drawing rigid lines could hinder the design process and result in design fixation. From the perspective of boundary objects, the DSM cannot have one dominant perspective if it is to function effectively. All stakeholders must be able to give meaning to the spectra of mental distress from their own activity systems, and these perspectives should be equal in order to create a shared awareness of the different perspectives involved. A DSM designed as a boundary object triggers dialogical learning mechanisms, ensuring that the multiple perspectives are harmonized rather than adjusted to fit one another, so that no single perspective prevails over the others and no forced consensus is pursued ( 71 , 111 ).

4.3 Novel methods for inquiry to accompany the reconsidered role of the DSM

If the DSM is reconsidered and designed as a conversation piece and classifications are replaced by spectra, a unique language needs to be co-developed between the patient and the professional in clinical practice, and an equal relationship is important for this alliance. For example, the person-specific meanings of altered perceptions need to be explored, as they have clinical relevance. However, for such purposes, current diagnostic methods in clinical practice are limiting because they are highly linguistic and tailored to classification systems and to the needs and praxis of professionals. This can impede the DSM’s effectiveness as a tool for dialogue. Expressing the uniqueness of an experience of mental distress is difficult, especially during a mental crisis, let alone communicating it effectively to a professional. While people with mental distress can effectively communicate their behaviors and complaints, which fits the current use of the DSM, people have far more embodied and experiential knowledge of their distress. How people cope with their mental distress in the contexts they live in is very difficult to put into words without first making these personal and contextual insights tangible ( 41 ), yet this is essential information when the DSM is used as a boundary object and conversation piece. To accommodate the patient in making this knowledge tangible, the professional becomes more of a facilitator than an expert, emphasizing therapeutic relationships and the healing effects of ritualized care interactions ( 39 ). This transformation requires novel co-creative methods of inquiry ( 41 ) and professional training ( 39 ). Therefore, expanding the diagnostic toolkit with innovative and creative tools, and embracing professionals such as art therapists, social workers, and advanced nurse practitioners to enable and support patients in conveying their narratives and needs in their own way, is essential if the DSM is to be used as a boundary object and conversation piece.

4.4 Promoting lived experience leadership in the co-design procedure

Despite longstanding calls for the APA to include people with lived experience in the decision-making processes for diagnostic criteria, the DSM-5 task force did not accept this inclusion. The task force believed incorporating these perspectives could compromise objectivity in the scientific process ( 96 ). This mindset ensures that research, design, and practice remain predominantly shaped by academics and professionals, causing conventional mental health care to perpetuate itself: it continues to repeat the same approaches and consequently achieves the same results. Therefore, people with lived experience should have more influence in the participation era to accelerate change in mental health care. This proposition comes with some challenges regarding power imbalances that need addressing. While it is acknowledged that individuals with lived experience yield unique insights and can serve as strong collaborators and knowledgeable contributors, they are never given decision-making authority in design processes in mental health care ( 88 , 89 , 92 ) or in the DSM’s development processes ( 96 ). This lack of authority impedes lived experience leadership ( 91 , 112 ) and subsequently stands in the way of effectively reconsidering and redesigning the DSM. To avoid tokenism, the DSM revision process should not settle for low engagement and involvement but set the bar higher by redressing power imbalances ( 113 ). Furthermore, in the co-design process of the DSM, the task force should not view objectivity as the opposite of subjectivity or strive for consensus. Instead, they should value group discussions and disagreements, encouraging stakeholders to debate and explore the sources of their differing perspectives and knowledge ( 96 ).
Shifting towards lived experience leadership starts with perceiving and engaging people with lived experiences of mental distress as experts of their experiences in iterative design and research processes and giving them this role in revising the DSM.

4.5 Methodological considerations for a co-design procedure of the DSM

Merely positioning people with lived experience as partners and decision-makers is insufficient; there are also significant methodological concerns regarding the execution of design research in mental health care. Although iteration and participation are essential for design in mental health care, as designers focus on the unmet needs of service users and ways to improve care ( 114 ), research shows that design is not always executed iteratively and that end users are not always involved. For example, about one-third of projects that designed mental health interventions did not adopt an iterative process ( 85 ). The engagement of end users in design processes in mental health is also not yet common practice. For instance, a systematic review of serious games for anxiety and depression found that only half of these games, even while reporting a participatory approach, were designed with input from the intended end users ( 115 ). A systematic review of design processes aimed at innovations for people with psychotic symptoms corroborates these findings, as less than half of the studies demonstrated a high level of participant involvement in their design processes ( 89 ).

The low level of involvement and lack of iterative approaches in mental health care design offer valuable insights for future processes. If the DSM task force aims to adopt a co-design approach, it should incorporate these lessons to enhance design effectiveness. First, the task force must understand that design has a different aim, culture, and methods than the sciences ( 116 ). The scientific approach typically investigates the natural world through controlled experiments, classifications, and analysis, emphasizing objectivity, rationality, neutrality, and a commitment to truth. In contrast, a design approach studies the artificial world, employing methods such as modeling, pattern formation, and synthesis, guided by core values of practicality, ingenuity, empathy, and concern for appropriateness. Second, the task force should consider the known challenges it will encounter and need to navigate so that the two paradigms can be complementary in practice ( 117 ). Further, the task force should consider that design is by nature an exploratory, iterative, uncertain, and social form of inquiry and synthesis that is never perfect and never quite finished ( 84 ). This requires tolerating ambiguity and having trust ( 101 ). Lastly, more transparency in the participatory work of the task force is called for, beginning with being honest, being detailed, addressing power imbalances, being participatory in reporting the participatory approach, and being excited and enthusiastic about going beyond tokenistic engagement ( 118 ).

Despite these challenges, transforming psychiatric diagnoses by reconsidering and redesigning the DSM as a boundary object and conversation piece could be a step in the right direction. This would shift the power balance towards shared ownership in a participation era that fosters dialogue instead of diagnosis. We hope this hypothesis and theory paper can give decisive impetus to the much-needed debate on, and development of, psychiatric diagnoses and, in the end, contribute to a lived experience-informed psychiatric epistemology. Furthermore, as the product of an equal co-production process between various disciplines and types of knowledge, this paper shows it is possible to harmonize perspectives on a controversial topic such as the DSM.

Data availability statement

The original contributions presented in the study are included in the article/supplementary material. Further inquiries can be directed to the corresponding author/s.

Author contributions

LV: Conceptualization, Methodology, Project administration, Writing – original draft, Writing – review & editing. GT: Conceptualization, Methodology, Visualization, Writing – original draft, Writing – review & editing. JVO: Conceptualization, Writing – original draft, Writing – review & editing. SM: Conceptualization, Writing – original draft, Writing – review & editing. JV: Writing – original draft, Writing – review & editing. NB: Writing – original draft, Writing – review & editing.

Funding

The author(s) declare that financial support was received for the research, authorship, and/or publication of this article. We appreciate the financial support of the FAITH Research Consortium, GGZ-VS University of Applied Science, as well as the NHL Stenden University of Applied Sciences PhD program. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Acknowledgments

We thank the reviewers for their thorough reading of our manuscript and valuable comments, which improved the quality of our hypothesis and theory paper.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

1. Fischer BA. A review of American psychiatry through its diagnoses. J Nervous Ment Dis . (2012) 200:1022–30. doi: 10.1097/NMD.0b013e318275cf19

2. Grob GN. Origins of DSM-I: a study in appearance and reality. Am J Psychiatry . (1991) 148:421–31. doi: 10.1176/ajp.148.4.421

3. Brendel DH. Healing psychiatry: A pragmatic approach to bridging the science/humanism divide. Harvard Rev Psychiatry . (2004) 12:150–7. doi: 10.1080/10673220490472409

4. Braslow JT. Therapeutics and the history of psychiatry. Bull History Med . (2000) 74:794–802. doi: 10.1353/bhm.2000.0161

5. Kawa S, Giordano J. A brief historicity of the Diagnostic and Statistical Manual of Mental Disorders: Issues and implications for the future of psychiatric canon and practice. Philosophy Ethics Humanities Med . (2012) 7:2. doi: 10.1186/1747-5341-7-2

6. Mayes R, Horwitz AV. DSM-III and the revolution in the classification of mental illness. J History Behav Sci . (2005) 41:249–67. doi: 10.1002/(ISSN)1520-6696

7. American Psychiatric Association. Diagnostic & Statistical manual of Mental Disorders . 3rd edition. Washington, D.C: American Psychiatric Association (1980).

8. Gambardella A. Science and innovation: the US pharmaceutical industry during the 1980s . New York: Cambridge University Press (1995). doi: 10.1017/CBO9780511522031

9. Klerman GL, Vaillant GE, Spitzer RL, Michels R. A debate on DSM-III. Am J Psychiatry . (1984) 141:539–53. doi: 10.1176/ajp.141.4.539

10. Scull A. American psychiatry in the new millennium: a critical appraisal. psychol Med . (2021) 51:1–9. doi: 10.1017/S0033291721001975

11. American Psychiatric Association. Diagnostic and statistical manual of mental disorders . 4th edition. Washington, D.C: American Psychiatric Association (1994).

12. Shaffer D. A participant’s observations: preparing DSM-IV. Can J Psychiatry . (1996) 41:325–9. doi: 10.1177/070674379604100602

13. Follette WC, Houts AC. Models of scientific progress and the role of theory in taxonomy development: A case study of the DSM. J Consulting Clin Psychol . (1996) 64:1120–32. doi: 10.1037//0022-006X.64.6.1120

14. American Psychiatric Association. Diagnostic and statistical manual of mental disorders: DSM-5-TR . 5th ed. Washington, Dc: American Psychiatric Association Publishing (2022). doi: 10.1176/appi.books.9780890425787

15. Pickersgill MD. Debating DSM-5: diagnosis and the sociology of critique. J Med Ethics . (2013) 40:521–5. doi: 10.1136/medethics-2013-101762

16. Hyman SE. The diagnosis of mental disorders: the problem of reification. Annu Rev Clin Psychol . (2010) 6:155–79. doi: 10.1146/annurev.clinpsy.3.022806.091532

17. Zachar P, Kendler KS. Psychiatric disorders: A conceptual taxonomy. Am J Psychiatry . (2007) 164:557–65. doi: 10.1176/ajp.2007.164.4.557

18. Francis A, First MB, Pincus HA. DSM-IV Guidebook . Washington DC: American Psychiatric Association (1995).

19. Spitzer RL, Williams JBW, Endicott J. Standards for DSM-5 reliability. Am J Psychiatry . (2012) 169:537–7. doi: 10.1176/appi.ajp.2012.12010083

20. Gómez-Carrillo A, Kirmayer LJ, Aggarwal NK, Bhui KS, Fung KP-L, Kohrt BA, et al. Integrating neuroscience in psychiatry: a cultural–ecosocial systemic approach. Lancet Psychiatry . (2023) 10:296–304. doi: 10.1016/s2215-0366(23)00006-8

21. Morgan C, Charalambides M, Hutchinson G, Murray RM. Migration, ethnicity, and psychosis: toward a sociodevelopmental model. Schizophr Bull . (2010) 36:655–64. doi: 10.1093/schbul/sbq051

22. Howes OD, Murray RM. Schizophrenia: an integrated sociodevelopmental-cognitive model. Lancet . (2014) 383:1677–87. doi: 10.1016/S0140-6736(13)62036-X

23. Alegría M, NeMoyer A, Falgàs Bagué I, Wang Y, Alvarez K. Social determinants of mental health: where we are and where we need to go. Curr Psychiatry Rep . (2018) 20. doi: 10.1007/s11920-018-0969-9

24. Jeste DV, Pender VB. Social determinants of mental health. JAMA Psychiatry . (2022) 79:283–4. doi: 10.1001/jamapsychiatry.2021.4385

25. Huggard L, Murphy R, O’Connor C, Nearchou F. The social determinants of mental illness: A rapid review of systematic reviews. Issues Ment Health Nurs . (2023) 44:1–11. doi: 10.1080/01612840.2023.2186124

26. Kirkbride JB, Anglin DM, Colman I, Dykxhoorn J, Jones PB, Patalay P, et al. The social determinants of mental health and disorder: evidence, prevention and recommendations. World psychiatry: Off J World Psychiatr Assoc (WPA) . (2024) 23:58–90. doi: 10.1002/wps.21160

27. Alon N, Macrynikola N, Jester DJ, Keshavan M, Reynolds ICF, Saxena S, et al. Social determinants of mental health in major depressive disorder: umbrella review of 26 meta-analyses and systematic reviews. Psychiatry Res . (2024) 335:115854. doi: 10.1016/j.psychres.2024.115854

28. Read J, Bentall RP. Negative childhood experiences and mental health: theoretical, clinical and primary prevention implications. Br J Psychiatry . (2012) 200:89–91. doi: 10.1192/bjp.bp.111.096727

29. van Os J, Kenis G, Rutten BPF. The environment and schizophrenia. Nature . (2010) 468:203–12. doi: 10.1038/nature09563

30. Köhne A, de Graauw LP, Leenhouts-van der Maas R, van Os J. Clinician and patient perspectives on the ontology of mental disorder: a qualitative study. Front Psychiatry . (2023) 14:1081925. doi: 10.3389/fpsyt.2023.1081925

31. Richter D, Dixon J. Models of mental health problems: a quasi-systematic review of theoretical approaches. J Ment Health . (2022) 32:1–11. doi: 10.1080/09638237.2021.2022638

32. World Health Organization. Guidance on community mental health services: Promoting person-centred and rights-based approaches (2021). Available online at: https://www.who.int/publications/i/item/9789240025707 .

33. Huber M, Knottnerus JA, Green L, Horst HVD, Jadad AR, Kromhout D, et al. How should we define health? BMJ . (2011) 343:d4163–3. doi: 10.1136/bmj.d4163

34. Davidson L. The recovery movement: implications for mental health care and enabling people to participate fully in life. Health Affairs . (2016) 35:1091–7. doi: 10.1377/hlthaff.2016.0153

35. Dixon LB, Holoshitz Y, Nossel I. Treatment engagement of individuals experiencing mental illness: review and update. World Psychiatry . (2016) 15:13–20. doi: 10.1002/wps.20306

36. Karbouniaris S. Let’s tango! Integrating professionals’ lived experience in the tranformation of mental health services. (2023). Available online at: https://hdl.handle.net/1887/3640655 .

37. Elwyn G, Laitner S, Coulter A, Walker E, Watson P, Thomson R. Implementing shared decision making in the NHS. BMJ . (2010) 341:c5146–6. doi: 10.1136/bmj.c5146

38. Oueslati R, Woudstra AJ, Alkirawan R, Reis R, Zaalen Yv, Slager MT, et al. What value structure underlies shared decision making? A qualitative synthesis of models of shared decision making. Patient Educ Couns . (2024) 124:108284. doi: 10.1016/j.pec.2024.108284

39. van Os J, Guloksuz S, Vijn TW, Hafkenscheid A, Delespaul P. The evidence-based group-level symptom-reduction model as the organizing principle for mental health care: time for change? World Psychiatry . (2019) 18:88–96. doi: 10.1002/wps.20609

40. Meerman S. ADHD and the power of generalization: exploring the faces of reification. researchrugnl . (2019). doi: 10.33612/diss.84379221

41. Veldmeijer L, Terlouw G, van ‘t VJ, van Os J, Boonstra N. Design for mental health: can design promote human-centred diagnostics? Design Health . (2023) 7:1–19. doi: 10.1080/24735132.2023.2171223

42. te Meerman S, Freedman JE, Batstra L. ADHD and reification: Four ways a psychiatric construct is portrayed as a disease. Front Psychiatry . (2022) 13:1055328. doi: 10.3389/fpsyt.2022.1055328

43. Guloksuz S, van Os J. The slow death of the concept of schizophrenia and the painful birth of the psychosis spectrum. psychol Med . (2017) 48:229–44. doi: 10.1017/S0033291717001775

44. van Os J, Scheepers F, Milo M, Ockeloen G, Guloksuz S, Delespaul P. “It has to be better, otherwise we will get stuck.” A review of novel directions for mental health reform and introducing pilot work in the Netherlands. Clin Pract Epidemiol Ment Health . (2023) 19. doi: 10.2174/0117450179271206231114064736

45. Bowker GC. Susan Leigh Star. Sorting Things Out: Classification and Its Consequences (Inside technology) . Cambridge, MA: Mit Press (1999). doi: 10.7551/mitpress/6352.001.0001

46. Perkins A, Ridler J, Browes D, Peryer G, Notley C, Hackmann C. Experiencing mental health diagnosis: a systematic review of service user, clinician, and carer perspectives across clinical settings. Lancet Psychiatry . (2018) 5:747–64. doi: 10.1016/S2215-0366(18)30095-6

47. Ben-Zeev D, Young MA, Corrigan PW. DSM-V and the stigma of mental illness. J Ment Health . (2010) 19:318–27. doi: 10.3109/09638237.2010.492484

48. Thornicroft G. Shunned: discrimination against people with mental illness . Oxford: Oxford University Press (2009).

49. Köhne ACJ. The ontological status of a psychiatric diagnosis: the case of neurasthenia. Philosophy Psychiatry Psychol . (2019) 26:E–1-E-11. doi: 10.1353/ppp.2019.0008

50. Hacking I. Kinds of people: moving targets. Proceedings of the British Academy . Oxford: Oxford University Press Inc (2007). doi: 10.5871/bacad/9780197264249.003.0010

51. Hacking I. The social construction of what? Harvard: Harvard University Press (1999).

52. Wienen AW, Sluiter MN, Thoutenhoofd E, de Jonge P, Batstra L. The advantages of an ADHD classification from the perspective of teachers. Eur J Special Needs Educ . (2019) 34:1–14. doi: 10.1080/08856257.2019.1580838

53. Franz DJ, Richter T, Lenhard W, Marx P, Stein R, Ratz C. The influence of diagnostic labels on the evaluation of students: a multilevel meta-analysis. Educ Psychol Rev . (2023) 35. doi: 10.1007/s10648-023-09716-6

54. Köhne ACJ. In search of a better ontology of mental disorder. (2022). doi: 10.33540/1591

55. de Ridder B, van Hulst BM. Disorderism: what it is and why it’s a problem. Tijdschr Psychiatr . (2023) 65:163–6.

56. Kazda L, Bell K, Thomas R, McGeechan K, Sims R, Barratt A. Overdiagnosis of Attention-Deficit/Hyperactivity Disorder in children and adolescents. JAMA Network Open . (2021) 4. doi: 10.1001/jamanetworkopen.2021.5335

57. Beeker T, Mills C, Bhugra D, te Meerman S, Thoma S, Heinze M, et al. Psychiatrization of society: A conceptual framework and call for transdisciplinary research. Front Psychiatry . (2021) 12:645556. doi: 10.3389/fpsyt.2021.645556

58. van Os J, Guloksuz S. Population salutogenesis—The future of psychiatry? JAMA Psychiatry . (2024) 81:115–5. doi: 10.1001/jamapsychiatry.2023.4582

59. Star SL, Griesemer JR. Institutional ecology, `Translations’ and boundary objects: amateurs and professionals in berkeley’s museum of vertebrate zoology, 1907-39. Soc Stud Sci . (1989) 19:387–420. doi: 10.1177/030631289019003001

60. Marsman A, Pries L-K, ten Have M, de Graaf R, van Dorsselaer S, Bak M, et al. Do current measures of polygenic risk for mental disorders contribute to population variance in mental health? Schizophr Bull . (2020). doi: 10.1093/schbul/sbaa086

61. van Os J. Personalized psychiatry: Geen vervanger van persoonlijke psychiatrie. Tijdschrift voor Psychiatr . (2018) 60:199–204.

62. Hyman SE. Psychiatric disorders: grounded in human biology but not natural kinds. Perspect Biol Med . (2021) 64:6–28. doi: 10.1353/pbm.2021.0002

63. Schleim S. Why mental disorders are brain disorders. And why they are not: ADHD and the challenges of heterogeneity and reification. Front Psychiatry . (2022) 13:943049. doi: 10.3389/fpsyt.2022.943049

64. Köhne ACJ. The relationalist turn in understanding mental disorders: from essentialism to embracing dynamic and complex relations. Philosophy Psychiatry Psychol . (2020) 27:119–40. doi: 10.1353/ppp.2020.0020

65. Terlouw G, Veer JTv’t, Prins JT, Kuipers DA, Pierie J-PEN. Design of a digital comic creator (It’s me) to facilitate social skills training for children with autism spectrum disorder: design research approach. JMIR Ment Health . (2020) 7:e17260. doi: 10.2196/17260

66. Terlouw G, Kuipers D, van ’t Veer J, Prins JT, Pierie JPEN. The development of an escape room–based serious game to trigger social interaction and communication between high-functioning children with autism and their peers: iterative design approach. JMIR Serious Games . (2021) 9:e19765. doi: 10.2196/19765

67. Kuipers DA, Terlouw G, Wartena BO, Prins JT, Pierie JPEN. Maximizing authentic learning and real-world problem-solving in health curricula through psychological fidelity in a game-like intervention: development, feasibility, and pilot studies. Med Sci Educator . (2018) 29:205–14. doi: 10.1007/s40670-018-00670-5

68. Kajamaa A. Boundary breaking in a hospital. Learn Organ . (2011) 18:361–77. doi: 10.1108/09696471111151710

69. Sajtos L, Kleinaltenkamp M, Harrison J. Boundary objects for institutional work across service ecosystems. J Service Manage . (2018) 29:615–40. doi: 10.1108/JOSM-01-2017-0011

70. Jensen S, Kushniruk A. Boundary objects in clinical simulation and design of eHealth. Health Inf J . (2014) 22:248–64. doi: 10.1177/1460458214551846

71. Terlouw G, Kuipers D, Veldmeijer L, van ’t Veer J, Prins J, Pierie J-P. Boundary objects as dialogical learning accelerators for social change in design for health: systematic review. JMIR Hum Factors . (2021). doi: 10.2196/31167

72. Wiener HJD. Conversation pieces: the role of products in facilitating conversation. Dukespace . (2017). Available online at: https://hdl.handle.net/10161/14430 .

73. Jaspers K. General psychopathology . Baltimore, Md: Johns Hopkins University Press (1998).

74. Parnas J, Sass LA, Zahavi D. Rediscovering psychopathology: the epistemology and phenomenology of the psychiatric object. Schizophr Bull . (2012) 39:270–7. doi: 10.1093/schbul/sbs153

75. Feyaerts J, Henriksen MG, Vanheule S, Myin-Germeys I, Sass LA. Delusions beyond beliefs: a critical overview of diagnostic, aetiological, and therapeutic schizophrenia research from a clinical-phenomenological perspective. Lancet Psychiatry . (2021) 8:237–49. doi: 10.1016/S2215-0366(20)30460-0

76. Ritunnano R, Kleinman J, Whyte Oshodi D, Michail M, Nelson B, Humpston CS, et al. Subjective experience and meaning of delusions in psychosis: a systematic review and qualitative evidence synthesis. Lancet Psychiatry . (2022) 9:458–76. doi: 10.1016/S2215-0366(22)00104-3

77. Köhne ACJ, van Os J. Precision psychiatry: promise for the future or rehash of a fossilised foundation? psychol Med . (2021) 51:1409–11. doi: 10.1017/S0033291721000271

78. Boevink WA. From being a disorder to dealing with life: an experiential exploration of the association between trauma and psychosis. Schizophr Bull . (2005) 32:17–9. doi: 10.1093/schbul/sbi068

79. Slade M, Longden E. Empirical evidence about recovery and mental health. BMC Psychiatry . (2015) 15. doi: 10.1186/s12888-015-0678-4

80. Leamy M, Bird V, Boutillier CL, Williams J, Slade M. Conceptual framework for personal recovery in mental health: systematic review and narrative synthesis. Br J Psychiatry . (2011) 199:445–52. doi: 10.1192/bjp.bp.110.083733

81. Groot PC, van Os J. How user knowledge of psychotropic drug withdrawal resulted in the development of person-specific tapering medication. Ther Adv Psychopharmacol . (2020) 10:204512532093245. doi: 10.1177/2045125320932452

82. Fusar-Poli P, Estradé A, Stanghellini G, Venables J, Onwumere J, Messas G, et al. The lived experience of psychosis: A bottom-up review co-written by experts by experience and academics. World Psychiatry . (2022) 21:168–88. doi: 10.1002/wps.20959

83. Jakobsson C, Genovesi E, Afolayan A, Bella-Awusah T, Omobowale O, Buyanga M, et al. Co-producing research on psychosis: a scoping review on barriers, facilitators and outcomes. Int J Ment Health Syst . (2023) 17. doi: 10.1186/s13033-023-00594-7

84. Orlowski S, Matthews B, Bidargaddi N, Jones G, Lawn S, Venning A, et al. Mental health technologies: designing with consumers. JMIR Hum Factors . (2016) 3. doi: 10.2196/humanfactors.4336

85. Vial S, Boudhraâ S, Dumont M. Human-centered design approaches in digital mental health interventions: exploratory mapping review. JMIR Ment Health . (2021) 9. doi: 10.2196/35591

86. Tindall RM, Ferris M, Townsend M, Boschert G, Moylan S. A first-hand experience of co-design in mental health service design: Opportunities, challenges, and lessons. Int J Ment Health Nurs . (2021) 30. doi: 10.1111/inm.12925

87. Schouten SE, Kip H, Dekkers T, Deenik J, Beerlage-de Jong N, Ludden GDS, et al. Best-practices for co-design processes involving people with severe mental illness for eMental health interventions: a qualitative multi-method approach. Design Health . (2022) 6:316–44. doi: 10.1080/24735132.2022.2145814

88. Veldmeijer L, Terlouw G, van Os J, van Dijk O, van ’t Veer J, Boonstra N. The involvement of service users and people with lived experience in mental health care innovation through design: systematic review. JMIR Ment Health . (2023) 10:e46590. doi: 10.2196/46590

89. Veldmeijer L, Terlouw G, van Os J, van ’t Veer J, Boonstra N. The frequency of design studies targeting people with psychotic symptoms and features in mental health care innovation: A research letter of a secondary data analysis. JMIR Ment Health . (2023) 11. doi: 10.2196/54202

90. Hawke LD, Sheikhan NY, Bastidas-Bilbao H, Rodak T. Experience-based co-design of mental health services and interventions: A scoping review. SSM Ment Health . (2024) 5:100309. doi: 10.1016/j.ssmmh.2024.100309

91. Scholz B, Gordon S, Happell B. Consumers in mental health service leadership: A systematic review. Int J Ment Health Nurs . (2016) 26:20–31. doi: 10.1111/inm.12266

92. Brotherdale R, Berry K, Branitsky A, Bucci S. Co-producing digital mental health interventions: A systematic review. Digital Health . (2024) 10. doi: 10.1177/20552076241239172

93. Scholz B. Mindfully reporting lived experience work. Lancet Psychiatry . (2024) 11:168–8. doi: 10.1016/S2215-0366(24)00007-5

94. Davis S, Pinfold V, Catchpole J, Lovelock C, Senthi B, Kenny A. Reporting lived experience work. Lancet Psychiatry . (2024) 11:8–9. doi: 10.1016/S2215-0366(23)00402-9

95. Palmer VJ, Bibb J, Lewis M, Densley K, Kritharidis R, Dettmann E, et al. A co-design living labs philosophy of practice for end-to-end research design to translation with people with lived-experience of mental ill-health and carer/family and kinship groups. Front Public Health . (2023) 11:1206620. doi: 10.3389/fpubh.2023.1206620

96. Tekin Ş. Participatory interactive objectivity in psychiatry. Philosophy Sci . (2022), 1–20. doi: 10.1017/psa.2022.47

97. Gardner C, Kleinman A. Medicine and the mind — The consequences of psychiatry’s identity crisis. New Engl J Med . (2019) 381:1697–9. doi: 10.1056/NEJMp1910603

98. Kendler KS. Potential lessons for DSM from contemporary philosophy of science. JAMA Psychiatry . (2022) 79:99. doi: 10.1001/jamapsychiatry.2021.3559

99. Fusar-Poli P, Estradé Andrés, Stanghellini G, Esposito CM, Rosfort René, Mancini M, et al. The lived experience of depression: a bottom-up review co-written by experts by experience and academics. World Psychiatry . (2023) 22:352–65. doi: 10.1002/wps.21111

100. Smits D-W, van Meeteren K, Klem M, Alsem M, Ketelaar M. Designing a tool to support patient and public involvement in research projects: the Involvement Matrix. Res Involvement Engagement . (2020) 6. doi: 10.1186/s40900-020-00188-4

101. Owens C, Farrand P, Darvill R, Emmens T, Hewis E, Aitken P. Involving service users in intervention design: a participatory approach to developing a text-messaging intervention to reduce repetition of self-harm. Health Expectations . (2010) 14:285–95. doi: 10.1111/hex.2011.14.issue-3

102. Nakarada-Kordic I, Hayes N, Reay SD, Corbet C, Chan A. Co-designing for mental health: creative methods to engage young people experiencing psychosis. Design Health . (2017) 1:229–44. doi: 10.1080/24735132.2017.1386954

103. Orlowski SK, Lawn S, Venning A, Winsall M, Jones GM, Wyld K, et al. Participatory research as one piece of the puzzle: A systematic review of consumer involvement in design of technology-based youth mental health and well-being interventions. JMIR Hum Factors . (2015) 2:e12. doi: 10.2196/humanfactors.4361

104. Gammon D, Strand M, Eng LS. Service users’ perspectives in the design of an online tool for assisted self-help in mental health: a case study of implications. Int J Ment Health Syst . (2014) 8. doi: 10.1186/1752-4458-8-2

105. Wendt T. Design for Dasein: understanding the design of experiences . San Bernardino, California: Thomas Wendt (2015).

106. Bateson G. Steps to an ecology of mind . Chicago: University Of Chicago Press (1972).

107. Schön DA. The Reflective Practitioner: How Professionals Think in Action . Aldershot, U.K: Ashgate, Cop (1991).

108. Verkerke GJ, van der Houwen EB, Broekhuis AA, Bursa J, Catapano G, McCullagh P, et al. Science versus design; comparable, contrastive or conducive? J Mechanical Behav Biomed Materials . (2013) 21:195–201. doi: 10.1016/j.jmbbm.2013.01.009

109. Zachar P. Psychopathology beyond psychiatric symptomatology. Philosophy Psychiatry Psychol . (2020) 27:141–3. doi: 10.1353/ppp.2020.0021

110. Foucault M. Madness and Civilization: A History of Insanity in the Age of Reason . London: Routledge (1961). doi: 10.4324/9780203278796

111. Star SL. This is not a boundary object: reflections on the origin of a concept. Science Technology Hum Values . (2010) 35:601–17. doi: 10.1177/0162243910377624

112. Jones N. Lived experience leadership in peer support research as the new normal. Psychiatr Serv . (2022) 73:125. doi: 10.1176/appi.ps.73201

113. Scholz B. We have to set the bar higher: towards consumer leadership, beyond engagement or involvement. Aust Health Rev . (2022) 46. doi: 10.1071/AH22022

114. Rivard L, Lehoux P, Hagemeister N. Articulating care and responsibility in design: A study on the reasoning processes guiding health innovators’ “care-making” practices. Design Stud . (2021) 72:100986. doi: 10.1016/j.destud.2020.100986

115. Dekker MR, Williams AD. The use of user-centered participatory design in serious games for anxiety and depression. Games Health J . (2017) 6:327–33. doi: 10.1089/g4h.2017.0058

116. Cross N. Designerly ways of knowing . London: Springer (2006).

117. Groeneveld B, Dekkers T, Boon B, D’Olivo P. Challenges for design researchers in healthcare. Design Health . (2018) 2:305–26. doi: 10.1080/24735132.2018.1541699

118. Scholz B, Stewart S, Pamoso A, Gordon S, Happell B, Utomo B. The importance of going beyond consumer or patient involvement to lived experience leadership. Int J Ment Health Nurs . (2023) 33:1–4. doi: 10.1111/inm.13282

Keywords: psychiatry, diagnosis, design, innovation, mental health care

Citation: Veldmeijer L, Terlouw G, van Os J, te Meerman S, van ‘t Veer J and Boonstra N (2024) From diagnosis to dialogue – reconsidering the DSM as a conversation piece in mental health care: a hypothesis and theory. Front. Psychiatry 15:1426475. doi: 10.3389/fpsyt.2024.1426475

Received: 01 May 2024; Accepted: 22 July 2024; Published: 06 August 2024.

Copyright © 2024 Veldmeijer, Terlouw, van Os, te Meerman, van ‘t Veer and Boonstra. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY) . The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Lars Veldmeijer, [email protected]

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.


COMMENTS

  1. Hypothesis Testing

    Hypothesis testing example. You want to test whether there is a relationship between gender and height. Based on your knowledge of human physiology, you formulate a hypothesis that men are, on average, taller than women. ... Definition and Examples The p-value shows the likelihood of your data occurring under the null hypothesis. P-values help ...

  2. Hypothesis Testing: 4 Steps and Example

    Hypothesis testing is an act in statistics whereby an analyst tests an assumption regarding a population parameter. The methodology employed by the analyst depends on the nature of the data used ...

  3. Statistical hypothesis test

    A statistical hypothesis test is a method of statistical inference used to decide whether the data sufficiently support a particular hypothesis. A statistical hypothesis test typically involves a calculation of a test statistic. Then a decision is made, either by comparing the test statistic to a critical value or equivalently by evaluating a p ...

  4. Hypothesis Testing

    Hypothesis testing is a technique that is used to verify whether the results of an experiment are statistically significant. It involves the setting up of a null hypothesis and an alternate hypothesis. There are three types of tests that can be conducted under hypothesis testing - z test, t test, and chi square test.
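As a minimal sketch of the last of these, a chi-square goodness-of-fit statistic can be computed by hand. The observed and expected counts below are illustrative assumptions; 7.815 is the standard tabulated chi-square critical value for 3 degrees of freedom at alpha = 0.05.

```python
# Illustrative counts for a chi-square goodness-of-fit test.
observed = [48, 35, 15, 3]
expected = [50, 30, 15, 5]

# Test statistic: sum of (observed - expected)^2 / expected over all cells.
chi_sq = sum((o - e) ** 2 / e for o, e in zip(observed, expected))

degrees_of_freedom = len(observed) - 1  # 4 categories -> df = 3
critical_value = 7.815  # tabulated chi-square cutoff for df = 3, alpha = 0.05
reject = chi_sq > critical_value
print(f"chi-square = {chi_sq:.3f}, reject H0: {reject}")
```

Here the statistic falls well below the cutoff, so the null hypothesis that the data follow the expected distribution is not rejected.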

  5. Hypothesis Testing: Uses, Steps & Example

Hypothesis testing involves five key steps, each critical to validating a research hypothesis using statistical methods: Formulate the Hypotheses: Write your research hypotheses as a null hypothesis (H0) and an alternative hypothesis (HA). Data Collection: Gather data specifically aimed at testing the hypothesis.

  6. Introduction to Hypothesis Testing

    A hypothesis test consists of five steps: 1. State the hypotheses. State the null and alternative hypotheses. These two hypotheses need to be mutually exclusive, so if one is true then the other must be false. 2. Determine a significance level to use for the hypothesis. Decide on a significance level.
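The five steps above can be sketched end-to-end as a one-sample z-test in plain Python. The sample values and the known population standard deviation are illustrative assumptions, not real data.

```python
import math
from statistics import NormalDist

# Step 1: state the hypotheses.
# H0: population mean mu = 100   H1: mu != 100 (two-sided)
mu_0 = 100

# Step 2: decide on a significance level.
alpha = 0.05

# Step 3: collect sample data (illustrative numbers; sigma assumed known).
sample = [102.1, 99.8, 104.3, 101.0, 98.7, 103.5, 100.9, 102.8]
sigma = 3.0  # known population standard deviation
n = len(sample)
x_bar = sum(sample) / n

# Step 4: compute the test statistic z = (x_bar - mu_0) / (sigma / sqrt(n))
# and its two-sided p-value under the standard normal distribution.
z = (x_bar - mu_0) / (sigma / math.sqrt(n))
p_value = 2 * (1 - NormalDist().cdf(abs(z)))

# Step 5: state the conclusion.
decision = "reject H0" if p_value < alpha else "fail to reject H0"
print(f"z = {z:.3f}, p = {p_value:.4f} -> {decision}")
```

With these numbers the p-value exceeds alpha, so the test fails to reject the null hypothesis.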

  7. Statistical Hypothesis Testing Overview

    Hypothesis testing is a crucial procedure to perform when you want to make inferences about a population using a random sample. These inferences include estimating population properties such as the mean, differences between means, proportions, and the relationships between variables. This post provides an overview of statistical hypothesis testing.

  8. 9.1: Introduction to Hypothesis Testing

In hypothesis testing, the goal is to see if there is sufficient statistical evidence to reject a presumed null hypothesis in favor of a conjectured alternative hypothesis. The null hypothesis is usually denoted \(H_0\) while the alternative hypothesis is usually denoted \(H_1\). A hypothesis test is a statistical decision; the conclusion will either be to reject the null hypothesis in favor ...

  9. Hypothesis Testing

    A hypothesis test is a statistical inference method used to test the significance of a proposed (hypothesized) relation between population statistics (parameters) and their corresponding sample estimators. In other words, hypothesis tests are used to determine if there is enough evidence in a sample to prove a hypothesis true for the entire population.

  10. 7.1: Basics of Hypothesis Testing

Test Statistic: \(z = \frac{\bar{x} - \mu_0}{\sigma / \sqrt{n}}\), since it is calculated as part of the testing of the hypothesis. Definition 7.1.4. p-value: the probability that the test statistic will take on values more extreme than the observed test statistic, given that the null hypothesis is true.
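The test statistic and p-value defined above can be computed directly. This sketch assumes a right-tailed alternative and illustrative numbers, with the population standard deviation treated as known (the z-test setting).

```python
import math
from statistics import NormalDist

# Illustrative numbers: testing H0: mu = 12 against H1: mu > 12,
# with known sigma (an assumption required for a z-test).
x_bar, mu_0, sigma, n = 12.8, 12.0, 2.0, 36

# z = (x_bar - mu_0) / (sigma / sqrt(n))
z = (x_bar - mu_0) / (sigma / math.sqrt(n))

# p-value: probability of a test statistic at least this extreme under H0
# (right-tailed area, matching H1: mu > 12).
p_value = 1 - NormalDist().cdf(z)
print(f"z = {z:.2f}, p = {p_value:.4f}")
```

A small p-value like this one indicates the observed mean would be unlikely if the null hypothesis were true.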

  11. An Introduction to Statistics: Understanding Hypothesis Testing and

    HYPOTHESIS TESTING. A clinical trial begins with an assumption or belief, and then proceeds to either prove or disprove this assumption. In statistical terms, this belief or assumption is known as a hypothesis. Counterintuitively, what the researcher believes in (or is trying to prove) is called the "alternate" hypothesis, and the opposite ...

  12. Hypothesis Testing: Definition, Uses, Limitations + Examples

    What is Hypothesis Testing? Hypothesis testing is an assessment method that allows researchers to determine the plausibility of a hypothesis. It involves testing an assumption about a specific population parameter to know whether it's true or false. These population parameters include variance, standard deviation, and median.

  13. Statistics

    Statistics - Hypothesis Testing, Sampling, Analysis: Hypothesis testing is a form of statistical inference that uses data from a sample to draw conclusions about a population parameter or a population probability distribution. First, a tentative assumption is made about the parameter or distribution. This assumption is called the null hypothesis and is denoted by H0.

  14. Hypothesis Testing explained in 4 parts

First, the technical definition of power is 1−β: given a particular alternative hypothesis and given our null, sample size, and decision rule (alpha = 0.05), it is the probability that we correctly reject the null when that alternative is true. We visualize the yellow area below. Second, power is really intuitive in its definition.

  15. Hypothesis testing

hypothesis testing, In statistics, a method for testing how accurately a mathematical model based on one set of data predicts the nature of other data sets generated by the same process. Hypothesis testing grew out of quality control, in which whole batches of manufactured items are accepted or rejected based on testing relatively small samples. An initial hypothesis (null hypothesis) might ...

  16. What is Hypothesis Testing in Statistics? Types and Examples

    Hypothesis testing is a statistical method used to determine if there is enough evidence in a sample data to draw conclusions about a population. It involves formulating two competing hypotheses, the null hypothesis (H0) and the alternative hypothesis (Ha), and then collecting data to assess the evidence.

  17. 3.1: The Fundamentals of Hypothesis Testing

This tests whether the population parameter is equal to, versus less than, some specific value. H0: μ = 12 vs. H1: μ < 12. The critical region is in the left tail and the critical value is a negative value that defines the rejection zone. Figure 3.1.3: The rejection zone for a left-sided hypothesis test.
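The left-tailed rejection zone described above can be found from the normal quantile function. The observed statistic below is an illustrative assumption.

```python
from statistics import NormalDist

alpha = 0.05

# Critical value for a left-tailed z-test: the rejection zone is z <= z_crit.
z_crit = NormalDist().inv_cdf(alpha)  # roughly -1.645

# An observed statistic deep in the left tail leads to rejecting
# H0: mu = 12 in favor of H1: mu < 12.
z_observed = -2.1
reject = z_observed <= z_crit
print(f"critical value = {z_crit:.3f}, reject H0: {reject}")
```

Any observed z at or below the critical value falls in the rejection zone, so the null is rejected.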

  18. Significance tests (hypothesis testing)

    Significance tests give us a formal process for using sample data to evaluate the likelihood of some claim about a population value. Learn how to conduct significance tests and calculate p-values to see how likely a sample result is to occur by random chance. You'll also see how we use p-values to make conclusions about hypotheses.

  19. What is Hypothesis Testing? Types and Methods

The frequentist hypothesis, or the traditional approach to hypothesis testing, is a method that makes assumptions based on the current data. The supposed truths and assumptions are based on the current data, and a set of two hypotheses is formulated.

  20. Hypothesis Testing Definition, Steps & Examples

    Hypothesis Testing Steps. There are 5 main hypothesis testing steps, which will be outlined in this section. The steps are: Determine the null hypothesis: In this step, the statistician should ...

  21. Hypothesis Testing

Hypothesis testing in statistics refers to analyzing an assumption about a population parameter. It is used to make an educated guess about an assumption using statistics. With the use of sample data, hypothesis testing assesses how plausible that assumption is for the entire population from which the sample is taken.

  22. A Beginner's Guide to Hypothesis Testing in Business

    3. One-Sided vs. Two-Sided Testing. When it's time to test your hypothesis, it's important to leverage the correct testing method. The two most common hypothesis testing methods are one-sided and two-sided tests, or one-tailed and two-tailed tests, respectively. Typically, you'd leverage a one-sided test when you have a strong conviction ...
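The relationship between one-tailed and two-tailed tests can be shown directly: for a symmetric null distribution such as the standard normal, the two-sided p-value is double the one-sided tail area. The observed statistic below is illustrative.

```python
from statistics import NormalDist

z = 1.96  # observed test statistic (illustrative)

# One-sided (right-tailed) p-value: area above z.
p_one_sided = 1 - NormalDist().cdf(z)

# Two-sided p-value: area in both tails beyond |z| -- double the
# one-sided tail, because the normal distribution is symmetric.
p_two_sided = 2 * p_one_sided
print(f"one-sided p = {p_one_sided:.4f}, two-sided p = {p_two_sided:.4f}")
```

This is why a result that is significant at alpha = 0.05 in a one-tailed test may not be significant in a two-tailed test.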

  23. Understanding Hypothesis Testing

Hypothesis testing is an important procedure in statistics. It evaluates two mutually exclusive statements about a population to determine which statement is best supported by the sample data. When we say that findings are statistically significant, it is thanks to hypothesis testing.

  24. A new look at the CO2 haven hypothesis using gravity model European

    In order to test the hypothesis, experimental data from China and OECD countries are used. ... According to the definition of PHP, what causes the movement of FDI between two countries is the ...

  25. Frontiers

    The definition of mental disorder in the DSM-5 was thereby conceptualized as: "… a syndrome characterized by clinically significant disturbance in an individual's cognition, emotion regulation, or behavior that reflects a dysfunction in the psychological, biological, or developmental processes underlying mental functioning." ( 14 ).