Hypothesis testing is the act of testing a hypothesis, or supposition, about a statistical parameter. Analysts use hypothesis testing to determine whether a hypothesis is plausible.
In data science and statistics, hypothesis testing is an important step, as it involves verifying an assumption that could help develop a statistical parameter. For instance, a researcher might establish a hypothesis assuming that the average of all odd numbers is an even number.
To assess the plausibility of this hypothesis, the researcher must test it using hypothesis testing methods. Unlike a hypothesis that is 'supposed' to stand true on the basis of little or no evidence, hypothesis testing requires plausible evidence in order to establish that a statistical hypothesis is true.
This is where statistics plays an important role. A number of components are involved in the process. Before examining the process of hypothesis testing in research methodology, we shall first look at the types of hypotheses involved. Let us get started!
In data sampling, different types of hypotheses are involved in determining whether the sampled data support a hypothesis. In this segment, we shall look at the different types of hypotheses and the role they play in hypothesis testing.
Alternative Hypothesis (H1) or the research hypothesis states that there is a relationship between two variables (where one variable affects the other). The alternative hypothesis is the main driving force for hypothesis testing.
It implies that the two variables are related to each other and the relationship that exists between them is not due to chance or coincidence.
When the process of hypothesis testing is carried out, the alternative hypothesis is the main subject of the testing process. The analyst intends to test the alternative hypothesis and verifies its plausibility.
The Null Hypothesis (H0) aims to nullify the alternative hypothesis by implying that there exists no relation between two variables in statistics. It states that the effect of one variable on the other is solely due to chance and no empirical cause lies behind it.
The null hypothesis is established alongside the alternative hypothesis and is considered just as important. In hypothesis testing, the null hypothesis plays a major role, as the testing is carried out against it.
The Non-directional hypothesis states that the relation between two variables has no direction.
Simply put, it asserts that there exists a relation between two variables, but does not recognize the direction of effect, whether variable A affects variable B or vice versa.
The Directional hypothesis, on the other hand, asserts the direction of effect of the relationship that exists between two variables.
Herein, the hypothesis clearly states that variable A affects variable B, or vice versa.
A statistical hypothesis is a hypothesis that can be verified to be plausible on the basis of statistics.
By using data sampling and statistical knowledge, one can determine the plausibility of a statistical hypothesis and find out if it stands true or not.
Now that we have understood the types of hypotheses and the role they play in hypothesis testing, let us now move on to understand the process in a better manner.
In hypothesis testing, a researcher is first required to establish two hypotheses - alternative hypothesis and null hypothesis in order to begin with the procedure.
To establish these two hypotheses, one is required to study data samples, find a plausible pattern among the samples, and pen down a statistical hypothesis that they wish to test.
A random sample can be drawn from the population to begin hypothesis testing. Of the two hypotheses, alternative and null, only one can be shown to be plausible; however, the presence of both is required for the process to work.
At the end of the hypothesis testing procedure, either of the hypotheses will be rejected and the other one will be supported. Even though one of the two hypotheses turns out to be true, no hypothesis can ever be verified 100%.
Therefore, a hypothesis can only be supported based on the statistical samples and verified data. Here is a step-by-step guide for hypothesis testing.
First things first, one is required to establish two hypotheses - alternative and null, that will set the foundation for hypothesis testing.
These hypotheses initiate the testing process that involves the researcher working on data samples in order to either support the alternative hypothesis or the null hypothesis.
Once the hypotheses have been formulated, it is now time to generate a testing plan. A testing plan or an analysis plan involves the accumulation of data samples, determining which statistic is to be considered and laying out the sample size.
All these factors are very important while one is working on hypothesis testing.
As soon as a testing plan is ready, it is time to move on to the analysis part. Analysis of data samples involves configuring statistical values of samples, drawing them together, and deriving a pattern out of these samples.
While analyzing the data samples, a researcher needs to determine a set of things -
Significance Level - The significance level is the probability of rejecting the null hypothesis when it is in fact true; results with a p-value below this threshold are considered statistically significant.
Testing Method - The testing method involves a type of sampling-distribution and a test statistic that leads to hypothesis testing. There are a number of testing methods that can assist in the analysis of data samples.
Test statistic - Test statistic is a numerical summary of a data set that can be used to perform hypothesis testing.
P-value - The p-value is the probability, assuming the null hypothesis is true, of observing a sample statistic at least as extreme as the test statistic; it indicates how plausible the null hypothesis is given the data.
The analysis of data samples leads to the inference of results that establishes whether the alternative hypothesis stands true or not. When the P-value is less than the significance level, the null hypothesis is rejected and the alternative hypothesis turns out to be plausible.
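This decision rule can be sketched in a few lines of Python using SciPy; the sample values and the null-hypothesis mean below are invented purely for illustration:

```python
from scipy import stats

# Hypothetical sample, e.g. measurements collected for the study
sample = [108, 112, 104, 110, 115, 107, 111, 109, 106, 113]
mu_0 = 100      # value asserted by the null hypothesis
alpha = 0.05    # chosen significance level

# One-sample t-test of H0: population mean == mu_0
t_stat, p_value = stats.ttest_1samp(sample, mu_0)

if p_value < alpha:
    print("Reject the null hypothesis")      # alternative is plausible
else:
    print("Fail to reject the null hypothesis")
```

With these numbers the sample mean sits far above 100 relative to its spread, so the test rejects the null hypothesis.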
As we have already looked into different aspects of hypothesis testing, we shall now look into the different methods. The two most common hypothesis testing methods are as follows:
The frequentist approach, or the traditional approach to hypothesis testing, makes assumptions by considering only the current data.
The supposed truths and assumptions are based on the data at hand, and a set of two hypotheses is formulated. A very popular subtype of the frequentist approach is Null Hypothesis Significance Testing (NHST).
The NHST approach (involving the null and alternative hypotheses) has been one of the most widely used methods of hypothesis testing in statistics since its development in the mid-20th century.
A more modern method of hypothesis testing, Bayesian hypothesis testing evaluates a particular hypothesis by combining past data samples, known as the prior probability, with current data to assess the plausibility of the hypothesis.
The result obtained indicates the posterior probability of the hypothesis. In this method, the researcher relies on both the prior and posterior probabilities to conduct the hypothesis test at hand.
On the basis of the prior probability, the Bayesian approach tests whether a hypothesis is true or false. The Bayes factor, a major component of this method, is the likelihood ratio between the null hypothesis and the alternative hypothesis.
The Bayes factor is the indicator of the plausibility of either of the two hypotheses that are established for hypothesis testing.
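As a toy illustration of the Bayes factor (with invented numbers, not from the text), consider testing whether a coin is fair after observing 65 heads in 100 flips. Under a uniform prior on the coin's bias, the marginal likelihood of the data has a simple closed form, 1/(n + 1):

```python
from scipy import stats

# Toy example (numbers invented): 65 heads in 100 coin flips.
# H0: fair coin (p = 0.5); H1: bias p uniform on [0, 1].
n, k = 100, 65

# Marginal likelihood of the data under H0
m0 = stats.binom.pmf(k, n, 0.5)

# Under H1 with a uniform Beta(1, 1) prior, the marginal likelihood
# integrates to exactly 1 / (n + 1), whatever k is
m1 = 1.0 / (n + 1)

bayes_factor = m1 / m0   # BF10: evidence for H1 over H0
print(f"BF10 = {bayes_factor:.1f}")   # > 1: support for H1
```

A Bayes factor well above 1 favors the alternative (biased coin); well below 1 it favors the null.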
To conclude, hypothesis testing, a way to verify the plausibility of a supposed assumption, can be done through different methods: the Bayesian approach or the frequentist approach.
While the Bayesian approach incorporates the prior probability of data samples, the frequentist approach draws on the observed data alone. The elements involved in hypothesis testing include the significance level, the p-value, the test statistic, and the method of hypothesis testing.
A significant way to determine whether a hypothesis stands true or not is to verify the data samples and identify the plausible hypothesis among the null hypothesis and alternative hypothesis.
Hypothesis testing in statistics refers to analyzing an assumption about a population parameter. It is used to make an educated, statistically grounded judgment about an assumption. Using sample data, hypothesis testing assesses how plausible the assumption is for the entire population from which the sample is taken.
Any hypothetical statement we make may or may not be valid, and it is then our responsibility to provide evidence for its possibility. To approach any hypothesis, we follow these four simple steps that test its validity.
First, we formulate two hypothetical statements such that only one of them is true. By doing so, we can check the validity of our own hypothesis.
The next step is to formulate the statistical analysis to be followed based upon the data points.
Then we analyze the given data using our methodology.
The final step is to analyze the result and decide whether to reject the null hypothesis or fail to reject it.
It is observed that the average recovery time for a knee-surgery patient is 8 weeks. A physician believes that after successful knee surgery, if the patient goes for physical therapy twice a week rather than thrice a week, the recovery period will be longer. Set up and conduct the hypothesis test for this statement.
David is a ten-year-old who swims a 25-yard freestyle in a mean time of 16.43 seconds. David's father bought goggles for his son, believing they would help him reduce his time. He then recorded a total of fifteen 25-yard freestyle swims for David, and the average time came out to be 16 seconds. Conduct a hypothesis test.
A tire company claims their A-segment of tires has a running life of 50,000 miles before they need to be replaced, and previous studies show a standard deviation of 8,000 miles. After surveying a total of 28 tires, the mean life came to 46,500 miles with a standard deviation of 9,800 miles. Is the claim made by the tire company consistent with the given data? Conduct hypothesis testing.
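As a sketch of how the tire problem might be solved (treating the 8,000-mile figure from the previous studies as a known population standard deviation, which calls for a z-test):

```python
from math import sqrt
from scipy.stats import norm

# Tire example from the text: claimed mean life mu = 50,000 miles,
# sigma = 8,000 miles from previous studies (treated as known),
# n = 28 tires, observed sample mean = 46,500 miles
mu_0, sigma, n, x_bar = 50_000, 8_000, 28, 46_500

z = (x_bar - mu_0) / (sigma / sqrt(n))   # z-statistic
p_value = 2 * norm.cdf(-abs(z))          # two-tailed p-value

print(f"z = {z:.2f}, p = {p_value:.4f}")
# p falls below 0.05, so at the 5% level the company's claim
# is not consistent with the observed data
```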
All of the hypothesis testing examples are from real-life situations, which leads us to believe that hypothesis testing is a very practical topic indeed. It is an integral part of a researcher's study and is used in every research methodology in one way or another.
Inferential statistics deals largely with hypothesis testing. The research hypothesis states that there is a relationship between the independent variable and the dependent variable, whereas the null hypothesis rejects the claim of any relationship between the two. Our job as researchers or students is to check whether any relationship exists.
Now that we are clear about what hypothesis testing is, let's look at its use in research methodology. Hypothesis testing is at the centre of research projects.
Often, after formulating research statements, the validity of those statements needs to be verified. Hypothesis testing gives the researcher a statistical approach to the theoretical assumptions they have made. It can be understood as quantitative results for a qualitative problem.
Hypothesis testing provides various techniques to test the hypothesis statement depending upon the variable and the data points. It finds its use in almost every field of research while answering statements such as whether this new medicine will work, a new testing method is appropriate, or if the outcomes of a random experiment are probable or not.
To find the validity of any statement, we have to strictly follow the stepwise procedure of hypothesis testing. After stating the initial hypothesis, we have to re-write them in the form of a null and alternate hypothesis. The alternate hypothesis predicts a relationship between the variables, whereas the null hypothesis predicts no relationship between the variables.
After writing them as H₀ (null hypothesis) and Hₐ (alternate hypothesis), only one of the statements can be true. For example, taking the hypothesis that, on average, men are taller than women, we write the statements as:
H₀: On average, men are not taller than women.
Hₐ: On average, men are taller than women.
Our next aim is to collect sample data, what we call sampling, in a way so that we can test our hypothesis. Your data should come from the concerned population for which you want to make a hypothesis.
What is the p-value in hypothesis testing? The p-value is the probability, assuming the null hypothesis is true, of obtaining results at least as extreme as those observed.
You will obtain your p-value after choosing the hypothesis testing method, which will be the guiding factor in rejecting the hypothesis. Usually, the p-value cutoff for rejecting the null hypothesis is 0.05. So anything below that, you will reject the null hypothesis.
A low p-value means that the between-group variance is large enough that there is almost no overlapping, and it is unlikely that these came about by chance. A high p-value suggests there is a high within-group variance and low between-group variance, and any difference in the measure is due to chance only.
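The between-group versus within-group idea can be demonstrated with a quick simulation in Python; the group means and spreads below are invented purely for illustration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Two simulated groups: large between-group separation (means 10 vs 11.5)
# relative to the within-group spread (sd = 1)
group_a = rng.normal(loc=10.0, scale=1.0, size=50)
group_b = rng.normal(loc=11.5, scale=1.0, size=50)

t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(p_value < 0.05)   # low p-value: difference unlikely due to chance
```

Shrinking the gap between the two means (or inflating the within-group spread) pushes the p-value back up toward 1.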
When forming conclusions through research, two sorts of errors are common. During a statistical survey or research study, a hypothesis must be set and defined; this is called a statistical hypothesis. It is, in fact, an assumption about a population parameter, and it is by no means guaranteed to be correct. Hypothesis testing refers to the predetermined formal procedures statisticians use to determine whether hypotheses should be accepted or rejected: the process of selecting hypotheses for a given probability distribution based on observed data. Hypothesis testing is a fundamental and crucial topic in statistics.
The quick answer is that you must, as a scientist; it is part of the scientific process. Science employs a variety of methods to test or reject theories, ensuring that any new hypothesis is free of errors. One safeguard against incorrect research is to include both a null and an alternate hypothesis. The scientific community considers it poor practice not to incorporate the null hypothesis in your research. You are almost certainly setting yourself up for failure if you set out to prove a theory without first examining the null. At the very least, your experiment will not be taken seriously.
There are several types of hypothesis testing, and they are used based on the data provided. Depending on the sample size and the data given, we choose among different hypothesis testing methodologies. Here starts the use of hypothesis testing tools in research methodology.
Normality - This type of testing is used to check whether a population sample follows a normal distribution. If the data points are grouped around the mean, a point is equally likely to fall above or below it. The distribution's shape resembles a bell curve, symmetric on either side of the mean.
T-test- This test is used when the sample size in a normally distributed population is comparatively small, and the standard deviation is unknown. Usually, if the sample size drops below 30, we use a T-test to find the confidence intervals of the population.
Chi-Square Test- The Chi-Square test is used to test the population variance against the known or assumed value of the population variance. It is also a better choice to test the goodness of fit of a distribution of data. The two most common Chi-Square tests are the Chi-Square test of independence and the chi-square test of variance.
ANOVA- Analysis of Variance or ANOVA compares the data sets of two different populations or samples. It is similar in its use to the t-test or the Z-test, but it allows us to compare more than two sample means. ANOVA allows us to test the significance between an independent variable and a dependent variable, namely X and Y, respectively.
Z-test- It is a statistical measure to test that the means of two population samples are different when their variance is known. For a Z-test, the population is assumed to be normally distributed. A z-test is better suited in the case of large sample sizes greater than 30. This is due to the central limit theorem that as the sample size increases, the samples are considered to be distributed normally.
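The sample-size rule of thumb above can be sketched as follows; the data are simulated, and the cutoff of 30 is the heuristic from the text:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
sample = rng.normal(loc=52, scale=10, size=25)   # simulated, n < 30
mu_0 = 50

if len(sample) < 30:
    # Small sample, sigma unknown -> t-test
    stat, p = stats.ttest_1samp(sample, mu_0)
    test_used = "t-test"
else:
    # Large sample -> z-test, using the sample std as an estimate of sigma
    z = (sample.mean() - mu_0) / (sample.std(ddof=1) / np.sqrt(len(sample)))
    p = 2 * stats.norm.sf(abs(z))
    test_used = "z-test"

print(test_used)
```

By the central limit theorem, the two branches give nearly identical answers once n grows large.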
1. Mention the types of hypotheses.
Every hypothesis test involves two hypotheses:
Null hypothesis: denoted as H₀.
Alternative hypothesis: denoted as H₁ or Hₐ.
2. What are the two errors that can occur while performing a null hypothesis test?
While performing a null hypothesis test, two types of errors can occur:
Type I: The Type I error, denoted by α, is also known as the significance level. It is the rejection of a true null hypothesis, an error of commission.
Type II: The Type II error is denoted by β, and (1 − β) is known as the power of the test. It is the failure to reject a false null hypothesis, an error of omission.
3. What is the p-value in hypothesis testing?
In hypothesis testing, the p-value indicates the probability of obtaining a result as extreme as the observed result, assuming the null hypothesis is true. A smaller p-value provides stronger evidence to reject the null hypothesis in favor of the alternate hypothesis. The p-value serves as a rejection point: it is the smallest level of significance at which the null hypothesis would be rejected. Often the p-value is found using p-value tables, based on the deviation between the observed value and the chosen reference value.
It may also be calculated mathematically by integrating the area under the sampling distribution curve that lies at least as far from the reference value as the observed value, relative to the total area of the curve. The p-value determines the evidence to reject the null hypothesis in hypothesis testing.
4. What is a null hypothesis?
The null hypothesis in statistics says that there is no real difference between the populations being compared. It serves as a conjecture proposing no difference, whereas the alternate hypothesis says there is one. When we perform hypothesis testing, we state the null and alternative hypotheses such that only one of them can be true.
By determining the p-value, we calculate whether the null hypothesis is to be rejected or not. If the difference between groups is low, it is merely by chance, and the null hypothesis, which states that there is no difference among groups, is true. Therefore, we have no evidence to reject the null hypothesis.
Becoming a more data-driven decision-maker can bring several benefits to your organization, enabling you to identify new opportunities to pursue and threats to abate. Rather than allowing subjective thinking to guide your business strategy, backing your decisions with data can empower your company to become more innovative and, ultimately, profitable.
If you’re new to data-driven decision-making, you might be wondering how data translates into business strategy. The answer lies in generating a hypothesis and verifying or rejecting it based on what various forms of data tell you.
Below is a look at hypothesis testing and the role it plays in helping businesses become more data-driven.
To understand what hypothesis testing is, it’s important first to understand what a hypothesis is.
A hypothesis or hypothesis statement seeks to explain why something has happened, or what might happen, under certain conditions. It can also be used to understand how different variables relate to each other. Hypotheses are often written as if-then statements; for example, “If this happens, then this will happen.”
Hypothesis testing , then, is a statistical means of testing an assumption stated in a hypothesis. While the specific methodology leveraged depends on the nature of the hypothesis and data available, hypothesis testing typically uses sample data to extrapolate insights about a larger population.
When it comes to data-driven decision-making, there’s a certain amount of risk that can mislead a professional. This could be due to flawed thinking or observations, incomplete or inaccurate data , or the presence of unknown variables. The danger in this is that, if major strategic decisions are made based on flawed insights, it can lead to wasted resources, missed opportunities, and catastrophic outcomes.
The real value of hypothesis testing in business is that it allows professionals to test their theories and assumptions before putting them into action. This essentially allows an organization to verify its analysis is correct before committing resources to implement a broader strategy.
As one example, consider a company that wishes to launch a new marketing campaign to revitalize sales during a slow period. Doing so could be an incredibly expensive endeavor, depending on the campaign’s size and complexity. The company, therefore, may wish to test the campaign on a smaller scale to understand how it will perform.
In this example, the hypothesis that’s being tested would fall along the lines of: “If the company launches a new marketing campaign, then it will translate into an increase in sales.” It may even be possible to quantify how much of a lift in sales the company expects to see from the effort. Pending the results of the pilot campaign, the business would then know whether it makes sense to roll it out more broadly.
1. Alternative Hypothesis and Null Hypothesis
In hypothesis testing, the hypothesis that’s being tested is known as the alternative hypothesis . Often, it’s expressed as a correlation or statistical relationship between variables. The null hypothesis , on the other hand, is a statement that’s meant to show there’s no statistical relationship between the variables being tested. It’s typically the exact opposite of whatever is stated in the alternative hypothesis.
For example, consider a company’s leadership team that historically and reliably sees $12 million in monthly revenue. They want to understand if reducing the price of their services will attract more customers and, in turn, increase revenue.
In this case, the alternative hypothesis may take the form of a statement such as: “If we reduce the price of our flagship service by five percent, then we’ll see an increase in sales and realize revenues greater than $12 million in the next month.”
The null hypothesis, on the other hand, would indicate that revenues wouldn’t increase from the base of $12 million, or might even decrease.
Statistically speaking, if you were to run the same scenario 100 times, you’d likely receive somewhat different results each time. If you were to plot these results in a distribution plot, you’d see the most likely outcome is at the tallest point in the graph, with less likely outcomes falling to the right and left of that point.
With this in mind, imagine you’ve completed your hypothesis test and have your results, which indicate there may be a correlation between the variables you were testing. To understand your results' significance, you’ll need to identify a p-value for the test, which helps note how confident you are in the test results.
In statistics, the p-value depicts the probability that, assuming the null hypothesis is correct, you might still observe results that are at least as extreme as the results of your hypothesis test. The smaller the p-value, the more likely the alternative hypothesis is correct, and the greater the significance of your results.
When it’s time to test your hypothesis, it’s important to leverage the correct testing method. The two most common hypothesis testing methods are one-sided and two-sided tests , or one-tailed and two-tailed tests, respectively.
Typically, you’d leverage a one-sided test when you have a strong conviction about the direction of change you expect to see due to your hypothesis test. You’d leverage a two-sided test when you’re less confident in the direction of change.
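Using SciPy's `alternative` parameter, the two choices can be compared on the same data; the revenue figures below are hypothetical:

```python
from scipy import stats

# Hypothetical monthly revenues ($M) after a price change
revenues = [12.4, 12.9, 12.1, 13.2, 12.6, 12.8]
baseline = 12.0

# Two-sided: is the mean simply DIFFERENT from the baseline?
_, p_two = stats.ttest_1samp(revenues, baseline)

# One-sided: is the mean specifically GREATER than the baseline?
_, p_one = stats.ttest_1samp(revenues, baseline, alternative="greater")

# When the observed effect is in the hypothesized direction,
# the one-sided p-value is half the two-sided one
print(round(p_two, 4), round(p_one, 4))
```

This halving is why a one-sided test is more powerful, and why it should only be used when the direction was fixed in advance.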
To perform hypothesis testing in the first place, you need to collect a sample of data to be analyzed. Depending on the question you’re seeking to answer or investigate, you might collect samples through surveys, observational studies, or experiments.
A survey involves asking a series of questions to a random population sample and recording self-reported responses.
Observational studies involve a researcher observing a sample population and collecting data as it occurs naturally, without intervention.
Finally, an experiment involves dividing a sample into multiple groups, one of which acts as the control group. For each non-control group, the variable being studied is manipulated to determine how the data collected differs from that of the control group.
Hypothesis testing is a complex process involving different moving pieces that can allow an organization to effectively leverage its data and inform strategic decisions.
If you’re interested in better understanding hypothesis testing and the role it can play within your organization, one option is to complete a course that focuses on the process. Doing so can lay the statistical and analytical foundation you need to succeed.
Hypothesis testing involves formulating assumptions about population parameters based on sample statistics and rigorously evaluating these assumptions against empirical evidence. This article sheds light on the significance of hypothesis testing and the critical steps involved in the process.
A hypothesis is an assumption or idea, specifically a statistical claim about an unknown population parameter. For example, a judge assumes a person is innocent and verifies this by reviewing evidence and hearing testimony before reaching a verdict.
Hypothesis testing is a statistical method used to make a decision from experimental data. It starts from an assumption we make about a population parameter and evaluates two mutually exclusive statements about the population to determine which is best supported by the sample data.
To test the validity of a claim or assumption about a population parameter, consider an example: you claim that the average height in the class is 30, or that boys are taller than girls. These are assumptions, and we need a statistical, mathematical way to establish whether what we are assuming is true.
Hypothesis testing is an important procedure in statistics. It evaluates two mutually exclusive statements about a population to determine which is most supported by the sample data. When we say that findings are statistically significant, it is thanks to hypothesis testing.
One tailed test focuses on one direction, either greater than or less than a specified value. We use a one-tailed test when there is a clear directional expectation based on prior knowledge or theory. The critical region is located on only one side of the distribution curve. If the sample falls into this critical region, the null hypothesis is rejected in favor of the alternative hypothesis.
There are two types of one-tailed test: the right-tailed test (greater than) and the left-tailed test (less than).
A two-tailed test considers both directions, greater than and less than a specified value. We use a two-tailed test when there is no specific directional expectation and we want to detect any significant difference.
Example: H₀: μ = 50 and H₁: μ ≠ 50
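A two-tailed test of H₀: μ = 50 might look like this in Python, using simulated data whose true mean is shifted away from 50:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Simulated sample whose true mean (55) differs from the H0 value (50)
sample = rng.normal(loc=55, scale=5, size=40)

# Two-tailed test of H0: mu = 50 against H1: mu != 50
t_stat, p_value = stats.ttest_1samp(sample, 50)
print(p_value < 0.05)
```

Because the test is two-tailed, it would flag a shift in either direction, above or below 50.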
In hypothesis testing, Type I and Type II errors are two possible errors that researchers can make when drawing conclusions about a population based on a sample of data. These errors are associated with the decisions made regarding the null hypothesis and the alternative hypothesis.
|  | Null Hypothesis is True | Null Hypothesis is False |
| --- | --- | --- |
| Fail to Reject (Accept H₀) | Correct Decision | Type II Error (False Negative) |
| Reject H₀ (Accept H₁) | Type I Error (False Positive) | Correct Decision |
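The meaning of a Type I error can be checked by simulation: if we repeatedly test data for which the null hypothesis is actually true, the fraction of (false) rejections should land near the significance level α. A minimal sketch, with invented simulation settings:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha = 0.05
trials = 2000
false_positives = 0

# Simulate experiments where H0 is actually TRUE (equal group means).
# Every rejection here is therefore a Type I error.
for _ in range(trials):
    a = rng.normal(0.0, 1.0, 30)
    b = rng.normal(0.0, 1.0, 30)
    _, p = stats.ttest_ind(a, b)
    if p < alpha:
        false_positives += 1

print(false_positives / trials)   # close to alpha = 0.05
```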
Step 1: Define the Null and Alternative Hypotheses
State the null hypothesis (H₀), representing no effect, and the alternative hypothesis (H₁), suggesting an effect or difference.
We first identify the problem about which we want to make an assumption, keeping in mind that the null and alternative hypotheses must contradict one another. Here we assume normally distributed data.
Select a significance level (α), typically 0.05, as the threshold for rejecting the null hypothesis. It gives validity to the hypothesis test, ensuring we have sufficient evidence to back up our claims. We usually fix the significance level before running the test; the p-value computed from the data is then compared against it.
Gather relevant data through observation or experimentation. Analyze the data using appropriate statistical methods to obtain a test statistic.
In this step the data for the test are evaluated and summarized into a score, based on the characteristics of the data. The choice of the test statistic depends on the type of hypothesis test being conducted.
There are various hypothesis tests, each appropriate for a different goal. The test could be a Z-test, Chi-square test, T-test, and so on.
Since we have a small dataset, a t-test is the more appropriate choice for testing our hypothesis.
T-statistic is a measure of the difference between the means of two groups relative to the variability within each group. It is calculated as the difference between the sample means divided by the standard error of the difference. It is also known as the t-value or t-score.
In this stage, we decide whether to reject the null hypothesis or fail to reject it. There are two ways to make this decision.
Comparing the test statistic and tabulated critical value we have,
Note: Critical values are predetermined threshold values used to make a decision in hypothesis testing. To determine critical values, we typically refer to a statistical distribution table, such as the normal-distribution or t-distribution table, depending on the test being used.
Method B: we can also come to a conclusion using the p-value:
Note: The p-value is the probability of obtaining a test statistic as extreme as, or more extreme than, the one observed in the sample, assuming the null hypothesis is true. To determine the p-value, we typically refer to a statistical distribution table, such as the normal-distribution or t-distribution table, depending on the test being used.
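The note above can be reproduced directly with SciPy (a small sketch; the numbers are taken from the paired t-test example in this article):

```python
from scipy import stats

t_stat = -9.0  # observed t-statistic (paired t-test, drug example)
df = 9         # degrees of freedom = n - 1

# Two-tailed p-value: probability of a statistic at least this extreme
# in either direction, assuming the null hypothesis is true
p_value = 2 * stats.t.sf(abs(t_stat), df)
```

Since this p-value is far below 0.05, the null hypothesis would be rejected.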
Finally, we can conclude our experiment using either method A (critical value) or method B (p-value).
To validate our hypothesis about a population parameter we use statistical functions . We use the z-score, p-value, and significance level (alpha) to gather evidence for our hypothesis on normally distributed data .
The Z-test is used when the population mean and standard deviation are known:
[Tex]z = \frac{\bar{x} - \mu}{\frac{\sigma}{\sqrt{n}}}[/Tex]
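This formula can be sketched as a small helper function (the function name is mine; the numbers plugged in come from the cholesterol example later in this article, where the sample mean is 202.04):

```python
import math

def z_statistic(sample_mean, pop_mean, pop_std, n):
    # Z-statistic for a one-sample test with known population std:
    # (sample mean - population mean) / (sigma / sqrt(n))
    return (sample_mean - pop_mean) / (pop_std / math.sqrt(n))

# Cholesterol example: sample mean 202.04, mu = 200, sigma = 5, n = 25
z = z_statistic(202.04, 200, 5, 25)
```

The result is approximately 2.04, matching the worked example below.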
The T-test is used when n < 30 and the population standard deviation is unknown. The t-statistic is calculated as:
[Tex]t = \frac{\bar{x} - \mu}{s / \sqrt{n}}[/Tex]
The Chi-Square test for independence is used for categorical (non-normally distributed) data:
[Tex]\chi^2 = \sum \frac{(O_{ij} - E_{ij})^2}{E_{ij}}[/Tex]
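As a brief illustration of this test, the sketch below runs `scipy.stats.chi2_contingency` on a made-up 2×2 contingency table (the counts are invented purely for illustration):

```python
from scipy import stats

# Hypothetical observed counts: rows are groups, columns are outcomes
observed = [[30, 10],
            [20, 40]]

# chi2_contingency computes expected counts E_ij from the row/column
# totals and evaluates the chi-square statistic against them
chi2, p, dof, expected = stats.chi2_contingency(observed)
```

For a 2×2 table the degrees of freedom equal 1; a small p-value suggests the row and column variables are not independent.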
Let’s examine hypothesis testing using two real-life situations.
Imagine a pharmaceutical company has developed a new drug that they believe can effectively lower blood pressure in patients with hypertension. Before bringing the drug to market, they need to conduct a study to assess its impact on blood pressure.
Let’s set the significance level at 0.05: we reject the null hypothesis if the evidence suggests less than a 5% chance of observing the results due to random variation alone.
Using a paired T-test, analyze the data to obtain a test statistic and a p-value.
The test statistic (here, the T-statistic) is calculated from the differences between blood pressure measurements before and after treatment.
t = m / (s / √n)
where m = −3.9 (mean of the paired differences), s ≈ 1.37 (sample standard deviation of the differences) and n = 10.
Substituting into the paired t-test formula, we calculate T-statistic = −9.
With the calculated t-statistic of −9 and degrees of freedom df = 9, you can find the p-value using statistical software or a t-distribution table:
thus, p-value = 8.538051223166285e-06 (≈ 8.54 × 10⁻⁶)
Step 5: Result
Conclusion: Since the p-value (8.538051223166285e-06) is less than the significance level (0.05), the researchers reject the null hypothesis. There is statistically significant evidence that the average blood pressure before and after treatment with the new drug is different.
Let’s create hypothesis testing with python, where we are testing whether a new drug affects blood pressure. For this example, we will use a paired T-test. We’ll use the scipy.stats library for the T-test.
SciPy is a scientific computing library for Python, widely used for statistics and numerical computation.
We will implement our first real-life problem in Python:
```python
import numpy as np
from scipy import stats

# Data
before_treatment = np.array([120, 122, 118, 130, 125, 128, 115, 121, 123, 119])
after_treatment = np.array([115, 120, 112, 128, 122, 125, 110, 117, 119, 114])

# Step 1: Null and Alternate Hypotheses
# Null Hypothesis: The new drug has no effect on blood pressure.
# Alternate Hypothesis: The new drug has an effect on blood pressure.
null_hypothesis = "The new drug has no effect on blood pressure."
alternate_hypothesis = "The new drug has an effect on blood pressure."

# Step 2: Significance Level
alpha = 0.05

# Step 3: Paired T-test
t_statistic, p_value = stats.ttest_rel(after_treatment, before_treatment)

# Step 4: Calculate T-statistic manually
m = np.mean(after_treatment - before_treatment)
s = np.std(after_treatment - before_treatment, ddof=1)  # ddof=1 for sample standard deviation
n = len(before_treatment)
t_statistic_manual = m / (s / np.sqrt(n))

# Step 5: Decision
if p_value <= alpha:
    decision = "Reject"
else:
    decision = "Fail to reject"

# Conclusion
if decision == "Reject":
    conclusion = ("There is statistically significant evidence that the average blood "
                  "pressure before and after treatment with the new drug is different.")
else:
    conclusion = ("There is insufficient evidence to claim a significant difference in "
                  "average blood pressure before and after treatment with the new drug.")

# Display results
print("T-statistic (from scipy):", t_statistic)
print("P-value (from scipy):", p_value)
print("T-statistic (calculated manually):", t_statistic_manual)
print(f"Decision: {decision} the null hypothesis at alpha={alpha}.")
print("Conclusion:", conclusion)
```
```
T-statistic (from scipy): -9.0
P-value (from scipy): 8.538051223166285e-06
T-statistic (calculated manually): -9.0
Decision: Reject the null hypothesis at alpha=0.05.
Conclusion: There is statistically significant evidence that the average blood pressure before and after treatment with the new drug is different.
```
In the above example, given the T-statistic of approximately -9 and an extremely small p-value, the results indicate a strong case to reject the null hypothesis at a significance level of 0.05.
Data: A sample of 25 individuals is taken, and their cholesterol levels are measured.
Cholesterol Levels (mg/dL): 205, 198, 210, 190, 215, 205, 200, 192, 198, 205, 198, 202, 208, 200, 205, 198, 205, 210, 192, 205, 198, 205, 210, 192, 205.
Population Mean (μ): 200 mg/dL
Population Standard Deviation (σ): 5 mg/dL (given for this problem)
As the direction of deviation is not given, we assume a two-tailed test. Based on the normal distribution (z-table), the critical values for a significance level of 0.05 (two-tailed) are approximately −1.96 and +1.96.
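The ±1.96 critical values quoted above can be obtained programmatically instead of from a printed z-table (a small sketch):

```python
from scipy import stats

alpha = 0.05

# Two-tailed critical values of the standard normal distribution:
# the quantiles cutting off alpha/2 probability in each tail
z_left = stats.norm.ppf(alpha / 2)        # lower critical value
z_right = stats.norm.ppf(1 - alpha / 2)   # upper critical value
```

Both values come out to approximately ±1.96 for alpha = 0.05.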
The test statistic is calculated using the z formula with the sample mean 202.04: Z = [Tex](202.04 - 200) / (5 \div \sqrt{25}) [/Tex], giving Z ≈ 2.04.
Step 4: Result
Since the absolute value of the test statistic (2.04) is greater than the critical value (1.96), we reject the null hypothesis and conclude that there is statistically significant evidence that the average cholesterol level in the population is different from 200 mg/dL.
```python
import math

import numpy as np
import scipy.stats as stats

# Given data
sample_data = np.array(
    [205, 198, 210, 190, 215, 205, 200, 192, 198, 205,
     198, 202, 208, 200, 205, 198, 205, 210, 192, 205,
     198, 205, 210, 192, 205])
population_std_dev = 5
population_mean = 200
sample_size = len(sample_data)

# Step 1: Define the Hypotheses
# Null Hypothesis (H0): The average cholesterol level in a population is 200 mg/dL.
# Alternate Hypothesis (H1): The average cholesterol level in a population is
# different from 200 mg/dL.

# Step 2: Define the Significance Level
alpha = 0.05  # Two-tailed test

# Critical values for a significance level of 0.05 (two-tailed)
critical_value_left = stats.norm.ppf(alpha / 2)
critical_value_right = -critical_value_left

# Step 3: Compute the test statistic
sample_mean = sample_data.mean()
z_score = (sample_mean - population_mean) / \
    (population_std_dev / math.sqrt(sample_size))

# Step 4: Result
# Check if the absolute value of the test statistic exceeds the critical value
if abs(z_score) > max(abs(critical_value_left), abs(critical_value_right)):
    print("Reject the null hypothesis.")
    print("There is statistically significant evidence that the average cholesterol "
          "level in the population is different from 200 mg/dL.")
else:
    print("Fail to reject the null hypothesis.")
    print("There is not enough evidence to conclude that the average cholesterol "
          "level in the population is different from 200 mg/dL.")
```
```
Reject the null hypothesis.
There is statistically significant evidence that the average cholesterol level in the population is different from 200 mg/dL.
```
Hypothesis testing stands as a cornerstone in statistical analysis, enabling data scientists to navigate uncertainties and draw credible inferences from sample data. By systematically defining null and alternative hypotheses, choosing significance levels, and leveraging statistical tests, researchers can assess the validity of their assumptions. The article also elucidates the critical distinction between Type I and Type II errors, providing a comprehensive understanding of the nuanced decision-making process inherent in hypothesis testing. The real-life example of testing a new drug’s effect on blood pressure using a paired T-test showcases the practical application of these principles, underscoring the importance of statistical rigor in data-driven decision-making.
1. What are the 3 types of hypothesis tests?
There are three types of hypothesis tests: right-tailed, left-tailed, and two-tailed. Right-tailed tests assess if a parameter is greater, left-tailed if lesser. Two-tailed tests check for non-directional differences, greater or lesser.
Null Hypothesis ( [Tex]H_o [/Tex] ): No effect or difference exists. Alternative Hypothesis ( [Tex]H_1 [/Tex] ): An effect or difference exists. Significance Level ( [Tex]\alpha [/Tex] ): Risk of rejecting null hypothesis when it’s true (Type I error). Test Statistic: Numerical value representing observed evidence against null hypothesis.
Statistical method to evaluate the performance and validity of machine learning models. Tests specific hypotheses about model behavior, like whether features influence predictions or if a model generalizes well to unseen data.
Pytest is a general-purpose testing framework for Python code, while Hypothesis is a property-based testing framework for Python that focuses on generating test cases from specified properties of the code.
Scientific Reports volume 14, Article number: 18610 (2024)
The pollution haven hypothesis (PHH) is defined as follows: a reduction in trade costs results in production of pollution-intensive goods shifting towards countries with easier environmental laws. Previous studies examined this hypothesis in the form of the environmental Kuznets hypothesis, testing the effect of foreign direct investment (FDI) on carbon emissions. However, this study investigates the PHH from a new perspective: I use Newton's gravity model to test the hypothesis. The basis of the PHH is the difference in the environmental standards of the two trading partners. One of the indicators used to measure the stringency of a country's environmental laws is carbon emission intensity: the stricter the country's laws are, the lower the index value will be. To test the hypothesis, empirical data from China and OECD countries are used, with China serving as the pollution haven for the countries of the Organisation for Economic Co-operation and Development. I found that the environmental laws of host and guest countries have different effects on FDI. In addition, transportation costs have a negative effect on the FDI flow. Finally, the research results confirm the hypothesis within the gravity model.
Introduction.
The discussion on the link between environment and trade started in the 1970s. The debate became serious in the 1990s when trade openness was expanded by organizations such as the North American Free Trade Agreement (NAFTA). Copeland and Taylor 1 first introduced the PHH in the context of North–South trade under NAFTA. It was the first article to link the severity of environmental rules and trade patterns with the level of pollution in a country 2 . They proved in their first and second propositions that the higher-income country chooses stronger environmental protection and specializes in relatively clean commodities 1 . These two propositions are, in effect, the pollution haven hypothesis. As stated by the PHH, the movement of unclean industries from advanced to developing countries happens by way of the trade of commodities and foreign direct investment (FDI) 2 . Two important factors form the basis of the pollution haven hypothesis: the first is foreign investment, and the second is environmental laws. The first factor, foreign direct investment (FDI), is an inseparable part of an open and effective international economic process and an important catalyst for development 3 . FDI flow is an increase in the book value of the net worth of investments in one country held by investors of another country, where the investments are under the managerial control of the investors 4 . Developing and emerging economies have increasingly come to see FDI as a factor of economic development and progress, income growth and employment 3 . The mutual connection between trade and FDI is an important feature of globalization. Empirical study shows that, until the mid-1980s, international trade generated direct investment; after that era, the cause-and-effect relationship reversed, and direct investment now has a huge impact on international trade 3 .
In today's economies, trade plays an increasingly important role in shaping the economic and social performance and prospects of countries around the world, especially developing countries 5 .
The second factor is environmental laws (EL). Generally, international trade has two consequences for the environment. First, trade can improve environmental quality by exporting clean technologies from developed countries to developing countries 6 , 7 . Second, international relations can increase pollution. More developed countries pay more attention to environmental standards: in these countries, as economic growth increases, people demand higher environmental quality. It is the opposite in less developed countries, which trade environmental quality for higher economic growth. According to the environmental Kuznets curve, developed countries are located after the turning point, while less developed countries are located before it. In other words, more developed countries have stricter environmental laws than less developed countries. As a definition: environmental rules are a series of policies or standards adopted by the government to protect the environment 8 . Environmental regulations have effectively restricted the damage enterprises do to the environment and have played an important role in protecting it 8 , 9 .
Paying more or less attention to environmental standards brings us to the pollution haven hypothesis. The hypothesis, which emerged in the 1990s, pivots on the relocation of polluting manufacturing from developed countries with strict environmental laws to developing countries with soft rules 10 . The pollution haven hypothesis holds that lax environmental rules in developing countries encourage investment in emission-intensive industries from developed countries, especially in the context of increasing numbers of countries committing to carbon neutrality before 2050 11 .
From the PHH standpoint, stringent environmental rules in developed countries lead to relocation of polluting industries from developed to developing countries and cause pollution to rise in developing countries 2 . According to the definition of the PHH, what causes the movement of FDI between two countries is the stringency of environmental laws. In other words, their environmental regulations determine the amount of FDI between the two countries. As mentioned earlier, the PHH comprises Copeland and Taylor's 1 first and second propositions. Therefore, the purpose of this article is to model the pollution haven hypothesis accurately. By carefully investigating this hypothesis and previous studies (Table 1 ), we found two important gaps: first, none of them included the two main factors of the hypothesis (FDI and EL) in their modeling simultaneously. Second, they assumed that FDI affects EL; that is, they treated the FDI variable as independent, whereas FDI should be considered the dependent variable. Therefore, this research will give future researchers a more accurate view of the PHH. They can examine the effects of various social, economic, environmental and political variables in this new model. The framework presented by this paper allows researchers to better understand the distinction between the environmental Kuznets hypothesis and the pollution haven hypothesis.
So far, many researchers have investigated the pollution haven hypothesis. In these studies, different indicators have been used for the first factor of the PHH. Some of them directly applied FDI, for example Usama and Tang 12 ; Solarin et al. 13 ; Benzerrouk et al. 7 ; Shijie et al. 14 ; Temurlenk and Lögün 15 ; Yilanci et al. 16 ; Ali Nagvi et al. 17 ; Chirilus and Costea 18 ; Campos‑Romero et al. 19 ; Liu et al. 20 ; Soto and Edeh 21 ; Ozcelik et al. 22 . Some researchers also used polluting goods and activities as proxies, among them Shen et al. 23 ; Sadik-Zada and Ferrari 24 ; Zhang and Wang 25 ; Bhat and Tantr 10 ; Moise 26 ; Hamaguchi 27 . In all these studies, the first PHH factor is considered an independent variable. These studies are briefly summarized in Table 1 .
Relative to previous research, the innovation of this study lies in its methodology, which is based on Newton's gravity model. I will present the pollution haven hypothesis in terms of Newton's gravity model, which is widely used in trade research. There are always two partners in trade: one country is the importer and the other the exporter. The pollution haven hypothesis is also a trade phenomenon: the importer becomes a pollution haven for the exporting country. In fact, the host country becomes its trading partner's haven for investment in polluting industries.
I selected the host country based on the pollution emission and foreign direct investment data in 2020 (Figs. 1 , 2 ).
( a ) Ten countries with the most CO 2 (kt) emissions in 2020, ( b ) ten countries with the most share of CO 2 emissions (percentage) in 2020.
( a ) Ten countries with the highest foreign direct investment (FDI) net inflows in 2020, ( b ) ten countries with the highest share of foreign direct investment (FDI) inflows in 2020.
Countries have different levels of CO 2 emissions based on their activities. As Fig. 1 a shows, China had the highest CO 2 emissions in 2020 (about 13 million kilotons). On the other hand, China's share of the world's total emission is more than 29% (Fig. 1 b). United States is next with a share of 12%. According to Fig. 1 b, China's share is two and a half times that of United States.
Foreign direct investment is an important indicator for identifying the pollution haven. We reviewed countries' FDI in 2020; the results are shown in Fig. 2 a,b. Figure 2 a indicates that China had the highest FDI inflows in 2020, with 2.5 thousand billion in FDI net inflows (billions of United States dollars). According to Fig. 2 b, China's FDI share in 2020 was 21%.
After introduction, the results section is discussed in detail. The research findings were divided into three categories: 1. the validity of the pollution haven hypothesis. 2. The effect of control variables on foreign direct investment. 3. The effect of main or independent variables on foreign direct investment. Generally, results showed that the severity of environmental laws has different effects on FDI flow. Since the direction of FDI flow from OECD countries is to China, increasing the severity of the environmental regulations of the guest countries will increase FDI. On the other hand, more environmental laws of China (the host) reduce the flow of FDI.
This section presents and discusses the main findings of the empirical analysis. In this research, I investigated the pollution haven hypothesis based on a gravity model approach. The variables collected included CO 2 emissions (World Bank), GDP (World Bank), trade costs (World Trade Organization), FDI inflows from OECD to China (Organisation for Economic Co-operation and Development data), urbanization, represented by the number of individuals living in cities (World Bank), trade openness, calculated as (X + M)/GDP, where X = exports and M = imports (World Bank), and share of manufacturing (World Bank). Table 2 indicates the measurement unit of the variables and their sources.
Because the data have two dimensions (cross-section and period), I used the F-Limer test to determine whether the data form a panel. The null hypothesis based on the pooled model was rejected and the panel-data model was accepted; therefore, I used panel regression. Then, the Hausman test was used to determine the type of effects (random or fixed). The result showed random effects for both cross-section and period. Next, I investigated the stationarity of the variables to prevent spurious regression. The Levin, Lin and Chu test examined four variables and showed that they are stationary at level (intercept and trend).
Finally, I estimated the model based on panel data. The regression results are shown in Table 3 . FDI ijt is the dependent variable. ER it , ER jt , TC 2 ijt , \({\text{lnUR}}_{\text{jt}}\) , \({\text{lnTO}}_{\text{jt}}\) and \({\text{lnShM}}_{\text{jt}}\) are independent variables. Table 3 indicates that all the coefficients are significant at the 5% level and R 2 was 0.79: the independent variables explain 79% of the changes in the dependent variable.
The coefficient for \({lnER}_{it}\) is − 0.54. The negative coefficient sign shows that with increasing \({ER}_{it}\) , FDI ijt will decrease. In other words, when guest countries' environmental regulations become easier, FDI flows from the guest countries (OECD) to the host country (China) will decrease. The coefficient for \({lnER}_{jt}\) is 0.90. The positive sign indicates that if \({ER}_{jt}\) increases, FDI ijt also increases. The results concerning the effect of environmental laws on trade are similar to previous studies. For example, in the Shen et al. 23 study, the coefficient sign for environmental regulation was positive. Sadik-Zada and Ferrari 24 indicated that environmental policy stringency has a positive effect on carbon trade. Bhat and Tantr 10 concluded that environmental policy has a positive effect on pollution-intensive exports. The coefficient for transportation costs was − 0.11: with an increase in transportation costs, the FDI flow from i (guest country) to j (host country) decreases. The \({\text{lnTC}}_{\text{ijt}}^{2}\) coefficient sign in the present study is consistent with the following studies: Nuroğlu and Kunst 28 ; Wang et al. 29 ; Golovko and Sahin 30 ; Wani and Yasmin 31 . Among the control variables, the urbanization coefficient was not significant. The coefficients for TO and ShM were positive and significant. The positive TO coefficient means that as TO increases, FDI also increases. Benzerrouk et al. 7 indicated that an increase in trade and FDI increases the developed countries’ polluting projects destined for developing countries. Moise 26 showed that trade openness statistically and significantly increases CO2 emissions. In addition, the coefficient value for ShM was estimated to be positive: if the share of manufacturing in GNP increases, foreign direct investment will increase. Shijie et al. 14 concluded that the positive effect of FDI on the environment of dominant industrial agglomeration first increases and then decreases. Sawhney and Rastogi 32 indicated that the increase in trade liberalization, the growth of American industries and FDI have increased the emission of pollution in India.
Our model is in terms of logarithms, so the coefficients express elasticities. The \({LnER}_{it}\) and \({lnER}_{jt}\) coefficients were − 0.54 and 0.90, respectively. That is, if the environmental laws of the guest country are tightened by 1%, the FDI flow from the guest country to the host country will decrease by 0.54%. In addition, if the environmental laws of the host country are relaxed by 1%, the FDI flow from the guest country to the host country will increase by 0.90%. The \({lnTC}_{ijt}^{2}\) coefficient was − 0.11, which indicates that with a 1% increase in transportation costs, the FDI flow from the guest country to the host country decreases by 0.11%. The coefficients for the control variables \({lnTO}_{jt}\) and \({lnShM}_{jt}\) were 0.77 and 0.51, respectively. These coefficients state that if trade openness and the share of manufacturing increase by 1%, foreign direct investment will increase by 0.77% and 0.51%. Thus, increasing trade openness reveals that lax environmental enforcement in developing countries attracts investment in emission-intensive industries from developed countries.
This paper fills a research gap by assessing the pollution haven hypothesis based on its initial assumptions. In previous studies, this hypothesis was tested in the form of the Kuznets curve, whereas the concept of a pollution haven rests on the foreign investment flow between the haven seeker and the haven giver, whose main driver is the difference in environmental laws. When we talk about flow, the gravity model is the best option, for example: power flow, trade flow, labor flow, FDI flow (see Fig. 4 in the method section). Trade has permitted countries with higher emission intensities to export goods or investment to countries with lower emission intensities, which may result in an increase in worldwide carbon emissions 11 . In this research, we examined foreign direct investment between OECD countries and China. In fact, we investigated the effect of environmental laws on FDI in the form of the pollution haven hypothesis. The indicator chosen for environmental regulations was emission intensity. Figure 3 shows the carbon emission intensity of guest (OECD) and host (China) countries in 2016–2020 (kt/10 billion $). Figure 3 (1–9) is for OECD countries and Fig. 3 (10) is for China. As Fig. 3 indicates, emission intensity decreased in all selected countries over 2016–2020.
Carbon emission intensity of guest (OECD) and host (China) countries in 2016–2020. (1–9) is for OECD countries. (10) is also for China. Carbon emission intensity in kilotons per 10 billion dollars.
The emission intensity is in the range 782–5439 for OECD countries, while it is in the range 5510–6308 kilotons per 10 billion dollars for China. The maximum emission intensity of the OECD countries is lower than the minimum emission intensity of China, meaning that the environmental rules of the guest countries are stricter than China's. The pollution haven hypothesis therefore suggests that weak environmental enforcement in developing countries absorbs investment in emission-intensive industries from developed countries 11 . The purpose of this paper is to model the pollution haven hypothesis in the form of a gravity model. For this purpose, the effect of the environmental laws of the host and guest countries on FDI is investigated. The results showed that the stringency of environmental laws has different effects on FDI flow. Since the direction of FDI flow is from the OECD countries to China, increasing the stringency of the environmental regulations of the guest countries will increase FDI. On the other hand, stricter environmental laws in China (the host) reduce the flow of FDI. After presenting the results and comparing them with previous studies, the results should be tested for robustness. The main empirical findings are robust to two different multicollinearity tests: (i) cross-correlation across variables and (ii) the variance inflation factor (VIF) of each variable 33 . VIF is a measure of the amount of multicollinearity in a regression model. Multicollinearity exists when there is correlation between multiple independent variables.
The variance inflation factor is calculated as follows:
\({VIF}_{i} = \frac{1}{1 - {R}_{i}^{2}}\)
where \({R}_{i}^{2}\) is the variance explained by the regression model (i is the counter of the explanatory variable). In other words, \({R}_{i}^{2}\) comes from the regression of the predictor of interest on the remaining predictors. The VIF values cannot be less than 1.0, since 1.0 represents the ideal situation of no correlation with other predictors; likewise, VIF cannot be negative. A VIF of 1.0 can only occur when \({R}_{i}^{2}\) is equal to 0, which implies that the given predictor has zero linear relationship with the other predictors in the model. Tolerance is simply the reciprocal of VIF and is thus computed as
\({Tolerance}_{i} = 1 - {R}_{i}^{2} = \frac{1}{{VIF}_{i}}\)
whereas large values of VIF are undesirable, large tolerances are preferable to smaller ones; the maximum value of tolerance is 1.0 34 . As shown in Table 4 , the mean variance inflation factor (VIF) value in our model is 2.34. The maximum VIF among the explanatory variables is 4.62, which lies within the acceptable standard.
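The VIF computation described above can be sketched in a few lines of NumPy. This is an illustrative reimplementation on synthetic data (all names and numbers are mine), not the paper's actual estimation:

```python
import numpy as np

def vif(X):
    """VIF for each column of predictor matrix X, computed as
    1 / (1 - R_i^2) from regressing column i on the remaining columns."""
    X = np.asarray(X, dtype=float)
    n, k = X.shape
    out = []
    for i in range(k):
        y = X[:, i]
        others = np.delete(X, i, axis=1)
        A = np.column_stack([np.ones(n), others])      # add intercept
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)   # OLS fit
        resid = y - A @ beta
        r2 = 1.0 - float(resid @ resid) / float(((y - y.mean()) ** 2).sum())
        out.append(1.0 / (1.0 - r2))
    return out

# Synthetic example: x2 is correlated with x1; x3 is independent noise
rng = np.random.default_rng(42)
x1 = rng.normal(size=200)
x2 = x1 + 0.5 * rng.normal(size=200)   # correlated with x1 -> inflated VIF
x3 = rng.normal(size=200)              # roughly independent -> VIF near 1
vifs = vif(np.column_stack([x1, x2, x3]))
```

In this synthetic data, the correlated pair gets VIF values well above 1, while the independent predictor stays near the ideal value of 1.0.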
In this study, the pollution haven hypothesis was investigated from a new perspective. In fact, this hypothesis was formulated based on its theoretical foundations. Copeland and Taylor 1 first introduced the PHH; theirs was the first article to link the severity of environmental rules and trade patterns with the level of pollution in a country. According to Copeland and Taylor 1 , two factors, foreign direct investment and environmental laws, constitute this hypothesis. Their study states that countries' environmental laws determine the attraction of foreign direct investment. Therefore, we considered FDI between two countries as a function of their environmental laws. To achieve the research objectives, the trade gravity model was used. The research innovation is, first, that none of the previous authors included the two main factors of the hypothesis (FDI and EL) in their modeling simultaneously. Second, they assumed that FDI affects EL; that is, they treated the FDI variable as independent, whereas FDI should be considered the dependent variable. Therefore, this research will give future researchers a more accurate view of the PHH. The results showed that if the environmental laws of the guest country are tightened by 1%, the FDI flow from the guest country to the host country will decrease by 0.54%. In addition, if the environmental laws of the host country are relaxed by 1%, the FDI flow from the guest country to the host country will increase by 0.90%. The \({lnTC}_{ijt}^{2}\) coefficient was − 0.11, which indicates that with a 1% increase in transportation costs, the FDI flow from the guest country to the host country decreases by 0.11%. The coefficients for the control variables \({lnTO}_{jt}\) and \({lnShM}_{jt}\) were 0.77 and 0.51, respectively. These coefficients state that if trade openness and the share of manufacturing increase by 1%, foreign direct investment will increase by 0.77% and 0.51%.
After presenting the results and comparing them with previous studies, the results should be tested for robustness. The mean VIF value in our model is 2.34. The maximum VIF among the explanatory variables is 4.62, which lies within the acceptable standard.
Based on the results of the research, some suggestions are provided. Rigidity in environmental law could lead to a reduction in FDI; on the other hand, FDI is a key factor for economic growth and development. Hence, FDI can be redirected from polluting sectors to clean sectors, such as service sectors, labor-intensive industries or renewable energy, and green technology investment should be encouraged. The manufacturing sector is the largest contributor to global emissions when direct and indirect emissions are included. Key transformations are needed to move the industry sector towards environmentally friendly goals: electrifying industry, transforming production processes, using new fuels, accelerating material efficiency, scaling up energy efficiency everywhere, and promoting circular material flows. Trade openness, like industry growth, increases FDI. Furthermore, to decrease the impact of trade openness and economic growth on environmental sustainability, it is important to expand environmentally friendly production systems that motivate green technology knowledge across all economic sectors. Receiving countries should also improve their absorption capacity. Finally, there are suggestions for future researchers. The regression model chosen here is linear; future authors can use other regression methods, such as spatial regression, to obtain more accurate results, since one of the variables of the gravity model is the distance between countries. The dependent variable, FDI, is also affected by key qualitative factors; future studies can investigate the effect of these qualitative variables, such as government structure and investment management factors.
This section contains information about the empirical model, which is borrowed from the common literature on the gravity model. In the economic sciences, the gravity model forecasts bilateral trade flows based on the sizes of the two economies (usually measured by GDP) and the distance between the two locations 33 , as in Eq. ( 3 ):
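Eq. (3) itself did not survive extraction. A standard form consistent with the description that follows (trade positively proportional to region sizes and negatively proportional to distance) is, with \(A\) a constant of proportionality:

```latex
G_{ij} \;=\; A\,\frac{s_i\, s_j}{d_{ij}} \qquad (3)
```

The exact functional form used in the original article may differ.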
In the above equation, the gravitational force (G, or the amount of trade between regions) is positively proportional to the sizes of the regions ( si and sj ) and negatively proportional to the distance between region i and region j ( di,j ). In this research, the components of the gravity model differ: the innovation of this study compared with previous research lies in the different components of the gravity model. Figure 4 contrasts this model with previous gravity models.
Newton's gravity model, Trade’s gravity model, FDI’s gravity model.
Two important factors in the pollution haven hypothesis are foreign direct investment between two countries and the severity of the countries' environmental laws. Differences in attention to environmental quality, coupled with trade liberalization, may lead to the creation of pollution havens, with polluting activity relocating to areas with weak regulation 35 , 36 . If we treat FDI as the gravitational force, then what attracts foreign direct investment between two countries is the strictness with which they implement environmental regulations. Eq. ( 3 ) is therefore modified as follows:
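The modified equation is also missing from the extracted text; based on the variable list given below, a plausible sketch (the functional form is assumed, not taken from the original) is:

```latex
FDI_{ijt} \;=\; f\!\left(ER_{jt},\; CO2_{jt},\; UR_{jt},\; TO_{jt},\; ShM_{jt},\; TC_{ijt},\; GDP_{jt}\right)
```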
where i is country i (OECD), j is country j (China) and t is time (2016–2020). FDI represents the foreign direct investment flow between countries. ER is the severity of environmental laws. CO2 is pollution emissions. UR, TO and ShM are urbanization, trade openness and the share of manufacturing. TC denotes the trade costs between countries and GDP is gross domestic product. The data are converted into log terms, following traditional methods in econometrics, and the equation is established as follows (Eq. 6 ):
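Eq. (6) is likewise missing from the extracted text; a log-linear specification consistent with the surrounding description (the coefficient labels and error term are assumptions) would be:

```latex
\ln FDI_{ijt} = \beta_0 + \beta_1 \ln ER_{jt} + \beta_2 \ln CO2_{jt} + \beta_3 \ln UR_{jt}
 + \beta_4 \ln TO_{jt} + \beta_5 \ln ShM_{jt} + \beta_6 \ln TC_{ijt} + \beta_7 \ln GDP_{jt} + \varepsilon_{ijt} \qquad (6)
```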
Many studies have used FDI in their models, for example Usama and Tang 12 ; Solarin et al. 13 ; Benzerrouk et al. 7 ; Temurlenk and Lögün 15 ; Yilanci et al. 16 ; Ali Nagvi et al. 17 . These studies, however, treated FDI as an independent variable, whereas in this study it is the dependent variable. In Eq. ( 6 ), \(ER\) is the pollution emission intensity, which is calculated in Eq. ( 5 ). Previous studies have used several indicators to measure environmental regulations. For example, Guo et al. 37 take the pollutant discharge fee and total investment in pollution control to represent environmental rules. Sun et al. 38 applied the number of polluting enterprises. Nie et al. 39 used ISO14001 environmental management system certification. Xie et al. 40 distinguished pollution-control investment that forms fixed assets from investment that fails to form fixed assets as measures of environmental regulation. Sadik-Zada and Ferrari 24 used the environmental policy stringency index as a proxy for the PHH.
Following Cole and Elliott 41 , I measure these rules with a pollution intensity index: the proportion of pollution emissions in industrial output value can be used as a proxy for environmental regulations 8 , 41 . In some studies, for example Shen et al. 23 and Bhat and Tantr 10 , emission intensity has likewise been used as the degree of severity of environmental regulations. TC denotes transportation costs, which serve as a proxy for the distance between countries; including the distance variable itself in the model would cause collinearity. This study includes annual data for 35 OECD countries (Fig. 5 ) and China for 2016–2020. The reasons for choosing these countries and this period are as follows: first, China has been the largest emitter of greenhouse gases in recent years; second, China has been the largest recipient of foreign investment in recent years; third, the OECD countries were selected because exact information about their FDI inflows to China was available.
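A log-linear gravity specification of this kind can be estimated by ordinary least squares. The sketch below is a minimal illustration on synthetic data, using only a subset of the regressors with made-up coefficients; it is not the paper's estimation or data.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 175  # e.g. 35 countries x 5 years, mirroring the panel dimensions above

# Hypothetical log-transformed regressors (names mirror the text).
ln_ER = rng.normal(size=n)
ln_TO = rng.normal(size=n)
ln_GDP = rng.normal(size=n)

# Synthetic "true" relationship, for illustration only.
ln_FDI = 1.0 + 0.5 * ln_ER + 0.3 * ln_TO + 0.8 * ln_GDP \
         + rng.normal(scale=0.1, size=n)

# Pooled OLS: stack an intercept column and solve by least squares.
X = np.column_stack([np.ones(n), ln_ER, ln_TO, ln_GDP])
beta, *_ = np.linalg.lstsq(X, ln_FDI, rcond=None)
print(beta.round(2))
```

Because the equation is log-linear, each estimated slope is interpretable as an elasticity (the percentage change in FDI for a one percent change in the regressor), which is the usual reason gravity models are estimated in logs.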
Selected countries from among the OECD countries that make foreign direct investment in China.
The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.
Copeland, B. R. & Taylor, M. S. North-South trade and the environment. Q. J. Econ. 109 , 755–787 (1994).
Gill, F. L., Viswanathan, K. K. & Abdul Karim, M. Z. The critical review of the pollution haven hypothesis. Int. J. Energy Econ. Policy 8 (1), 167–174 (2018).
OECD (Organization for Economic Co-operation and Development). Foreign direct investment for development: Maximizing benefits, minimizing costs. Overview. https://www.oecd.org/investment/investmentfordevelopment/1959815.pdf (2002).
IMF (International Monetary Fund). Foreign direct investment in the world economy. file:///C:/Users/Hpe/Downloads/9781557754998-ch07.pdf (2000).
United Nations. Developing countries in international trade and development index. United Nations conference on trade and development. https://unctad.org/system/files/official-document/ditctab20051_en.pdf (2005).
Wang, Q. & Zhang, F. Free trade and renewable energy: A cross-income levels empirical investigation using two trade openness measures. Renew. Energy 168 , 1027–1039 (2021).
Benzerrouk, Z., Abid, M. & Sekrafi, H. Pollution haven or halo effect? A comparative analysis of developing and developed countries. Energy Rep. 7 , 4862–4871. https://doi.org/10.1016/j.egyr.2021.07.076 (2021).
Meng, F., Xu, Y. & Zhao, G. Environmental regulations, green innovation and intelligent upgrading of manufacturing enterprises: Evidence from China. Sci. Rep. 10 , 14485. https://doi.org/10.1038/s41598-020-71423-x (2020).
Jahanshahi, A. A. & Brem, A. Antecedents of corporate environmental commitments: The role of customers. Int. J. Environ. Res. Public Health 15 , 1191 (2018).
Bhat, V. & Tantr, M. L. Pollution haven hypothesis and the bilateral trade between India and China. J. Curr. Chin. Affairs 1 , 1–26. https://doi.org/10.1177/18681026231188450 (2023).
Meng, J. et al. The narrowing gap in developed and developing country emission intensities reduces global trade’s carbon leakage. Nat. Commun. 14 , 3775. https://doi.org/10.1038/s41467-023-39449-7 (2023).
Usama, A. M. & Tang, C. F. Investigating the validity of pollution haven hypothesis in the gulf cooperation council (GCC) countries. Energy Policy 60 , 813–819. https://doi.org/10.1016/j.enpol.2013.05.055 (2013).
Solarin, S. A., Al-Mulali, U., Musah, I. & Ozturk, I. Investigating the pollution haven hypothesis in Ghana: An empirical investigation. Energy 124 , 706–719. https://doi.org/10.1016/j.energy.2017.02.089 (2017).
Shijie, L., Hou, D., Jin, W. & Shahid, R. Impact of industrial agglomeration on environmental pollution from perspective of foreign direct investment—a panel threshold analysis for Chinese provinces. Environ. Sci. Pollut. Res. 28 (41), 58592–58605. https://doi.org/10.1007/s11356-021-14823-4 (2021).
Temurlenk, M. S. & Lögün, A. An analysis of the pollution haven hypothesis in the context of Turkey: A nonlinear approach. Econ. Bus. Rev. 8 (22), 5–23. https://doi.org/10.18559/ebr.2022.1.2 (2022).
Yilanci, V., Cutcu, I., Cayir, B. & Saglam, M. S. Pollution haven or pollution halo in the fishing footprint: Evidence from Indonesia. Mar. Pollut. Bull. 188 , 114626. https://doi.org/10.1016/j.marpolbul.2023.114626 (2023).
Ali Nagvi, S. A. et al. Environmental sustainability and biomass energy consumption through the lens of pollution Haven hypothesis and renewable energy-environmental Kuznets curve. Renew. Energy 212 , 621–631. https://doi.org/10.1016/j.renene.2023.04.127 (2023).
Chirilus, A. & Costea, A. The effect of FDI on environmental degradation in Romania: Testing the pollution haven hypothesis. Sustainability 15 , 10733. https://doi.org/10.3390/su151310733 (2023).
Campos-Romero, H., Mourao, P. R. & Rodil-Marzabal, O. Is there a pollution haven in European Union global value chain participation? Environ. Dev. Sustain. https://doi.org/10.1007/s10668-023-03563-9 (2023).
Liu, P., Rahman, Z. U., Joźwik, B. & Doğan, M. Determining the environmental effect of Chinese FDI on the Belt and Road countries CO2 emissions: an EKC-based assessment in the context of pollution haven and halo hypotheses. Environ. Sci. Europe 36 (48), 1–12. https://doi.org/10.1186/s12302-024-00866-0 (2024).
Soto, G. H. & Edeh, J. Assessing the foreign direct investment-load capacity factor relationship in Spain: Can FDI contribute to environmental quality?. Environ. Dev. Sustain. https://doi.org/10.1007/s10668-024-04680-9 (2024).
Ozcelik, O. et al. Testing the validity of pollution haven and pollution halo hypotheses in BRICMT countries by Fourier Bootstrap AARDL method and Fourier Bootstrap Toda-Yamamoto causality approach. Air Qual. Atmos. Health. https://doi.org/10.1007/s11869-024-01522-5 (2024).
Shen, J., Wang, S., Liu, W. & Chu, J. Does migration of pollution-intensive industries impact environmental efficiency? Evidence supporting “Pollution Haven Hypothesis”. J. Environ. Manag. 242 , 142–152. https://doi.org/10.1016/j.jenvman.2019.04.072 (2019).
Sadik-Zada, E. R. & Ferrari, M. Environmental policy stringency, technical progress and pollution haven hypothesis. Sustainability 12 , 3880. https://doi.org/10.3390/su12093880 (2020).
Zhang, K. & Wang, X. Pollution haven hypothesis of global CO2, SO2, NOx, evidence from 43 economies and 56 sectors. Int. J. Environ. Res. Public Health 18 , 6552. https://doi.org/10.3390/ijerph18126552 (2021).
Moise, M. L. Examining the agriculture-induced environment curve hypothesis and pollution haven hypothesis in Rwanda: The role of renewable energy. Moise Carbon Res. 2 (50), 1–14. https://doi.org/10.1007/s44246-023-00076-y (2023).
Hamaguchi, Y. A water pollution haven hypothesis in a dynamic agglomeration model for fisheries resource management. Environ. Dev. Sustain. https://doi.org/10.1007/s10668-024-04788-y (2024).
Nuroğlu, E. & Kunst, R. M. Competing specifications of the gravity equation: A three-way model, bilateral interaction effects, or a dynamic gravity model with time-varying country effects?. Empirical Econ. 46 (2), 733–741. https://doi.org/10.1007/s00181-013-0696-3 (2013).
Wang, Z. et al. Pollution haven hypothesis of domestic trade in China: A perspective of SO2 emissions. Sci. Total Environ. 663 , 198–205. https://doi.org/10.1016/j.scitotenv.2019.01.287 (2019).
Golovko, A. & Sahin, H. Analysis of international trade integration of Eurasian countries: Gravity model approach. Euras. Econ. Rev. 11 (3), 519–548. https://doi.org/10.1007/s40822-021-00168-3 (2021).
Wani, S. H. & Yasmin, E. India’s trade with South and Central Asia: An application of institution-based augmented gravity model. Future Bus J. 9 , 77. https://doi.org/10.1186/s43093-023-00257-6 (2023).
Sawhney, A. & Rastogi, R. Is India specialising in polluting industries? Evidence from US-India bilateral trade. World Econ. 38 (2), 360–378 (2015).
Cantore, N. & Cheng, C. F. C. International trade of environmental goods in gravity models. J. Environ. Manag. 223 , 1047–1060 (2018).
Denis, D. J. Multiple linear regression. In Applied Univariate, Bivariate, and Multivariate Statistics 286–315. https://doi.org/10.1002/9781119583004.ch (2021).
Copeland, B. R. & Taylor, M. S. Trade, growth, and the environment. J. Econ. Lit. 42 , 7–71 (2004).
Taylor, M. S. Unbundling the pollution haven hypothesis. Adv. Econ. Anal. Policy 4 (2), 1–26 (Reprinted in The Economics of Pollution Havens, Don Fullerton (Ed.), Edward Elgar Publishing, 2006) (2004).
Guo, W., Dai, H. & Liu, X. Impact of different types of environmental regulation on employment scale: An analysis based on perspective of provincial heterogeneity. Environ. Sci. Pollut. Res. https://doi.org/10.1007/s11356-020-10428-5 (2020).
Sun, W., Yang, Q., Ni, Q. & Kim, Y. The impact of environmental regulation on employment: An empirical study of China’s Two Control Zone policy. Environ. Sci. Pollut. Res. https://doi.org/10.1007/s11356-019-05840-5 (2019).
Nie, G.-Q., Zhu, Y.-F. & Wu, W.-P. Impact of voluntary environmental regulation on green technological innovation: Evidence from Chinese manufacturing enterprises. Front. Energy Res. 10 , 889037. https://doi.org/10.3389/fenrg.2022.889037 (2022).
Xie, R., Yuan, Y. & Huang, J. Different types of environmental regulations and heterogeneous influence on “green” productivity: Evidence from China. Ecol. Econ. 132 , 104–112. https://doi.org/10.1016/j.ecolecon.2016.10.019 (2017).
Cole, M. A. & Elliott, R. J. Do environmental regulations influence trade patterns? Testing old and new trade theories. World Econ. 26 , 1163–1186 (2003).
Authors and affiliations.
Department of Agricultural Economics, Tarbiat Modares University, Tehran, Iran
Somayeh Avazdahandeh
Somayeh Avazdahandeh: conceived and designed the analysis; collected the data; contributed data or analysis tools; performed the analysis; wrote the paper.
Correspondence to Somayeh Avazdahandeh .
Competing interests.
The author declares no competing interests.
Publisher's note.
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/ .
Cite this article.
Avazdahandeh, S. A new look at the CO2 haven hypothesis using gravity model: European Union and China. Sci. Rep. 14 , 18610 (2024). https://doi.org/10.1038/s41598-024-69611-0
Received: 30 January 2024
Accepted: 07 August 2024
Published: 10 August 2024
DOI: https://doi.org/10.1038/s41598-024-69611-0
From diagnosis to dialogue – reconsidering the DSM as a conversation piece in mental health care: a hypothesis and theory.
The Diagnostic and Statistical Manual of Mental Disorders, abbreviated as the DSM, is one of mental health care’s most commonly used classification systems. While the DSM has been successful in establishing a shared language for researching and communicating about mental distress, it has its limitations as an empirical compass. In the transformation of mental health care towards a system that is centered around shared decision-making, person-centered care, and personal recovery, the DSM is problematic as it promotes the disengagement of people with mental distress and is primarily a tool developed for professionals to communicate about patients instead of with patients. However, the mental health care system is set up in such a way that we cannot do without the DSM for the time being. In this paper, we aimed to describe the position and role the DSM may have in a mental health care system that is evolving from a medical paradigm to a more self-contained profession in which there is increased accommodation of other perspectives. First, our analysis highlights the DSM’s potential as a boundary object in clinical practice, that could support a shared language between patients and professionals. Using the DSM as a conversation piece, a language accommodating diverse perspectives can be co-created. Second, we delve into why people with lived experience should be involved in co-designing spectra of distress. We propose an iterative design and test approach for designing DSM spectra of distress in co-creation with people with lived experience to prevent the development of ‘average solutions’ for ‘ordinary people’. We conclude that transforming mental health care by reconsidering the DSM as a boundary object and conversation piece between activity systems could be a step in the right direction, shifting the power balance towards shared ownership in a participation era that fosters dialogue instead of diagnosis.
The Diagnostic and Statistical Manual of Mental Disorders (DSM) has great authority in practice. The manual, released by the American Psychiatric Association (APA), provides a common language and a classification system for clinicians to communicate about people’s experiences of mental distress and for researchers to study social phenomena that include mental distress and its subsequent treatments. Before the DSM was developed, a plethora of mental health-related documents circulated in the United States ( 1 ). In response to the confusion that arose from this diversity of documents, the APA Committee on Nomenclature and Statistics standardized these into one manual, the DSM-I ( 2 ). In this first edition of the manual, released in 1952, mental distress was understood as a reaction to stress caused by psychological and interpersonal factors in the person’s life ( 3 ). Although the DSM-I had limited impact on practice ( 4 ), it did set the stage for increasingly standardized categorization of mental disorders ( 5 ).
The DSM-II was released in 1968. In this second iteration, mental disorders were understood as the patient’s attempts to control overwhelming anxiety with unconscious, intrapsychic conflicts ( 3 ). In this edition, the developers attempted to describe the symptoms of disorders and define their etiologies. They had chosen to base them predominantly on psychodynamic psychiatry but also included the biological focus of Kraepelin’s system of classification ( 5 , 6 ). During the development of the DSM-III, the task force added the goal to improve the reliability — the likelihood that different professionals arrive at the same diagnosis — of psychiatric diagnosis, which now became an important feature of the design process. The developers abandoned the psychodynamic view and shifted the focus to atheoretical descriptions, aiming to specify objective criteria for diagnosing mental disorders ( 3 ). Although it was explicitly stated in DSM-III that there was no underlying assumption that the categories were validated entities ( 7 ), the categorical approach still assumed each pattern of symptoms in a category reflected an underlying pathology. The definition of ‘mental illness’ was thereby altered from what one did or was (“you react anxious/you are anxious”) to something one had (“you have anxiety”). This resulted in descriptive, criteria-based classifications that reflected a perceived need for standardization of psychiatric diagnoses ( 5 , 6 ). The DSM-III was released in 1980 and had a big impact on practice ( 6 ) as it inaugurated an attempt to “re-medicalize” American psychiatry ( 5 ).
In hindsight, it is not surprising that after the release of the DSM-III, the funding for psychopharmacological research skyrocketed ( 8 ). At the same time, the debate on the relationship between etiology and description in psychiatric diagnosis continued ( 9 ). As sociologist Andrew Scull ( 10 ) showed, the election of President Reagan prompted a shift towards a focus on biology. His successor, President Bush, claimed that the 1990s were ‘the decade of the brain,’ which fueled a sharp increase in funding for research on genetics and neuroscience ( 10 ). Despite the public push for biological research, the DSM-IV aimed to arrive at a purely atheoretical description of psychiatric diagnostic criteria and was released in 1994 ( 11 ). The task force conducted multi-center field trials to relate diagnoses to clinical practice to improve reliability, which remained a goal of the design process ( 12 ). While the DSM-IV aimed to be atheoretical, researchers argued that the underlying ontologies were easily deducible from their content: psychological and social causality were eliminated and replaced implicitly with biological causality ( 13 ). In the DSM-5, validity — whether a coherent syndrome is being measured and whether it is what it is assumed to be — took center stage ( 10 ). The definition of mental disorder in the DSM-5 was thereby conceptualized as:
“… a syndrome characterized by clinically significant disturbance in an individual’s cognition, emotion regulation, or behavior that reflects a dysfunction in the psychological, biological, or developmental processes underlying mental functioning.” ( 14 ).
With the release of the DSM-5, the debate surrounding the conceptualization of mental distress started all over again, but this can be best seen as re-energizing longstanding debates around the utility and validity of APA nosology ( 15 ). Three important design goals from the DSM-III until current editions can be observed: providing an international language on mental distress, developing a reliable classification system, and creating a valid classification system.
The extent to which these three design goals were attained is only partial. The development of an international language has been accomplished, as the DSM (as well as the International Classification of Diseases) is now widely employed across most Western countries. Although merely based on consensus, the DSM enables — to an extent — professionals and researchers to quantify the prevalence of certain behaviors and find one or more classifications that best suit these observed behaviors. To this date, the expectation that diagnostic criteria would be empirically validated through research has not yet been fulfilled ( 10 , 16 , 17 ). As stated by the authors of the fourth edition ( 11 ), the disorders listed in the DSM are “valuable heuristic constructs” that serve a purpose in research and practice. However, it was already emphasized in the DSM-IV guidebook that they do not precisely depict nature as it is, being characterized as not “well-defined entities” ( 18 ). Furthermore, while the fifth edition refers to “syndromes,” it is again described that “there is no assumption that each category of mental disorder is a completely discrete entity with absolute boundaries dividing it from other mental disorders or from no mental disorder” ( 14 ). Consequently, there are no laboratory tests or biological markers to set the boundary between ‘normal’ and ‘pathological’; the presumed pathologies underlying the DSM classifications can therefore be neither confirmed nor rejected, leaving the validity goal of the design unattained. Moreover, the reliability of the current major DSM (i.e., DSM-5) still raises concerns ( 19 ).
By focusing conceptually on mental distress as an individual experience, the DSM task forces have neglected the role of social context, potentially restricting a comprehensive clinical understanding of mental distress ( 20 ). There is mounting evidence and increased attention, however, that the social environment, including its determinants and factors, is crucial for the onset, course, and outcome of mental distress ( 21 – 27 ). Moreover, exposure to factors such as early life adversity, poverty, unemployment, trauma, and minority group position is strongly associated with the onset of mental distress ( 28 , 29 ). It is also established that the range of ontological perspectives — what mental distress is and how it exists — is far broader than what is typically covered in prevailing scientific and educational discussions ( 30 ). These diverse perspectives are also evident in the epistemic pluralism among theoretical models on mental health problems ( 31 ).
In the context of contemporary transformations in mental health care, the role of the DSM as an empirical instrument becomes even more problematic. In recent years, significant shifts have been witnessed in mental health care services, with a growing focus on promoting mental well-being, preventive measures, and person-centered and rights-based approaches ( 32 ). In contrast to the 1950s definition of health in which health was seen as the absence of disease, health today is defined as “the ability to adapt and to self-manage” ( 33 ), also known as ‘positive health.’ Furthermore, the recovery movement ( 34 ), person-centered care ( 35 ), and the integration of professionals’ lived experiences ( 36 ) all contributed to a more person-centered mental health care that promotes shared decision-making as a fundamental principle in practice in which no one perspective holds the wisdom. Shared decision-making is “an approach where clinicians and patients share the best available evidence when faced with the task of making decisions, and where patients are supported to consider options, to achieve informed preferences” ( 37 ). To realize and enable a more balanced relationship between professional and patient in shared decision-making, the interplay of healthcare professionals’ and patients’ skills, the support for a patient, and a good relationship between professional and patient are important to facilitate patients’ autonomy ( 38 ). Thus, mental health care professionals in the 21st century should collaborate, embrace ideography, and maximize effects mediated by therapeutic relationships and the healing effects of ritualized care interactions ( 39 ).
The DSM and its designed classifications, as well as their use in the community, can hinder a person-centered approach in which meaning is collaboratively derived for mental health issues, where a balanced relationship is needed, and where decisions are made together. We can demonstrate this with a brief example involving the ADHD classification and its criteria, highlighting how its design tends to marginalize individuals with mental distress, reducing their behavior to objectification from the clinician’s viewpoint. The ADHD classification delineates an ideal self that highly esteems disengagement from one’s feelings and needs, irrespective of contextual factors ( 40 ). This inclination is apparent in the criteria, including criterion 1a concerning inattention: “often avoids, dislikes, or is reluctant to engage in tasks that require sustained mental effort”. This indicates that disliking something is viewed as a symptom rather than a personal preference ( 40 ). Due to a lack of attention to the person’s meaning, a behavior that may be a preference of the individual can become a symptom of a disease. Another instance can be observed in criterion 2c: “often runs about or climbs in situations where it is inappropriate.” Although such behavior might be deemed inappropriate in certain contexts, many individuals derive enjoyment from running and climbing. In this way, ‘normal’ human behavior can be pathologized because there is no room for the meaning of the individual.
A parallel disengagement is evident in the DSM’s viewpoint on individuals with mental distress ( 40 ), as the diagnostic process appears to necessitate no interaction with an individual; instead, it fosters disengagement rather than engagement. For example, according to the DSM-5, when a child is “engaged in especially interesting activities,” the clinician is warned that the ‘symptoms’ may not manifest. Although it appears most fitting to assist the child by exploring their interests, clinicians are instead encouraged to seek situations the child finds uninteresting and assess whether the child can concentrate ( 40 ). If the child cannot concentrate, a ‘diagnosis’ might be made, and intervention can be initiated. This highlights that the design of the DSM promotes professionals to locate individual disorders in a person at face value without considering contextual factors, personal preferences, or other idiosyncrasies in a person’s present or history ( 41 ). It is also apparent that the term ‘symptom’ in the DSM implies an underlying entity as its cause, obscuring that it is a subjective criterion based on human assessment and interpretation ( 42 ). These factors make it difficult for the DSM in its current form to have a place in person-centered mental health care that promotes shared decision-making.
Diagnostic manuals like the DSM function similarly to standard operating procedures: they streamline decision-making and assist professionals in making approximate diagnoses when valid and specific measures are lacking or not readily accessible ( 43 ). However, the DSM is often (mis)used as a manual providing explanations for mental distress. This hinders a personalized approach that prioritizes the patient’s needs. Furthermore, this approach does not align with the principles of shared decision-making, as the best available evidence indicates that classifications are not explanations for mental distress. Also, disengagement is promoted in the design of the DSM, which is problematic in the person-centered transformation of mental health care in which a range of perspectives and human-centered interventions are needed. This paper aims to describe the position and role the DSM may have in a mental health care system that is evolving from a medical paradigm to a more self-contained profession in which there is increased accommodation of other perspectives. For this hypothesis and theory paper, we have formulated the following hypotheses:
(1) Reconsidering the DSM as a boundary object that can be used as a conversation piece allows for other perspectives on what is known about mental distress and aligns with the requirements of person-centered mental health care needed for shared decision-making;
(2) Embracing design approaches in redesigning the DSM to a conversation piece that uses spectra of mental distress instead of classifications will stimulate the integration of diverse perspectives and voices in reshaping mental health care.
The DSM originally aimed to develop a common language, and it has achieved that to some extent, but it now primarily serves as a common language among professionals. This does not align with the person-centered transformation in mental health care, where multiple perspectives come into play ( 32 , 44 ). In this section, we will address our first hypothesis: reconsidering the DSM as a boundary object that can be used as a conversation piece allows for other perspectives on what is known about mental distress and aligns with the requirements of person-centered mental health care needed for shared decision-making. First, we will examine several unintended consequences of classifications. After that, we propose considering the DSM as a boundary object to arrive at a real common language in which the perspective of people with lived experience is promoted. This perspective views the DSM as a conversation piece: a shared subject whose meaning can be attributed from various perspectives, on the premise that no single perspective is omniscient.
Classifications influence what we see or do not see, what is valorized, and what is silenced ( 45 ). DSM classifications and the process of getting them can provide validation and relief for some service users, while for others, it can be stigmatizing and distressing ( 46 , 47 ). The stigma people encounter can be worse than the mental problems themselves ( 48 ). The classification of people’s behaviors is not simply a passive reflection of pre-existing characteristics but is influenced by social and cultural factors. The evolution of neurasthenia serves as a fascinating illustration of the notable ontological changes in the design of the DSM, constantly reflecting and constructing reality. Initially, neurasthenia was considered a widespread mental disorder with presumed somatic roots. Still, it was subsequently discarded from use, only to resurface several decades later as a culture-bound manifestation of individual mental distress ( 49 ). Consequently, certain mental disorders, as depicted in the DSM, may not have existed in the same way as before the classifications were designed. This has been called ‘making up people’, which entails the argument that different kinds of human beings and human acts come into being hand in hand with our invention of the categories labeling them ( 50 ). Furthermore, it is important to consider that whether behavior is deemed dysfunctional or functional is always influenced by the prevailing norms and traditions within a specific society at a given time. Therefore, the individual meaning of the patient in its context is always more important than general descriptions and criteria of functional and dysfunctional behavior (i.e., ADHD climbing example).
Individuals might perceive themselves differently and develop emotions and behaviors partly due to the classifications imposed upon them. Over time, this can result in alterations to the classification itself, a phenomenon referred to as the classificatory looping effect ( 51 ). Moreover, when alterations are made to the world that align with the system’s depiction of reality, ‘blindness’ can occur ( 45 ). To illustrate, let us consider an altered scenario from Bowker and Star ( 45 ) in which all mental distress is categorized solely based on physiological factors. In this context, medical frameworks for observation and treatment are designed to recognize physical manifestations of distress, such as symptoms, and the available treatments are limited to physical interventions, such as psychotropic medications. Consequently, in such a design, mental distress can only be understood as a consequence of a chemical imbalance in the brain, making it nearly inconceivable to consider alternative conceptualizations or solutions. Thus, task forces responsible for designing mental disorder classifications should be acutely aware that they actively contribute to the co-creation of reality with the classifications they construct ( 49 ).
Another unintended consequence is the reification of classifications. Reification involves turning a broad and potentially diverse range of human experiences into a fixed and well-defined category. Take, for example, the classification of ADHD and its reification mechanisms (i.e., language choice, logical fallacies, genetic reductionism, and textual silence) ( 42 ). Teachers sometimes promote the classification of ADHD because they believe it acknowledges a prior feeling that something is the matter with a pupil. The classification is then seen as a plausible explanation for the emergence of specific behaviors, academic underperformance, or deviations from the expected norm within a peer group ( 52 , 53 ). At first glance, this may seem harmless. However, it reinforces the notion that a complex and multifaceted set of contextual behaviors, experiences, and psychological phenomena is instead a discrete, objective entity residing in the individual. This is associated with presuppositions in the DSM that are not explicitly articulated, such as attributing a mental disorder to the individual rather than the system, resulting in healthcare that is organized around the individual instead of around the system ( 54 ).
In this way, DSM classifications can decontextualize mental distress, leading to ‘disorderism’. Disorderism is defined as the systemic decontextualization of mental distress by framing it in terms of individual disorders ( 55 ). The process by which people are increasingly diagnosed and treated as having distinct, treatable individual disorders, exemplified by the overdiagnosis of ADHD in children and adolescents ( 56 ), while, at the same time, the services of psychiatry shape more areas of life, has been called the ‘psychiatrization of society’ ( 57 ). The psychiatrization of society encompasses a pervasive influence whereby reification and disorderism extend beyond clinical settings and infiltrate various facets of daily life. It is a double-edged sword that fosters increased awareness of mental health issues and seeks to reduce stigma but, at the same time, raises concerns about the overemphasis on medical models, potentially neglecting the broader social, cultural, and environmental factors that contribute to individual well-being as well as population salutogenesis ( 58 ).
Instead of serving as a scientific and professional tool for classification, the DSM can be reconsidered as a boundary object. When stakeholders with different objectives and needs have to work together constructively without making concessions, like patients and professionals in person-centered mental health care, objects can play a bridging role. Star and Griesemer ( 59 ) introduced the term boundary objects for this purpose.
“Boundary objects are objects that are plastic enough to adapt to the local needs and constraints of the different parties using them, yet robust enough to maintain a common identity in different locations. They are weakly structured in common use and become strongly structured in use in individual locations. They can be abstract or concrete. They have different meanings in different social worlds, but their structure is common enough to more than one world to make them recognizable, a means of translation.” ( 59 ).
Before exploring the benefits of a boundary object perspective for the DSM, it is important to note that it remains questionable whether the DSM in its current form can help establish a shared understanding or provide diagnostic, prognostic, or therapeutic value ( 60 – 63 ). To make the DSM more suitable for accommodating different perspectives and types of knowledge, the DSM task force can focus its redesign on leaving behind the discrete disease entities that classifications imply by creating spectra. This way of thinking has already found its way into the DSM-5, in which mental distress as a spectrum was introduced in the areas of autism and substance use and was nearly introduced for personality disorders; following these reconceptualizations, a psychosis spectrum was also proposed ( 43 ), but this proposition was eventually not adopted in the manual. As mental distress can be caused by an extensive range of factors and mechanisms that result from interactions in networks of behaviors and patterns with complex dynamics that unfold over time ( 64 ), spectra of mental distress may be more suitable for conversations about an individual’s narrative and needs in clinical practice, as each experience of mental distress is unique and contextual.
If the DSM is reconsidered as a boundary object that is intended to provide a shared language for interpreting mental distress while addressing the unintended consequences of classifications, it is also essential to consider where this language now primarily manifests itself, how it relates to shared decision-making, and the significant role it plays for patients in the treatment process. In recent decades, the DSM has positioned itself primarily as a professional tool for clinical judgment (see Figure 1 ). In this way, professionals have more or less acquired a monopoly on the language of classifications and the associated behaviors and complaints described in the DSM. It gives professionals a tool to pursue their objectives and lends legitimacy to their professional steps with patients, resulting in a lack of the equality from which different perspectives can be examined side by side. However, with shared decision-making, patients are expected to be engaged and to help determine the course of treatment; the language surrounding classifications and symptoms does not currently allow that to happen sufficiently.
Figure 1 DSM as a professional tool, adapted from Figure 1, ‘Design of a Digital Comic Creator (It’s Me) to Facilitate Social Skills Training for Children With Autism Spectrum Disorder: Design Research Approach’, by Terlouw et al., CC-BY ( 65 ).
This is where boundary objects come into play. The focused shaping of boundary objects can ensure a more equal role for different stakeholders ( 65 – 67 ). Boundary objects can also trigger perspective-making and -taking through a reflective dialogical learning mechanism ( 68 – 70 ), which ensures a better shared understanding of all perspectives. Boundary objects and their dialogical learning mechanisms also align well with co-design ( 71 ). If we consider the DSM a boundary object, it positions itself between the activity systems of professionals, patients, and other people close to the patient ( Figure 2 ). The boundary between activity systems represents not only the cultural differences and potential challenges in actions and interactions but also the significant value in establishing communication and collaboration ( 71 ). All sides can give meaning to the DSM language from their perspective. Considered effectively as a boundary object, the DSM serves as a conversation piece—a product that elicits and provides room for questions and comments from other people, literally one that encourages conversation ( 72 ). As a conversation piece rather than a determinative classification system, it can contribute to mapping the meaning of complaints, behaviors, signs, and patterns for different invested parties. It also provides space for the patient’s contextual factors, subjective experience, needs, and life events, which are essential to giving constructive meaning to mental distress. This allows for interpretative flexibility; professionals can structure their work, while patients can give meaning to their subjective experience of mental distress.
Figure 2 DSM as a boundary object, adapted from Figure 1, ‘Design of a Digital Comic Creator (It’s Me) to Facilitate Social Skills Training for Children With Autism Spectrum Disorder: Design Research Approach’, by Terlouw et al., CC-BY ( 65 ).
Because the DSM as a boundary object enables interpretative flexibility, it could be used to enact conversations and develop a shared understanding in partnership between the patient and the professional; patients are no longer ‘diagnosed’ with a disorder from a professional point of view. It is important to note that the conceptual history of understanding the diagnostic process as essentially dialogical, and not as a merely technical-quantitative procedure, began in the early 1900s. For example, in ‘General Psychopathology,’ first published in 1913, Karl Jaspers presented a phenomenological and comprehensive perspective for psychiatry with suggestions on how to understand psychopathological phenomena as experienced by the patient through empathic understanding, allowing the clinician to grasp the patient’s worldview and existential meanings ( 73 ). A century after its first publication, academics continue to leverage Jaspers’ ideas to critique modern operationalist epistemology ( 74 ). Following the notion of the diagnostic process as a dialogical one, the reconsideration of the DSM as a boundary object could accommodate the patient’s idiographic experience and the professional’s knowledge about mental distress by using these potential spectra as conversation pieces, shifting the power balance in clinical practice towards co-creation and dialogue. The spectra can then be explained as umbrella terms that indicate a collection of frequently occurring patterns and signs that can function as a starting point for a co-creative inquiry that promotes dialogue, aligning more closely with current empirical evidence of lived experience than using classifications as diagnoses.
Considering the advantages and strengths boundary objects bring to a mental health care system centered around shared decision-making and co-creation, the DSM could be a boundary object that is interpreted from various perspectives. Take, for example, altered perceptions, a characteristic commonly seen in people who receive a psychosis-related classification in clinical care. For some, these perceptions have person-specific meaning ( 75 , 76 ). By using the DSM as a boundary object and as a conversation piece, the patient and professional can give meaning by using the spectra in the manual as a starting point for a common language instead of using a classification to explain the distress. This requires a phenomenological and idiographic approach that considers person-specific meaning and idiosyncrasies. Consequently, diagnostic practices should be iterative to align with dynamic circumstances, with the individual’s narrative taking center stage in co-creation between professional and patient ( 41 , 49 ), as this reconsidered role fosters the engagement rather than the disengagement of patients. Additionally, the potential role of the DSM as a boundary object and conversation piece may also have a positive effect on societal and scientific levels, specifically on how mental distress is perceived and conceptualized. It can ‘systemically contextualize’ mental distress, which could counteract disorderism and the psychiatrization of society and, in the end, hopefully contribute to population salutogenesis.
If the DSM is reconsidered as a conversation piece in which spectra of mental distress replace classifications, it is important to address that these must be co-designed to accommodate diverse stakeholder perspectives and various types of knowledge side by side in clinical practice. Therefore, developers and designers need to embrace lived experience in the co-development of these spectra of mental distress to ensure patients’ engagement in clinical practice, as the patient effectively becomes a stakeholder of the DSM. This requires a different approach and procedure than DSM task forces used in past iterations. In this section, we will address our second hypothesis: embracing design approaches in redesigning the DSM into a conversation piece that uses spectra of mental distress instead of classifications will stimulate the integration of diverse perspectives and voices in reshaping mental health care. While we touch on the what (spectra of mental distress), we mainly focus on the how (the procedure that could be followed to arrive at the what). First, we will discuss the importance of lived experience leadership in design and research. Second, we argue that in the conceptual co-design of DSM spectra, lived experience leadership can be a way forward. Third, we take the stance that a designerly way of thinking and doing can shift task forces from the premature overcommitment of past iterations to iterative exploration. In the concluding paragraph, we propose a design procedure that embraces engagement and iteration as core values for developing robust and flexible spectra of mental distress that are meaningful for service users and professionals.
First, let us briefly examine the evolution of lived experience in design and science over time to provide context for why engaging people with lived experience in the design of spectra of mental distress is important for innovation. Since the 1960s, people with lived experience have tried to let their voices be heard, but initially to no avail, and their civil rights movement of reformist psychiatry was labeled ‘anti-psychiatry’ ( 77 ). Around the turn of the millennium, lived experience received increased recognition and eventually became an important pillar of knowledge that informed practice and continues to do so on various levels of mental health care ( 34 , 36 , 78 – 81 ). While there is currently growing attention to the perspective of lived experience in, for example, mental health research ( 79 , 80 , 82 , 83 ) and mental health care design and innovation ( 84 – 90 ), overall, involvement remains too low in the majority of research and design projects ( 88 , 91 , 92 ). While there has been a significant increase in the annual publication of articles claiming to employ collaborative methods with people with lived experience, these studies often use vague terms to suggest a higher engagement level than is actually the case ( 93 ). This has led to initiatives such as that of The Lancet Psychiatry to facilitate transparent reporting of lived experience work ( 93 , 94 ).
Although the involvement of people with lived experience and its reporting need attention in order to prevent tokenism and co-optation ( 89 ), some notable user-driven initiatives have resulted in innovative design and research that improved mental health care and exemplify why such engagement should be mandatory. The Co-Design Living Labs program is one such initiative. It exemplifies an adaptive and embedded approach for people with lived experience of mental distress to drive mental health research from design to translation ( 95 ). In this community-based approach, people with lived experience, their caregivers, family members, and support networks collaboratively drive research with university researchers, which is very innovative considering the relatively low engagement of people with lived experience in general mental health research. Another example is the development of person-specific tapering medication, initiated by people with lived experience of withdrawal symptoms. Because of the lack of a systematic and professional response to severe and persistent withdrawal, people with lived experience began to devise practical methods to safely discontinue medications on their own. This resulted in the accumulation of experience-based knowledge about withdrawal, ultimately leading to the co-creation of what is now known as tapering strips ( 81 ). The development of these tapering strips shows that people with lived experience have novel experience-based ideas for design and research that can result in human-centered innovation. Both examples underline the importance of human-centered design in which people with lived experience and knowledge are taken seriously, and why the participation era requires that individuals with lived experience are decision-makers from the project’s start in order to produce novel perspectives for innovative design and research ( 88 , 93 ).
Engaging people with lived experience of mental distress in redesigning the DSM towards a spectrum-based guideline is of special importance, albeit a more conceptual design task than the earlier examples. What mental distress is remains a fundamental philosophical and ontological question that should be addressed in partnership, as it sits at the core of how mental health care is organized. To allow novel ontologies to reach their full potential and act as drivers of a landscape of promising innovative scientific and clinical approaches, investment is required in development and elaboration ( 30 ). This, as well as the epistemic pluralism among theoretical models of mental health problems ( 31 ), makes it evident that there is currently no coherent, accepted explanation of or consensus on what mental distress is and how it exists. Without a clear etiological understanding, the most logical first step should be to involve people with lived experience of mental distress in the redevelopment of the DSM. Accounts from people with lived experience of mental distress are directly relevant to the design of the DSM, as they provide a more comprehensive and accurate understanding of mental distress and its treatment ( 96 ). Moreover, the DSM’s conceptualization as a major determinative classification system could be standing at the core of psychiatry’s “identity crisis”, in which checklists of symptoms replaced thoughtful diagnoses even though, after decades of brain research, no biomarker has been established for any disorder defined in the DSM ( 10 , 97 ).
Design approaches can help DSM task forces prioritize integrating lived experiences to co-create a framework that can accommodate a range of perspectives and make it viable as a conversation piece. As DSM classifications do not reflect reality ( 98 ), listening to people with firsthand experience is necessary. The CHIME framework – a conceptual framework of people’s experiences of recovery – shows, for example, a clear need to diagnose not solely based on symptoms but also in consideration of people’s stages in their journey of personal recovery ( 80 ). Further, bottom-up research shows that the lived experience perspective of psychosis can look very different from conventional psychiatric conceptualizations ( 82 ). This is also the case for the lived experience of depression ( 99 ). Design approaches can ensure that such much-needed perspectives and voices are heeded in developing meaningful innovations ( 88 ), which brings us back to the design of the DSM. Although the DSM aims to conceptualize the reality of mental distress, engaging people with experience of living with mental distress has never been prioritized by the DSM task force as an important epistemic resource. This is evidenced by the historically low engagement of people with lived experience and their contexts. For example, although “individuals with mental disorders and families of individuals with mental disorders” participated in providing feedback in the DSM-5 revision process ( 14 ), when and how they were involved, what feedback they gave, and how it was incorporated are not described. According to the Involvement Matrix ( 100 ) — a matrix that can be used to assess the contribution of patients in research and design — giving feedback can be classified under the roles of ‘listeners’ or ‘co-thinkers,’ which are both low-involvement roles.
Moreover, a review of the members of the DSM task forces and working groups listed in the introductions of the DSMs shows patients have never been part of the DSM task force and thus never been part of the decision-making process ( 96 ). Human-centered design is difficult to achieve when people with lived experience are not involved from preparation to implementation but are only asked to give feedback on expert consensus ( 88 ).
In the participation era, using a design approach in mental health care without engaging important stakeholders can be problematic. For example, it is evident that the involvement of people with lived experience changes the nature of an intervention dramatically, as the unique first-hand experiences, insights about mental states, and individual meanings and needs that surface in design activities often differ from what general scientific and web-related resources suggest ( 101 , 102 ). Further, clear differences are reported between the ideas of designers, researchers, and clinicians on one side and service users’ ideas of meaningful interventions on the other ( 102 , 103 ). Thus, the meaningful engagement of people with lived experience in design processes always exposes gaps between general research and the interests and lives of service users ( 104 ). This makes the participation of people with lived experience in developing innovative concepts — and, as such, in the conceptual design of DSM spectra of mental distress — essential, because their absence in design processes may lead to ineffective outcomes ( 102 ). This design perspective may explain some of the negative effects of the DSM. The classifications were intended as empirical constructs reflecting reality, yet phenomena such as reification and the classificatory looping effect emerged ( 42 , 51 ). From a design perspective, the emergence of these effects may have a simpler explanation than previously presumed: premature over-commitment in the DSM’s design processes without input from individuals with firsthand experience.
The centrist development approach used to design the DSM implicitly frames people with mental distress as ‘ordinary people,’ resulting in ‘average solutions’ because their experiences are decontextualized and lumped together at a group level — eventually leading to general descriptions intended for universal application. Instead, a more human-centered, iterative design process in which people with lived experience play an important role, preferably as decision-makers, can promote the design of spectra of mental distress that leave room for idiosyncrasies that correspond with people’s living environments on an individual level. This can help ensure that the spectra are actually helpful for shared decision-making between patients and professionals and resonate in person-centered mental health care. A design approach is well suited to this aim because design processes are not searching for a singular ‘truth’ but rather exploring the multiple ‘truths’ that may be relevant in different contexts ( 105 ). This can be of added value in conceptualizing spectra of mental distress, which are known to have characteristics that overlap between people but also a unique phenomenology and contextual foundation for each individual — in the case of mental distress, there literally are multiple truths, depending on whom and what you ask, and in what time and place. Furthermore, design approaches enable exploration and discovery ( 106 ). Designers consistently draw cues from the environment and introduce new variables into that same environment to eventually discover what does and does not work ( 107 ). This explorative attitude also ensures the discovery of unique insights, such as people’s experiential knowledge and contexts. Therefore, from a design perspective, predetermining solutions might be ineffective for arriving at DSM innovation. This is, for example, aptly described by Owens et al. ( 101 ):
“… the iterative nature of the participatory process meant that, although a preliminary programme for the whole workshop series was drawn up at the outset, plans had to be revised in response to the findings from each session. The whole process required flexibility, a constantly open mind and a willingness to embrace the unexpected”.
These insights illustrate the core of design that can guide the development of future DSM iterations: design enables the task force to learn about mental health problems without an omniscient perspective by iteratively developing and testing conceptualizations in the environment in partnership with the target group. As participatory design studies consistently demonstrate, solutions cannot be predetermined solely based on research and resources. The involvement of individuals with lived experience and their contexts invariably uncovers crucial serendipitous insights that challenge the perspectives on the problem. This can expose important misconceptions, such as the tendency to underestimate the complexity of human experience and decontextualize it from its environment.
People with lived experience need to be highly involved in developing meaningful spectra of mental distress to guide conversations in clinical practice. Now that we have a comprehensive understanding of what design approaches can offer the development procedure of a lived experience-informed DSM, we will highlight these insights in this paragraph.
In the design procedure of a future DSM, academic research can be used to learn about people’s experiences of mental distress but never as the sole source for the development of spectra of mental distress. In this way, designers and researchers in mental health care need to involve people with lived experience at the heart of design processes as partners and come to unique insights together without an omniscient perspective. The aim should not be to design general descriptions but to design spectra that are flexible enough to adapt to local needs and constraints for the various parties using them, yet robust enough to maintain a common identity across different locations. This allows the spectra to have different meanings in different social worlds, while at the same time their structure remains common enough for more than one world to recognize them.
Conceptualizations of spectra of mental distress must not be predetermined, and there should be no overcommitment to concepts in the early phases of the project. Thus, the task force should avoid viewing mental distress too narrowly too early in the process. This enables the evolution of lived experience-based spectra in an iterative design and test process. The starting point should be an open representation of mental distress, from which the task force can discover, together with people with lived experience, how it could best be conceptualized and what language should be used. This allows room for exploring and discovering what works and aligns with patients’ needs and experiences in their living environments and professionals’ needs in their work environments.
Researchers and designers should realize that designing and testing conceptualizations in partnership with people with lived experience also results in unique knowledge that can guide the development — designing and testing the developed concepts is a form of research. For example, exploring if a certain designed spectrum resonates as a conversation piece between patients and professionals in clinical practice provides qualitative insights that cannot be predicted beforehand. In this way, science and design can complement the innovation of the DSM: science benefits from a design approach, while design benefits from scientific methods ( 108 ). Flexible navigation between design and science would indicate that the developed DSM can be meaningful as a conversation piece in clinical practice.
Good design comes before effective science, as innovations are useless if not used, even if they are validated by science ( 85 ). Although the development of the DSM is often described as a scientific process, our analysis indicates that it is more accurately described as a design process. As a design process, it requires a methodologically sound design approach that is suitable for involving patients and people with lived experience. Co-design is a strong candidate for this purpose, as a systematic review showed that this approach had the highest level of participant involvement in mental health care innovation ( 89 ). Although people with lived experience have never been involved as decision-makers, this should be the aim of the design process of a novel DSM in the participation era. This promotes lived experience leadership in design and, ultimately, contributes to more effective science.
Involving people with lived experience as decision-makers in redesigning the DSM must avoid tokenism and co-optation and address power imbalances. The first step the task force can take is to use the Involvement Matrix ( 100 ), together with people with lived experience, to systematically and transparently plan, reflect on, and report everyone’s contribution to the design process. This has not been prioritized in past DSM revisions. In the end, transparency and honesty about collaboration can support the empowerment of people with firsthand perspectives and shift the power imbalance towards co-creation for more human-centered mental health care. This is needed, as the involvement of people with lived experience in design and research processes is currently too low and obscured by vague terms and poor reporting.
In this hypothesis and theory paper, we have argued that the current role of the DSM, as an operating manual for professionals, can be reconsidered as a boundary object and conversation piece for patients and professionals in clinical practice that stimulates dialogue about mental distress. In this discussion, we will address five themes. First, while we argued that research acknowledges the absence of empirical support for biological causation, we believe characterizing the DSM as entirely non-empirical would be incorrect. Second, we discuss our perspective on balancing between a too narrowly medical perspective and a too broadly individualized perspective. Third, we discuss why mental health care also needs novel methods of inquiry if the DSM is reconsidered as a conversation piece. Fourth, we discuss that while we are confident that design approaches can be fruitful for redesigning the DSM, some challenges regarding tokenism and co-optation must be addressed. We conclude by examining various methodological challenges and offering recommendations for the co-design process of the DSM.
The DSM is too deeply entrenched in mental health care to simply be discarded. The DSM is embedded not only in mental health care but also in society. For instance, a DSM classification is necessary in the Netherlands to get mental health care reimbursement, qualify for additional education test time, or receive subsidized assisted living. Moreover, it is ingrained in research and healthcare funding, making it unproductive and somewhat dangerous to discard without an alternative, as doing so may jeopardize access to care and impact insurance coverage for treatment and services that people with mental distress need. Therefore, we posited that instead of discarding the DSM, its role should be reconsidered in a mental health care system centered around shared decision-making and co-creation, to eliminate pervasive effects such as the disengagement of patients, reification, disorderism, and the psychiatrization of society. However, the DSM categories are not entirely a priori constructed, as is sometimes claimed: the psychiatric symptom space and diagnostic categories took shape in the late nineteenth century through decades of observation ( 109 ).
While this adds important nuance to the idea that the design of the DSM is entirely non-empirical, it does not invalidate the argument that the DSM design is grounded in a potentially false ontology ( 64 ). Although absence of evidence is not evidence of absence, and the biological context plays some role, research shows that various other dimensions of life, including the social, historical, relational, and environmental, also influence mental distress, yet they are significantly underemphasized in the DSM's current design. We believe we have shown that this manifests most prominently in the various highly arbitrary classification designs, which can confuse both professional and patient and appear to provide little meaningful guidance for clinical practice, design, and research. That is why we have proposed redesigning the next iteration of the DSM to focus primarily on formulating a set of spectra of distress. Reconsidering the DSM leverages one of its greatest strengths: the DSM is not bound by an analytic procedure but is guided by scientific debate ( 17 ). Further, developments and amendments to psychiatric classification systems have always reflected wider social and cultural developments ( 110 ). The recognition, implementation, and impact of the DSM in Western countries can even be seen as a reason not to focus on developing alternative models but rather to redesign the DSM so that it conceptually aligns with the social developments, scientific findings, and needs of people in the 21st century, as it is already deeply embedded in existing systems. Given that DSM classifications are now recognized as inaccurate depictions of the reality of mental distress ( 98 ) and that mental health care is simultaneously shifting towards person-centeredness and shared decision-making, we believe the proposals in this article are not radical but rather the most meaningful way forward to accommodate diverse perspectives.
From a classical psychopathological perspective, integrating the lived experiences of those with mental distress into the redevelopment of the DSM as a boundary object presents certain conceptual challenges. For example, uncritically overemphasizing individual experiences might lead to an underappreciation of psychopathological manifestations such as altered perceptions. Conversely, excluding people with lived experience from the DSM’s design processes has produced its own conceptual and epistemic issues, such as undervaluing the idiographic, contextual, and phenomenological aspects of individual mental distress. Therefore, we argue that the balance between these differing but crucial perspectives should emerge from a co-design procedure for a revised DSM. Determining this balance before obtaining results from such a process would be premature and arbitrary and would contradict our recommendation to avoid over-commitment in the early stages of the design process. As people with lived experience have never previously been involved, it is impossible to predict the outcomes of a co-design procedure or to hypothesize a clear demarcation between these perspectives in the DSM’s conceptual development beforehand. As seen in past iterations, prematurely drawing rigid lines could hinder the design process and result in design fixation. From the perspective of boundary objects, the DSM cannot have one dominant perspective if it is to function effectively. All stakeholders must be able to give meaning to the spectra of mental distress from their own activity systems, and these perspectives should be equal in order to create a shared awareness of the different perspectives involved. A DSM designed as a boundary object triggers dialogical learning mechanisms, ensuring that the multiple perspectives are harmonized rather than adjusted to fit one another, so that no single perspective prevails over the others and no forced consensus is pursued ( 71 , 111 ).
If the DSM is reconsidered and designed as a conversation piece, and classifications are replaced by spectra, a unique language needs to be co-developed between the patient and the professional in clinical practice, and an equal, collaborative relationship between them is essential. For example, the person-specific meanings of altered perceptions need to be explored, as they have clinical relevance. However, for such purposes, current diagnostic methods in clinical practice are limiting because they are highly linguistic and tailored to classification systems and to the needs and praxis of professionals. This can impede the DSM’s effectiveness as a tool for dialogue. Expressing the uniqueness of an experience of mental distress is difficult, especially during a mental crisis, let alone communicating it effectively to a professional. While people with mental distress can effectively communicate their behaviors and complaints, which fits the current use of the DSM, they have far more embodied and experiential knowledge of their distress. How people cope with their mental distress in the contexts they live in is very difficult to put into words without first making these personal and contextual insights tangible ( 41 ), yet this is essential information when the DSM is used as a boundary object and conversation piece. To support the patient in making this knowledge tangible, the professional becomes more of a facilitator than an expert, emphasizing therapeutic relationships and the healing effects of ritualized care interactions ( 39 ). This transformation requires novel co-creative methods of inquiry ( 41 ) and professional training ( 39 ).
Therefore, expanding the diagnostic toolkit with innovative and creative tools and embracing professionals such as art therapists, social workers, and advanced nurse practitioners to enable and support patients to convey their narratives and needs in their own way is essential if the DSM is to be used as a boundary object and conversation piece.
Despite longstanding calls for the APA to include people with lived experience in the decision-making processes for diagnostic criteria, the DSM-5 task force did not accept this inclusion. The task force believed that incorporating these perspectives could compromise objectivity in the scientific process ( 96 ). This mindset ensures that research, design, and practice remain predominantly shaped by academics and professionals, causing conventional mental health care to perpetuate itself: it continues to repeat the same approaches and consequently achieves the same results. Therefore, people with lived experience should have more influence in the participation era to accelerate change in mental health care. This proposition comes with challenges regarding power imbalances that need addressing. While it is acknowledged that individuals with lived experience yield unique insights and can serve as strong collaborators and knowledgeable contributors, they have never been given decision-making authority in design processes in mental health care ( 88 , 89 , 92 ) or in the DSM’s development processes ( 96 ). This lack of authority impedes lived experience leadership ( 91 , 112 ) and consequently stands in the way of effectively reconsidering and redesigning the DSM. To avoid tokenism, the DSM revision process should not settle for low engagement and involvement but set the bar higher by redressing power imbalances ( 113 ). Furthermore, in the co-design process of the DSM, the task force should not view objectivity as the opposite of subjectivity or strive for consensus. Instead, it should value group discussions and disagreements, encouraging stakeholders to debate and explore the sources of their differing perspectives and knowledge ( 96 ).
Shifting towards lived experience leadership starts with perceiving and engaging people with lived experiences of mental distress as experts of their experiences in iterative design and research processes and giving them this role in revising the DSM.
Merely positioning people with lived experience as partners and decision-makers is insufficient; there are also significant methodological concerns regarding the execution of design research in mental health care. Although iteration and participation are essential for design in mental health care, as designers focus on the unmet needs of service users and ways to improve care ( 114 ), research shows that design is not always executed iteratively and that end users are not always involved. For example, about one-third of projects that designed mental health interventions did not adopt an iterative process ( 85 ). The engagement of end users in design processes in mental health is also not yet common practice. For instance, a systematic review of serious games in mental health for anxiety and depression found that only half of these games, even while reporting a participatory approach, were designed with input from the intended end users ( 115 ). A systematic review of design processes aimed at designing innovations for people with psychotic symptoms corroborates these findings, as fewer than half of the studies demonstrated a high level of participant involvement in their design processes ( 89 ).
The low level of involvement and the lack of iterative approaches in mental health care design offer valuable insights for future processes. If the DSM task force aims to adopt a co-design approach, it should incorporate these lessons to enhance design effectiveness. First, the task force must understand that design has a different aim, culture, and methods than the sciences ( 116 ). The scientific approach typically implies investigating the natural world through controlled experiments, classifications, and analysis, emphasizing objectivity, rationality, neutrality, and a commitment to truth. In contrast, a design approach focuses on studying the artificial world, employing methods such as modeling, pattern formation, and synthesis, guided by core values of practicality, ingenuity, empathy, and concern for appropriateness. Second, the task force should consider the known challenges it will encounter and must navigate so that the two paradigms can be complementary in practice ( 117 ). Further, the task force should consider that design is by nature exploratory, iterative, and uncertain, a social form of inquiry and synthesis that is never perfect and never quite finished ( 84 ). This requires tolerating ambiguity and having trust ( 101 ). Lastly, more transparency in the participatory work of the task force is called for: being honest, being detailed, addressing power imbalances, being participatory in reporting the participatory approach, and being excited and enthusiastic about going beyond tokenistic engagement ( 118 ).
Despite these challenges, transforming psychiatric diagnoses by reconsidering and redesigning the DSM as a boundary object and conversation piece could be a step in the right direction. It would shift the power balance towards shared ownership in a participation era that fosters dialogue instead of diagnosis. We hope this hypothesis and theory paper can give decisive impetus to the much-needed debate on, and development of, psychiatric diagnoses and, in the end, contribute to a lived experience-informed psychiatric epistemology. Furthermore, as the product of an equal co-production process between various disciplines and types of knowledge, this paper shows that it is possible to harmonize perspectives on a controversial topic such as the DSM.
The original contributions presented in the study are included in the article/supplementary material. Further inquiries can be directed to the corresponding author/s.
LV: Conceptualization, Methodology, Project administration, Writing – original draft, Writing – review & editing. GT: Conceptualization, Methodology, Visualization, Writing – original draft, Writing – review & editing. JVO: Conceptualization, Writing – original draft, Writing – review & editing. SM: Conceptualization, Writing – original draft, Writing – review & editing. JV: Writing – original draft, Writing – review & editing. NB: Writing – original draft, Writing – review & editing.
The author(s) declare financial support was received for the research, authorship, and/or publication of this article. We appreciate the financial support of the FAITH Research Consortium, GGZ-VS University of Applied Science, as well as from the NHL Stenden University of Applied Sciences PhD program. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
We thank the reviewers for their thorough reading of our manuscript and valuable comments, which improved the quality of our hypothesis and theory paper.
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
1. Fischer BA. A review of American psychiatry through its diagnoses. J Nervous Ment Dis. (2012) 200:1022–30. doi: 10.1097/NMD.0b013e318275cf19
2. Grob GN. Origins of DSM-I: a study in appearance and reality. Am J Psychiatry . (1991) 148:421–31. doi: 10.1176/ajp.148.4.421
3. Brendel DH. Healing psychiatry: A pragmatic approach to bridging the science/humanism divide. Harvard Rev Psychiatry . (2004) 12:150–7. doi: 10.1080/10673220490472409
4. Braslow JT. Therapeutics and the history of psychiatry. Bull History Med . (2000) 74:794–802. doi: 10.1353/bhm.2000.0161
5. Kawa S, Giordano J. A brief historicity of the Diagnostic and Statistical Manual of Mental Disorders: Issues and implications for the future of psychiatric canon and practice. Philosophy Ethics Humanities Med . (2012) 7:2. doi: 10.1186/1747-5341-7-2
6. Mayes R, Horwitz AV. DSM-III and the revolution in the classification of mental illness. J History Behav Sci . (2005) 41:249–67. doi: 10.1002/(ISSN)1520-6696
7. American Psychiatric Association. Diagnostic and Statistical Manual of Mental Disorders. 3rd edition. Washington, DC: American Psychiatric Association (1980).
8. Gambardella A. Science and innovation: the US pharmaceutical industry during the 1980s . New York: Cambridge University Press (1995). doi: 10.1017/CBO9780511522031
9. Klerman GL, Vaillant GE, Spitzer RL, Michels R. A debate on DSM-III. Am J Psychiatry . (1984) 141:539–53. doi: 10.1176/ajp.141.4.539
10. Scull A. American psychiatry in the new millennium: a critical appraisal. Psychol Med. (2021) 51:1–9. doi: 10.1017/S0033291721001975
11. American Psychiatric Association. Diagnostic and statistical manual of mental disorders . 4th edition. Washington, D.C: American Psychiatric Association (1994).
12. Shaffer D. A participant’s observations: preparing DSM-IV. Can J Psychiatry . (1996) 41:325–9. doi: 10.1177/070674379604100602
13. Follette WC, Houts AC. Models of scientific progress and the role of theory in taxonomy development: A case study of the DSM. J Consulting Clin Psychol . (1996) 64:1120–32. doi: 10.1037//0022-006X.64.6.1120
14. American Psychiatric Association. Diagnostic and statistical manual of mental disorders: DSM-5-TR. 5th ed. Washington, DC: American Psychiatric Association Publishing (2022). doi: 10.1176/appi.books.9780890425787
15. Pickersgill MD. Debating DSM-5: diagnosis and the sociology of critique. J Med Ethics . (2013) 40:521–5. doi: 10.1136/medethics-2013-101762
16. Hyman SE. The diagnosis of mental disorders: the problem of reification. Annu Rev Clin Psychol . (2010) 6:155–79. doi: 10.1146/annurev.clinpsy.3.022806.091532
17. Zachar P, Kendler KS. Psychiatric disorders: A conceptual taxonomy. Am J Psychiatry . (2007) 164:557–65. doi: 10.1176/ajp.2007.164.4.557
18. Francis A, First MB, Pincus HA. DSM-IV Guidebook . Washington DC: American Psychiatric Association (1995).
19. Spitzer RL, Williams JBW, Endicott J. Standards for DSM-5 reliability. Am J Psychiatry . (2012) 169:537–7. doi: 10.1176/appi.ajp.2012.12010083
20. Gómez-Carrillo A, Kirmayer LJ, Aggarwal NK, Bhui KS, Fung KP-L, Kohrt BA, et al. Integrating neuroscience in psychiatry: a cultural–ecosocial systemic approach. Lancet Psychiatry . (2023) 10:296–304. doi: 10.1016/s2215-0366(23)00006-8
21. Morgan C, Charalambides M, Hutchinson G, Murray RM. Migration, ethnicity, and psychosis: toward a sociodevelopmental model. Schizophr Bull . (2010) 36:655–64. doi: 10.1093/schbul/sbq051
22. Howes OD, Murray RM. Schizophrenia: an integrated sociodevelopmental-cognitive model. Lancet . (2014) 383:1677–87. doi: 10.1016/S0140-6736(13)62036-X
23. Alegría M, NeMoyer A, Falgàs Bagué I, Wang Y, Alvarez K. Social determinants of mental health: where we are and where we need to go. Curr Psychiatry Rep . (2018) 20. doi: 10.1007/s11920-018-0969-9
24. Jeste DV, Pender VB. Social determinants of mental health. JAMA Psychiatry . (2022) 79:283–4. doi: 10.1001/jamapsychiatry.2021.4385
25. Huggard L, Murphy R, O’Connor C, Nearchou F. The social determinants of mental illness: A rapid review of systematic reviews. Issues Ment Health Nurs . (2023) 44:1–11. doi: 10.1080/01612840.2023.2186124
26. Kirkbride JB, Anglin DM, Colman I, Dykxhoorn J, Jones PB, Patalay P, et al. The social determinants of mental health and disorder: evidence, prevention and recommendations. World psychiatry: Off J World Psychiatr Assoc (WPA) . (2024) 23:58–90. doi: 10.1002/wps.21160
27. Alon N, Macrynikola N, Jester DJ, Keshavan M, Reynolds ICF, Saxena S, et al. Social determinants of mental health in major depressive disorder: umbrella review of 26 meta-analyses and systematic reviews. Psychiatry Res . (2024) 335:115854. doi: 10.1016/j.psychres.2024.115854
28. Read J, Bentall RP. Negative childhood experiences and mental health: theoretical, clinical and primary prevention implications. Br J Psychiatry . (2012) 200:89–91. doi: 10.1192/bjp.bp.111.096727
29. van Os J, Kenis G, Rutten BPF. The environment and schizophrenia. Nature . (2010) 468:203–12. doi: 10.1038/nature09563
30. Köhne A, de Graauw LP, Leenhouts-van der Maas R, van Os J. Clinician and patient perspectives on the ontology of mental disorder: a qualitative study. Front Psychiatry . (2023) 14:1081925. doi: 10.3389/fpsyt.2023.1081925
31. Richter D, Dixon J. Models of mental health problems: a quasi-systematic review of theoretical approaches. J Ment Health . (2022) 32:1–11. doi: 10.1080/09638237.2021.2022638
32. World Health Organization. Guidance on community mental health services: Promoting person-centred and rights-based approaches (2021). Available online at: https://www.who.int/publications/i/item/9789240025707 .
33. Huber M, Knottnerus JA, Green L, Horst HVD, Jadad AR, Kromhout D, et al. How should we define health? BMJ . (2011) 343:d4163–3. doi: 10.1136/bmj.d4163
34. Davidson L. The recovery movement: implications for mental health care and enabling people to participate fully in life. Health Affairs . (2016) 35:1091–7. doi: 10.1377/hlthaff.2016.0153
35. Dixon LB, Holoshitz Y, Nossel I. Treatment engagement of individuals experiencing mental illness: review and update. World Psychiatry . (2016) 15:13–20. doi: 10.1002/wps.20306
36. Karbouniaris S. Let’s tango! Integrating professionals’ lived experience in the transformation of mental health services. (2023). Available online at: https://hdl.handle.net/1887/3640655.
37. Elwyn G, Laitner S, Coulter A, Walker E, Watson P, Thomson R. Implementing shared decision making in the NHS. BMJ . (2010) 341:c5146–6. doi: 10.1136/bmj.c5146
38. Oueslati R, Woudstra AJ, Alkirawan R, Reis R, Zaalen Yv, Slager MT, et al. What value structure underlies shared decision making? A qualitative synthesis of models of shared decision making. Patient Educ Couns . (2024) 124:108284. doi: 10.1016/j.pec.2024.108284
39. van Os J, Guloksuz S, Vijn TW, Hafkenscheid A, Delespaul P. The evidence-based group-level symptom-reduction model as the organizing principle for mental health care: time for change? World Psychiatry . (2019) 18:88–96. doi: 10.1002/wps.20609
40. te Meerman S. ADHD and the power of generalization: exploring the faces of reification [dissertation]. University of Groningen (2019). doi: 10.33612/diss.84379221
41. Veldmeijer L, Terlouw G, van ’t Veer J, van Os J, Boonstra N. Design for mental health: can design promote human-centred diagnostics? Design Health. (2023) 7:1–19. doi: 10.1080/24735132.2023.2171223
42. te Meerman S, Freedman JE, Batstra L. ADHD and reification: Four ways a psychiatric construct is portrayed as a disease. Front Psychiatry . (2022) 13:1055328. doi: 10.3389/fpsyt.2022.1055328
43. Guloksuz S, van Os J. The slow death of the concept of schizophrenia and the painful birth of the psychosis spectrum. Psychol Med. (2017) 48:229–44. doi: 10.1017/S0033291717001775
44. van Os J, Scheepers F, Milo M, Ockeloen G, Guloksuz S, Delespaul P. “It has to be better, otherwise we will get stuck”: a review of novel directions for mental health reform and introducing pilot work in the Netherlands. Clin Pract Epidemiol Ment Health. (2023) 19. doi: 10.2174/0117450179271206231114064736
45. Bowker GC, Star SL. Sorting Things Out: Classification and Its Consequences (Inside Technology). Cambridge, MA: MIT Press (1999). doi: 10.7551/mitpress/6352.001.0001
46. Perkins A, Ridler J, Browes D, Peryer G, Notley C, Hackmann C. Experiencing mental health diagnosis: a systematic review of service user, clinician, and carer perspectives across clinical settings. Lancet Psychiatry . (2018) 5:747–64. doi: 10.1016/S2215-0366(18)30095-6
47. Ben-Zeev D, Young MA, Corrigan PW. DSM-V and the stigma of mental illness. J Ment Health . (2010) 19:318–27. doi: 10.3109/09638237.2010.492484
48. Thornicroft G. Shunned: discrimination against people with mental illness . Oxford: Oxford University Press (2009).
49. Köhne ACJ. The ontological status of a psychiatric diagnosis: the case of neurasthenia. Philosophy Psychiatry Psychol. (2019) 26:E1–E11. doi: 10.1353/ppp.2019.0008
50. Hacking I. Kinds of people: moving targets. Proceedings of the British Academy . Oxford: Oxford University Press Inc (2007). doi: 10.5871/bacad/9780197264249.003.0010
51. Hacking I. The social construction of what? Harvard: Harvard University Press (1999).
52. Wienen AW, Sluiter MN, Thoutenhoofd E, de Jonge P, Batstra L. The advantages of an ADHD classification from the perspective of teachers. Eur J Special Needs Educ . (2019) 34:1–14. doi: 10.1080/08856257.2019.1580838
53. Franz DJ, Richter T, Lenhard W, Marx P, Stein R, Ratz C. The influence of diagnostic labels on the evaluation of students: a multilevel meta-analysis. Educ Psychol Rev . (2023) 35. doi: 10.1007/s10648-023-09716-6
54. Köhne ACJ. In search of a better ontology of mental disorder. (2022). doi: 10.33540/1591
55. de Ridder B, van Hulst BM. Disorderism: what it is and why it’s a problem. Tijdschr Psychiatr . (2023) 65:163–6.
56. Kazda L, Bell K, Thomas R, McGeechan K, Sims R, Barratt A. Overdiagnosis of Attention-Deficit/Hyperactivity Disorder in children and adolescents. JAMA Network Open . (2021) 4. doi: 10.1001/jamanetworkopen.2021.5335
57. Beeker T, Mills C, Bhugra D, te Meerman S, Thoma S, Heinze M, et al. Psychiatrization of society: A conceptual framework and call for transdisciplinary research. Front Psychiatry . (2021) 12:645556. doi: 10.3389/fpsyt.2021.645556
58. van Os J, Guloksuz S. Population salutogenesis—The future of psychiatry? JAMA Psychiatry. (2024) 81:115. doi: 10.1001/jamapsychiatry.2023.4582
59. Star SL, Griesemer JR. Institutional ecology, `Translations’ and boundary objects: amateurs and professionals in berkeley’s museum of vertebrate zoology, 1907-39. Soc Stud Sci . (1989) 19:387–420. doi: 10.1177/030631289019003001
60. Marsman A, Pries L-K, ten Have M, de Graaf R, van Dorsselaer S, Bak M, et al. Do current measures of polygenic risk for mental disorders contribute to population variance in mental health? Schizophr Bull . (2020). doi: 10.1093/schbul/sbaa086
61. van Os J. Personalized psychiatry: Geen vervanger van persoonlijke psychiatrie. Tijdschrift voor Psychiatr . (2018) 60:199–204.
62. Hyman SE. Psychiatric disorders: grounded in human biology but not natural kinds. Perspect Biol Med . (2021) 64:6–28. doi: 10.1353/pbm.2021.0002
63. Schleim S. Why mental disorders are brain disorders. And why they are not: ADHD and the challenges of heterogeneity and reification. Front Psychiatry . (2022) 13:943049. doi: 10.3389/fpsyt.2022.943049
64. Köhne ACJ. The relationalist turn in understanding mental disorders: from essentialism to embracing dynamic and complex relations. Philosophy Psychiatry Psychol . (2020) 27:119–40. doi: 10.1353/ppp.2020.0020
65. Terlouw G, van ’t Veer JT, Prins JT, Kuipers DA, Pierie J-PEN. Design of a digital comic creator (It’s me) to facilitate social skills training for children with autism spectrum disorder: design research approach. JMIR Ment Health. (2020) 7:e17260. doi: 10.2196/17260
66. Terlouw G, Kuipers D, van ’t Veer J, Prins JT, Pierie JPEN. The development of an escape room–based serious game to trigger social interaction and communication between high-functioning children with autism and their peers: iterative design approach. JMIR Serious Games . (2021) 9:e19765. doi: 10.2196/19765
67. Kuipers DA, Terlouw G, Wartena BO, Prins JT, Pierie JPEN. Maximizing authentic learning and real-world problem-solving in health curricula through psychological fidelity in a game-like intervention: development, feasibility, and pilot studies. Med Sci Educator . (2018) 29:205–14. doi: 10.1007/s40670-018-00670-5
68. Kajamaa A. Boundary breaking in a hospital. Learn Organ . (2011) 18:361–77. doi: 10.1108/09696471111151710
69. Sajtos L, Kleinaltenkamp M, Harrison J. Boundary objects for institutional work across service ecosystems. J Service Manage . (2018) 29:615–40. doi: 10.1108/JOSM-01-2017-0011
70. Jensen S, Kushniruk A. Boundary objects in clinical simulation and design of eHealth. Health Inf J . (2014) 22:248–64. doi: 10.1177/1460458214551846
71. Terlouw G, Kuipers D, Veldmeijer L, van ’t Veer J, Prins J, Pierie J-P. Boundary objects as dialogical learning accelerators for social change in design for health: systematic review. JMIR Hum Factors . (2021). doi: 10.2196/31167
72. Wiener HJD. Conversation pieces: the role of products in facilitating conversation. Dukespace . (2017). Available online at: https://hdl.handle.net/10161/14430 .
73. Jaspers K. General psychopathology. Baltimore, MD: Johns Hopkins University Press (1998).
74. Parnas J, Sass LA, Zahavi D. Rediscovering psychopathology: the epistemology and phenomenology of the psychiatric object. Schizophr Bull . (2012) 39:270–7. doi: 10.1093/schbul/sbs153
75. Feyaerts J, Henriksen MG, Vanheule S, Myin-Germeys I, Sass LA. Delusions beyond beliefs: a critical overview of diagnostic, aetiological, and therapeutic schizophrenia research from a clinical-phenomenological perspective. Lancet Psychiatry . (2021) 8:237–49. doi: 10.1016/S2215-0366(20)30460-0
76. Ritunnano R, Kleinman J, Whyte Oshodi D, Michail M, Nelson B, Humpston CS, et al. Subjective experience and meaning of delusions in psychosis: a systematic review and qualitative evidence synthesis. Lancet Psychiatry . (2022) 9:458–76. doi: 10.1016/S2215-0366(22)00104-3
77. Köhne ACJ, van Os J. Precision psychiatry: promise for the future or rehash of a fossilised foundation? Psychol Med. (2021) 51:1409–11. doi: 10.1017/S0033291721000271
78. Boevink WA. From being a disorder to dealing with life: an experiential exploration of the association between trauma and psychosis. Schizophr Bull . (2005) 32:17–9. doi: 10.1093/schbul/sbi068
79. Slade M, Longden E. Empirical evidence about recovery and mental health. BMC Psychiatry . (2015) 15. doi: 10.1186/s12888-015-0678-4
80. Leamy M, Bird V, Boutillier CL, Williams J, Slade M. Conceptual framework for personal recovery in mental health: systematic review and narrative synthesis. Br J Psychiatry . (2011) 199:445–52. doi: 10.1192/bjp.bp.110.083733
81. Groot PC, van Os J. How user knowledge of psychotropic drug withdrawal resulted in the development of person-specific tapering medication. Ther Adv Psychopharmacol . (2020) 10:204512532093245. doi: 10.1177/2045125320932452
82. Fusar-Poli P, Estradé A, Stanghellini G, Venables J, Onwumere J, Messas G, et al. The lived experience of psychosis: A bottom-up review co-written by experts by experience and academics. World Psychiatry . (2022) 21:168–88. doi: 10.1002/wps.20959
83. Jakobsson C, Genovesi E, Afolayan A, Bella-Awusah T, Omobowale O, Buyanga M, et al. Co-producing research on psychosis: a scoping review on barriers, facilitators and outcomes. Int J Ment Health Syst . (2023) 17. doi: 10.1186/s13033-023-00594-7
84. Orlowski S, Matthews B, Bidargaddi N, Jones G, Lawn S, Venning A, et al. Mental health technologies: designing with consumers. JMIR Hum Factors . (2016) 3. doi: 10.2196/humanfactors.4336
85. Vial S, Boudhraâ S, Dumont M. Human-centered design approaches in digital mental health interventions: exploratory mapping review. JMIR Ment Health . (2021) 9. doi: 10.2196/35591
86. Tindall RM, Ferris M, Townsend M, Boschert G, Moylan S. A first-hand experience of co-design in mental health service design: Opportunities, challenges, and lessons. Int J Ment Health Nurs . (2021) 30. doi: 10.1111/inm.12925
87. Schouten SE, Kip H, Dekkers T, Deenik J, Beerlage-de Jong N, Ludden GDS, et al. Best-practices for co-design processes involving people with severe mental illness for eMental health interventions: a qualitative multi-method approach. Design Health . (2022) 6:316–44. doi: 10.1080/24735132.2022.2145814
88. Veldmeijer L, Terlouw G, van Os J, van Dijk O, van ’t Veer J, Boonstra N. The involvement of service users and people with lived experience in mental health care innovation through design: systematic review. JMIR Ment Health . (2023) 10:e46590. doi: 10.2196/46590
89. Veldmeijer L, Terlouw G, van Os J, van ’t Veer J, Boonstra N. The frequency of design studies targeting people with psychotic symptoms and features in mental health care innovation: A research letter of a secondary data analysis. JMIR Ment Health . (2023) 11. doi: 10.2196/54202
90. Hawke LD, Sheikhan NY, Bastidas-Bilbao H, Rodak T. Experience-based co-design of mental health services and interventions: A scoping review. SSM Ment Health . (2024) 5:100309. doi: 10.1016/j.ssmmh.2024.100309
91. Scholz B, Gordon S, Happell B. Consumers in mental health service leadership: A systematic review. Int J Ment Health Nurs . (2016) 26:20–31. doi: 10.1111/inm.12266
92. Brotherdale R, Berry K, Branitsky A, Bucci S. Co-producing digital mental health interventions: A systematic review. Digital Health . (2024) 10. doi: 10.1177/20552076241239172
93. Scholz B. Mindfully reporting lived experience work. Lancet Psychiatry . (2024) 11:168–8. doi: 10.1016/S2215-0366(24)00007-5
94. Davis S, Pinfold V, Catchpole J, Lovelock C, Senthi B, Kenny A. Reporting lived experience work. Lancet Psychiatry . (2024) 11:8–9. doi: 10.1016/S2215-0366(23)00402-9
95. Palmer VJ, Bibb J, Lewis M, Densley K, Kritharidis R, Dettmann E, et al. A co-design living labs philosophy of practice for end-to-end research design to translation with people with lived-experience of mental ill-health and carer/family and kinship groups. Front Public Health . (2023) 11:1206620. doi: 10.3389/fpubh.2023.1206620
96. Tekin Ş. Participatory interactive objectivity in psychiatry. Philosophy Sci . (2022), 1–20. doi: 10.1017/psa.2022.47
97. Gardner C, Kleinman A. Medicine and the mind — The consequences of psychiatry’s identity crisis. New Engl J Med . (2019) 381:1697–9. doi: 10.1056/NEJMp1910603
98. Kendler KS. Potential lessons for DSM from contemporary philosophy of science. JAMA Psychiatry . (2022) 79:99. doi: 10.1001/jamapsychiatry.2021.3559
99. Fusar-Poli P, Estradé A, Stanghellini G, Esposito CM, Rosfort R, Mancini M, et al. The lived experience of depression: a bottom-up review co-written by experts by experience and academics. World Psychiatry . (2023) 22:352–65. doi: 10.1002/wps.21111
100. Smits D-W, van Meeteren K, Klem M, Alsem M, Ketelaar M. Designing a tool to support patient and public involvement in research projects: the Involvement Matrix. Res Involvement Engagement . (2020) 6. doi: 10.1186/s40900-020-00188-4
101. Owens C, Farrand P, Darvill R, Emmens T, Hewis E, Aitken P. Involving service users in intervention design: a participatory approach to developing a text-messaging intervention to reduce repetition of self-harm. Health Expectations . (2010) 14:285–95. doi: 10.1111/hex.2011.14.issue-3
102. Nakarada-Kordic I, Hayes N, Reay SD, Corbet C, Chan A. Co-designing for mental health: creative methods to engage young people experiencing psychosis. Design Health . (2017) 1:229–44. doi: 10.1080/24735132.2017.1386954
103. Orlowski SK, Lawn S, Venning A, Winsall M, Jones GM, Wyld K, et al. Participatory research as one piece of the puzzle: A systematic review of consumer involvement in design of technology-based youth mental health and well-being interventions. JMIR Hum Factors . (2015) 2:e12. doi: 10.2196/humanfactors.4361
104. Gammon D, Strand M, Eng LS. Service users’ perspectives in the design of an online tool for assisted self-help in mental health: a case study of implications. Int J Ment Health Syst . (2014) 8. doi: 10.1186/1752-4458-8-2
105. Wendt T. Design for Dasein: understanding the design of experiences . San Bernardino, California: Thomas Wendt (2015).
106. Bateson G. Steps to an ecology of mind . Chicago: University Of Chicago Press (1972).
107. Schön DA. The Reflective Practitioner: How Professionals Think in Action . Aldershot, U.K: Ashgate, Cop (1991).
108. Verkerke GJ, van der Houwen EB, Broekhuis AA, Bursa J, Catapano G, McCullagh P, et al. Science versus design; comparable, contrastive or conducive? J Mech Behav Biomed Mater . (2013) 21:195–201. doi: 10.1016/j.jmbbm.2013.01.009
109. Zachar P. Psychopathology beyond psychiatric symptomatology. Philosophy Psychiatry Psychol . (2020) 27:141–3. doi: 10.1353/ppp.2020.0021
110. Foucault M. Madness and Civilization: A History of Insanity in the Age of Reason . London: Routledge (1961). doi: 10.4324/9780203278796
111. Star SL. This is not a boundary object: reflections on the origin of a concept. Science Technology Hum Values . (2010) 35:601–17. doi: 10.1177/0162243910377624
112. Jones N. Lived experience leadership in peer support research as the new normal. Psychiatr Serv . (2022) 73:125. doi: 10.1176/appi.ps.73201
113. Scholz B. We have to set the bar higher: towards consumer leadership, beyond engagement or involvement. Aust Health Rev . (2022) 46. doi: 10.1071/AH22022
114. Rivard L, Lehoux P, Hagemeister N. Articulating care and responsibility in design: A study on the reasoning processes guiding health innovators’ “care-making” practices. Design Stud . (2021) 72:100986. doi: 10.1016/j.destud.2020.100986
115. Dekker MR, Williams AD. The use of user-centered participatory design in serious games for anxiety and depression. Games Health J . (2017) 6:327–33. doi: 10.1089/g4h.2017.0058
116. Cross N. Designerly ways of knowing . London: Springer (2006).
117. Groeneveld B, Dekkers T, Boon B, D’Olivo P. Challenges for design researchers in healthcare. Design Health . (2018) 2:305–26. doi: 10.1080/24735132.2018.1541699
118. Scholz B, Stewart S, Pamoso A, Gordon S, Happell B, Utomo B. The importance of going beyond consumer or patient involvement to lived experience leadership. Int J Ment Health Nurs . (2023) 33:1–4. doi: 10.1111/inm.13282
Keywords: psychiatry, diagnosis, design, innovation, mental health care
Citation: Veldmeijer L, Terlouw G, van Os J, te Meerman S, van ‘t Veer J and Boonstra N (2024) From diagnosis to dialogue – reconsidering the DSM as a conversation piece in mental health care: a hypothesis and theory. Front. Psychiatry 15:1426475. doi: 10.3389/fpsyt.2024.1426475
Received: 01 May 2024; Accepted: 22 July 2024; Published: 06 August 2024.
Copyright © 2024 Veldmeijer, Terlouw, van Os, te Meerman, van ‘t Veer and Boonstra. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY) . The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Lars Veldmeijer, [email protected]
Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.
Hypothesis testing example: suppose you want to test whether there is a relationship between gender and height. Based on your knowledge of human physiology, you formulate a hypothesis that men are, on average, taller than women. The p-value then shows the likelihood of your data occurring under the null hypothesis, and p-values help you decide whether that null hypothesis can be rejected.
Hypothesis testing is an act in statistics whereby an analyst tests an assumption regarding a population parameter. The methodology employed by the analyst depends on the nature of the data used and the reason for the analysis.
A statistical hypothesis test is a method of statistical inference used to decide whether the data sufficiently support a particular hypothesis. A statistical hypothesis test typically involves a calculation of a test statistic. Then a decision is made, either by comparing the test statistic to a critical value or, equivalently, by evaluating a p-value computed from the test statistic.
Hypothesis testing is a technique used to verify whether the results of an experiment are statistically significant. It involves setting up a null hypothesis and an alternative hypothesis. Three types of tests are commonly conducted under hypothesis testing: the z-test, the t-test, and the chi-square test.
Hypothesis testing involves five key steps, each critical to validating a research hypothesis using statistical methods: formulate the hypotheses, writing your research question as a null hypothesis (H0) and an alternative hypothesis (HA); collect data specifically aimed at testing the hypothesis; choose an appropriate statistical test and significance level; compute the test statistic and its p-value; and decide whether to reject the null hypothesis.
A hypothesis test consists of five steps: 1. State the hypotheses. State the null and alternative hypotheses; these two hypotheses need to be mutually exclusive, so if one is true then the other must be false. 2. Determine a significance level. Decide on a significance level (alpha) before examining the data. 3. Compute the test statistic from the sample data. 4. Find the p-value, or compare the test statistic to the critical value. 5. Draw a conclusion: reject the null hypothesis if the p-value is below the significance level; otherwise, fail to reject it.
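The five steps can be sketched as a one-sample z-test in Python. The scenario and all numbers below (bag fill weights with mu0 = 50 g and a known sigma = 9 g) are assumed purely for illustration; only the standard library is used (Python 3.8+ for `statistics.NormalDist`).

```python
from statistics import NormalDist

# Step 1: hypotheses. H0: mu = 50 vs. H1: mu != 50 (two-sided).
# Step 2: significance level, chosen before looking at the data.
mu0, sigma, alpha = 50.0, 9.0, 0.05   # assumed illustrative values
n, xbar = 36, 52.5                    # assumed sample size and sample mean

# Step 3: test statistic z = (xbar - mu0) / (sigma / sqrt(n)).
z = (xbar - mu0) / (sigma / n ** 0.5)

# Step 4: two-sided p-value under the standard normal null distribution.
p_value = 2 * (1 - NormalDist().cdf(abs(z)))

# Step 5: conclusion.
decision = "reject H0" if p_value < alpha else "fail to reject H0"
print(round(z, 3), round(p_value, 4), decision)
```

Here z ≈ 1.667 and p ≈ 0.096 > 0.05, so the null hypothesis is not rejected at the 5% level.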
Hypothesis testing is a crucial procedure to perform when you want to make inferences about a population using a random sample. These inferences include estimating population properties such as the mean, differences between means, proportions, and the relationships between variables. This post provides an overview of statistical hypothesis testing.
In hypothesis testing, the goal is to see if there is sufficient statistical evidence to reject a presumed null hypothesis in favor of a conjectured alternative hypothesis. The null hypothesis is usually denoted \(H_0\) while the alternative hypothesis is usually denoted \(H_1\). A hypothesis test is a statistical decision; the conclusion will either be to reject the null hypothesis in favor of the alternative, or to fail to reject it.
A hypothesis test is a statistical inference method used to test the significance of a proposed (hypothesized) relation between population statistics (parameters) and their corresponding sample estimators. In other words, hypothesis tests are used to determine if there is enough evidence in a sample to support a hypothesis about the entire population.
Test statistic: \(z = \frac{\bar{x} - \mu_0}{\sigma / \sqrt{n}}\), calculated as part of the testing of the hypothesis. The p-value is the probability that the test statistic will take on values more extreme than the observed test statistic, given that the null hypothesis is true.
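Plugging assumed numbers into this formula, a short Python sketch (standard library only; the sample values and the one-sided scenario are hypothetical):

```python
from statistics import NormalDist

# Hypothetical one-sided test: H0: mu = 100 vs. H1: mu > 100, sigma known.
xbar, mu0, sigma, n = 103.0, 100.0, 10.0, 25

z = (xbar - mu0) / (sigma / n ** 0.5)   # (103 - 100) / (10 / 5) = 1.5
p_value = 1 - NormalDist().cdf(z)       # P(Z >= 1.5) under H0
print(z, round(p_value, 4))
```

The p-value (about 0.067) is exactly the probability, computed under the null hypothesis, of a test statistic at least as extreme as the one observed.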
HYPOTHESIS TESTING. A clinical trial begins with an assumption or belief, and then proceeds to either prove or disprove this assumption. In statistical terms, this belief or assumption is known as a hypothesis. Counterintuitively, what the researcher believes in (or is trying to prove) is called the "alternate" hypothesis, and the opposite is called the null hypothesis.
What is Hypothesis Testing? Hypothesis testing is an assessment method that allows researchers to determine the plausibility of a hypothesis. It involves testing an assumption about a specific population parameter to know whether it's true or false. These population parameters include variance, standard deviation, and median.
Hypothesis testing is a form of statistical inference that uses data from a sample to draw conclusions about a population parameter or a population probability distribution. First, a tentative assumption is made about the parameter or distribution. This assumption is called the null hypothesis and is denoted by H0.
First, the technical definition of power is 1 − β: given a particular alternative hypothesis, and given our null hypothesis, sample size, and decision rule (alpha = 0.05), power is the probability that the test rejects the null when that alternative is true. Second, power is really intuitive in its definition.
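That definition translates directly into code. A sketch for a one-sided z-test, where every scenario number (means, sigma, sample size) is assumed for illustration; standard library only:

```python
from statistics import NormalDist

nd = NormalDist()
# Assumed scenario: H0: mu = 100 vs. H1: mu > 100, known sigma = 15, n = 36,
# evaluated at a specific alternative mu1 = 105.
mu0, mu1, sigma, n, alpha = 100.0, 105.0, 15.0, 36, 0.05

se = sigma / n ** 0.5              # standard error of the sample mean
z_crit = nd.inv_cdf(1 - alpha)     # one-sided critical value (about 1.645)

beta = nd.cdf(z_crit - (mu1 - mu0) / se)  # P(fail to reject H0 | mu = mu1)
power = 1 - beta                          # power = 1 - beta
print(round(power, 3))
```

With these numbers the test has power of roughly 0.64: it would detect a true mean of 105 about 64% of the time.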
hypothesis testing, In statistics, a method for testing how accurately a mathematical model based on one set of data predicts the nature of other data sets generated by the same process. Hypothesis testing grew out of quality control, in which whole batches of manufactured items are accepted or rejected based on testing relatively small samples. An initial hypothesis (null hypothesis) might state, for example, that a batch meets the required quality standard.
Hypothesis testing is a statistical method used to determine if there is enough evidence in a sample data to draw conclusions about a population. It involves formulating two competing hypotheses, the null hypothesis (H0) and the alternative hypothesis (Ha), and then collecting data to assess the evidence.
This tests whether the population parameter is equal to, versus less than, some specific value: H0: μ = 12 vs. H1: μ < 12. The critical region is in the left tail, and the critical value is a negative value that defines the rejection zone. [Figure: the rejection zone for a left-sided hypothesis test.]
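A left-tailed test like H0: μ = 12 vs. H1: μ < 12 can be sketched in Python. The sample figures below are assumed for illustration, and σ is treated as known (standard library only):

```python
from statistics import NormalDist

nd = NormalDist()
# Left-tailed test: H0: mu = 12 vs. H1: mu < 12 (assumed known sigma).
mu0, sigma, n, xbar, alpha = 12.0, 1.6, 16, 11.1, 0.05

z = (xbar - mu0) / (sigma / n ** 0.5)   # test statistic
z_crit = nd.inv_cdf(alpha)              # negative critical value (about -1.645)
reject = z < z_crit                     # rejection zone is the left tail
p_value = nd.cdf(z)                     # P(Z <= z) under H0
print(round(z, 2), reject, round(p_value, 4))
```

Here z ≈ −2.25 falls below the critical value of about −1.645, i.e., inside the left-tail rejection zone, so H0 is rejected.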
Significance tests give us a formal process for using sample data to evaluate the likelihood of some claim about a population value. Learn how to conduct significance tests and calculate p-values to see how likely a sample result is to occur by random chance. You'll also see how we use p-values to make conclusions about hypotheses.
The frequentist, or traditional, approach to hypothesis testing is a method that draws its conclusions from the current data alone. The supposed truths and assumptions are based on the current data, and a set of two hypotheses is formulated.
Hypothesis Testing Steps. There are 5 main hypothesis testing steps, which will be outlined in this section. The steps are: Determine the null hypothesis: In this step, the statistician should state the assumption of no effect or no difference that the test will attempt to reject.
Hypothesis testing in statistics refers to analyzing an assumption about a population parameter. It is used to make an educated guess about an assumption using statistics: with sample data, hypothesis testing assesses how plausible the assumption is for the entire population from which the sample is taken.
3. One-Sided vs. Two-Sided Testing. When it's time to test your hypothesis, it's important to leverage the correct testing method. The two most common hypothesis testing methods are one-sided and two-sided tests, or one-tailed and two-tailed tests, respectively. Typically, you'd leverage a one-sided test when you have a strong conviction, formed before collecting the data, about the direction of the effect; otherwise, a two-sided test is the safer choice.
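The difference is easy to see numerically: for the same observed statistic, the two-tailed p-value is twice the one-tailed one. The z value below is assumed for illustration (standard library only):

```python
from statistics import NormalDist

nd = NormalDist()
z = 1.75   # assumed observed test statistic

p_one_sided = 1 - nd.cdf(z)              # H1: parameter is greater (right tail)
p_two_sided = 2 * (1 - nd.cdf(abs(z)))   # H1: parameter differs (both tails)
print(round(p_one_sided, 4), round(p_two_sided, 4))
```

At α = 0.05 this statistic would be significant under a one-sided test (p ≈ 0.040) but not under a two-sided test (p ≈ 0.080), which is why the choice of test must be made before looking at the data.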
Hypothesis testing is an important procedure in statistics. It evaluates two mutually exclusive statements about a population to determine which statement is best supported by the sample data. When we say that findings are statistically significant, it is thanks to a hypothesis test.
The definition of mental disorder in the DSM-5 was thereby conceptualized as: "… a syndrome characterized by clinically significant disturbance in an individual's cognition, emotion regulation, or behavior that reflects a dysfunction in the psychological, biological, or developmental processes underlying mental functioning." (14).