Survey Research | Definition, Examples & Methods

Published on August 20, 2019 by Shona McCombes. Revised on June 22, 2023.

Survey research means collecting information about a group of people by asking them questions and analyzing the results. To conduct an effective survey, follow these six steps:

  • Determine who will participate in the survey
  • Decide the type of survey (mail, online, or in-person)
  • Design the survey questions and layout
  • Distribute the survey
  • Analyze the responses
  • Write up the results

Surveys are a flexible method of data collection that can be used in many different types of research.

Table of contents

  • What are surveys used for?
  • Step 1: Define the population and sample
  • Step 2: Decide on the type of survey
  • Step 3: Design the survey questions
  • Step 4: Distribute the survey and collect responses
  • Step 5: Analyze the survey results
  • Step 6: Write up the survey results
  • Other interesting articles
  • Frequently asked questions about surveys

Surveys are used as a method of gathering data in many different fields. They are a good choice when you want to find out about the characteristics, preferences, opinions, or beliefs of a group of people.

Common uses of survey research include:

  • Social research: investigating the experiences and characteristics of different social groups
  • Market research: finding out what customers think about products, services, and companies
  • Health research: collecting data from patients about symptoms and treatments
  • Politics: measuring public opinion about parties and policies
  • Psychology: researching personality traits, preferences, and behaviours

Surveys can be used in both cross-sectional studies, where you collect data just once, and in longitudinal studies, where you survey the same sample several times over an extended period.


Before you start conducting survey research, you should already have a clear research question that defines what you want to find out. Based on this question, you need to determine exactly who you will target to participate in the survey.

Populations

The target population is the specific group of people that you want to find out about. This group can be very broad or relatively narrow. For example:

  • The population of Brazil
  • US college students
  • Second-generation immigrants in the Netherlands
  • Customers of a specific company aged 18-24
  • British transgender women over the age of 50

Your survey should aim to produce results that can be generalized to the whole population. That means you need to carefully define exactly who you want to draw conclusions about.

Several common research biases can arise if your survey is not generalizable, particularly sampling bias and selection bias. The presence of these biases has serious repercussions for the validity of your results.

It’s rarely possible to survey the entire population of your research – it would be very difficult to get a response from every person in Brazil or every college student in the US. Instead, you will usually survey a sample from the population.

The required sample size depends on the size of the population, how confident you need to be in the results, and the margin of error you can accept. You can use an online sample size calculator to work out how many responses you need.

There are many sampling methods that allow you to generalize to broad populations. In general, though, the sample should aim to be representative of the population as a whole. The larger and more representative your sample, the more valid your conclusions. Again, beware of various types of sampling bias as you design your sample, particularly self-selection bias, nonresponse bias, undercoverage bias, and survivorship bias.
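Online sample size calculators typically implement Cochran's formula with a finite-population correction. The sketch below is illustrative: the 95% confidence level, 5% margin of error, and 0.5 expected proportion are conventional defaults, not values prescribed by this article.

```python
import math

def required_sample_size(population, confidence_z=1.96, margin=0.05, proportion=0.5):
    """Cochran's formula with a finite-population correction.

    confidence_z: z-score for the confidence level (1.96 ~ 95%).
    margin: acceptable margin of error (0.05 = +/-5%).
    proportion: expected proportion; 0.5 is the most conservative choice.
    """
    n0 = (confidence_z ** 2) * proportion * (1 - proportion) / margin ** 2
    # The finite-population correction shrinks the requirement for small populations.
    n = n0 / (1 + (n0 - 1) / population)
    return math.ceil(n)

print(required_sample_size(1_000_000))  # -> 385
print(required_sample_size(500))        # -> 218
```

Note how the required sample barely changes once the population is large: surveying all of Brazil and surveying one large city demand similar sample sizes at the same precision.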

There are two main types of survey:

  • A questionnaire, where a list of questions is distributed by mail, online, or in person, and respondents fill it out themselves.
  • An interview, where the researcher asks a set of questions by phone or in person and records the responses.

Which type you choose depends on the sample size and location, as well as the focus of the research.

Questionnaires

Sending out a paper survey by mail is a common method of gathering demographic information (for example, in a government census of the population).

  • You can easily access a large sample.
  • You have some control over who is included in the sample (e.g. residents of a specific region).
  • The response rate is often low, and the results are at risk of biases like self-selection bias.

Online surveys are a popular choice for students doing dissertation research, due to the low cost and flexibility of this method. There are many online tools available for constructing surveys, such as SurveyMonkey and Google Forms.

  • You can quickly access a large sample without constraints on time or location.
  • The data is easy to process and analyze.
  • The anonymity and accessibility of online surveys mean you have less control over who responds, which can lead to biases like self-selection bias.

If your research focuses on a specific location, you can distribute a written questionnaire to be completed by respondents on the spot. For example, you could approach the customers of a shopping mall or ask all students to complete a questionnaire at the end of a class.

  • You can screen respondents to make sure only people in the target population are included in the sample.
  • You can collect time- and location-specific data (e.g. the opinions of a store’s weekday customers).
  • The sample size will be smaller, so this method is less suitable for collecting data on broad populations and is at risk of sampling bias.

Oral interviews are a useful method for smaller sample sizes. They allow you to gather more in-depth information on people’s opinions and preferences. You can conduct interviews by phone or in person.

  • You have personal contact with respondents, so you know exactly who will be included in the sample in advance.
  • You can clarify questions and ask for follow-up information when necessary.
  • The lack of anonymity may cause respondents to answer less honestly, and there is more risk of researcher bias.

Like questionnaires, interviews can be used to collect quantitative data: the researcher records each response as a category or rating and statistically analyzes the results. But they are more commonly used to collect qualitative data: the interviewees’ full responses are transcribed and analyzed individually to gain a richer understanding of their opinions and feelings.

Next, you need to decide which questions you will ask and how you will ask them. It’s important to consider:

  • The type of questions
  • The content of the questions
  • The phrasing of the questions
  • The ordering and layout of the survey

Open-ended vs closed-ended questions

There are two main forms of survey questions: open-ended and closed-ended. Many surveys use a combination of both.

Closed-ended questions give the respondent a predetermined set of answers to choose from. A closed-ended question can include:

  • A binary answer (e.g. yes/no or agree/disagree)
  • A scale (e.g. a Likert scale with five points ranging from strongly agree to strongly disagree)
  • A list of options with a single answer possible (e.g. age categories)
  • A list of options with multiple answers possible (e.g. leisure interests)

Closed-ended questions are best for quantitative research. They provide you with numerical data that can be statistically analyzed to find patterns, trends, and correlations.

Open-ended questions are best for qualitative research. This type of question has no predetermined answers to choose from. Instead, the respondent answers in their own words.

Open questions are most common in interviews, but you can also use them in questionnaires. They are often useful as follow-up questions to ask for more detailed explanations of responses to the closed questions.

The content of the survey questions

To ensure the validity and reliability of your results, you need to carefully consider each question in the survey. All questions should be narrowly focused with enough context for the respondent to answer accurately. Avoid questions that are not directly relevant to the survey’s purpose.

When constructing closed-ended questions, ensure that the options cover all possibilities. If the list of options can’t be exhaustive, add an “other” field.

Phrasing the survey questions

In terms of language, the survey questions should be as clear and precise as possible. Tailor the questions to your target population, keeping in mind their level of knowledge of the topic. Avoid jargon or industry-specific terminology.

Survey questions are at risk of biases like social desirability bias, the Hawthorne effect, or demand characteristics. It’s critical to use language that respondents will easily understand, and avoid words with vague or ambiguous meanings. Make sure your questions are phrased neutrally, with no indication that you’d prefer a particular answer or emotion.

Ordering the survey questions

The questions should be arranged in a logical order. Start with easy, non-sensitive, closed-ended questions that will encourage the respondent to continue.

If the survey covers several different topics or themes, group together related questions. You can divide a questionnaire into sections to help respondents understand what is being asked in each part.

If a question refers back to or depends on the answer to a previous question, the two should be placed directly next to one another.


Before you start, create a clear plan for where, when, how, and with whom you will conduct the survey. Determine in advance how many responses you require and how you will gain access to the sample.

When you are satisfied that you have created a strong research design suitable for answering your research questions, you can conduct the survey through your method of choice – by mail, online, or in person.

There are many methods of analyzing the results of your survey. First, you have to process the data, usually with the help of a computer program to sort all the responses. You should also clean the data by removing incomplete or incorrectly completed responses.
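The cleaning step can be as simple as dropping any response with an unanswered question. A minimal sketch, using hypothetical response records where `None` marks a missing answer:

```python
# Hypothetical raw responses: None marks an unanswered question.
responses = [
    {"id": 1, "q1": "yes", "q2": 4,    "q3": "no"},
    {"id": 2, "q1": "no",  "q2": None, "q3": "yes"},  # incomplete -> dropped
    {"id": 3, "q1": "yes", "q2": 5,    "q3": "yes"},
]

# Keep only fully completed responses.
clean = [r for r in responses if all(v is not None for v in r.values())]
print(len(clean))  # -> 2
```

In practice you would also filter out straight-lining, impossible values, and duplicate submissions, but the principle is the same: define an explicit rule and apply it uniformly.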

If you asked open-ended questions, you will have to code the responses by assigning labels to each response and organizing them into categories or themes. You can also use more qualitative methods, such as thematic analysis, which is especially suitable for analyzing interviews.
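Coding can also be partially automated. As an illustrative sketch (the answers and keyword codebook below are invented for the example; real coding usually combines this with manual review):

```python
from collections import Counter

# Hypothetical open-ended answers and a simple keyword-based codebook.
answers = [
    "The checkout was slow and confusing",
    "Great prices, but delivery was slow",
    "Easy to use and great prices",
]
codebook = {
    "speed":     ["slow", "fast"],
    "price":     ["price", "prices", "cheap", "expensive"],
    "usability": ["easy", "confusing"],
}

def code_answer(text, codebook):
    """Return the set of themes whose keywords appear in the answer."""
    text = text.lower()
    return {theme for theme, kws in codebook.items() if any(k in text for k in kws)}

theme_counts = Counter(t for a in answers for t in code_answer(a, codebook))
print(theme_counts)  # each theme appears in 2 of the 3 answers
```

Each answer can carry several codes at once, which is why the counts are tallied per theme rather than per answer.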

Statistical analysis is usually conducted using programs like SPSS or Stata. The same set of survey data can be subject to many analyses.

Finally, when you have collected and analyzed all the necessary data, you will write it up as part of your thesis, dissertation, or research paper.

In the methodology section, you describe exactly how you conducted the survey. You should explain the types of questions you used, the sampling method, when and where the survey took place, and the response rate. You can include the full questionnaire as an appendix and refer to it in the text if relevant.

Then introduce the analysis by describing how you prepared the data and the statistical methods you used to analyze it. In the results section, you summarize the key results from your analysis.

In the discussion and conclusion, you give your explanations and interpretations of these results, answer your research question, and reflect on the implications and limitations of the research.

If you want to know more about statistics, methodology, or research bias, make sure to check out some of our other articles with explanations and examples.

  • Student’s t-distribution
  • Normal distribution
  • Null and Alternative Hypotheses
  • Chi square tests
  • Confidence interval
  • Quartiles & Quantiles
  • Cluster sampling
  • Stratified sampling
  • Data cleansing
  • Reproducibility vs Replicability
  • Peer review
  • Prospective cohort study

Research bias

  • Implicit bias
  • Cognitive bias
  • Placebo effect
  • Hawthorne effect
  • Hindsight bias
  • Affect heuristic
  • Social desirability bias

A questionnaire is a data collection tool or instrument, while a survey is an overarching research method that involves collecting and analyzing data from people using questionnaires.

A Likert scale is a rating scale that quantitatively assesses opinions, attitudes, or behaviors. It is made up of 4 or more questions that measure a single attitude or trait when response scores are combined.

To use a Likert scale in a survey, you present participants with Likert-type questions or statements and a continuum of response options, usually with 5 or 7 possible responses, to capture their degree of agreement.

Individual Likert-type questions are generally considered ordinal data, because the items have a clear rank order but not an even distribution.

Overall Likert scale scores are sometimes treated as interval data. These scores are considered to have directionality and even spacing between them.
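Combining items into an overall scale score is usually just a sum, with negatively worded items reverse-coded first. A minimal sketch (the four responses and the reverse-coded item below are hypothetical):

```python
# Hypothetical 5-point Likert items (1 = strongly disagree, 5 = strongly agree).
def likert_score(item_responses, reverse_items=(), points=5):
    """Sum item responses, reverse-coding negatively worded items."""
    total = 0
    for i, r in enumerate(item_responses):
        # Reverse-coding maps 1 -> 5, 2 -> 4, etc. on a 5-point scale.
        total += (points + 1 - r) if i in reverse_items else r
    return total

respondent = [4, 5, 2, 4]  # item at index 2 is negatively worded: 6 - 2 = 4
print(likert_score(respondent, reverse_items={2}))  # -> 17
```

Forgetting to reverse-code is a common error that deflates scale reliability, so it's worth checking item wording before summing.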

The type of data determines what statistical tests you should use to analyze your data.

The priorities of a research design can vary depending on the field, but you usually have to specify:

  • Your research questions and/or hypotheses
  • Your overall approach (e.g., qualitative or quantitative)
  • The type of design you’re using (e.g., a survey, experiment, or case study)
  • Your sampling methods or criteria for selecting subjects
  • Your data collection methods (e.g., questionnaires, observations)
  • Your data collection procedures (e.g., operationalization, timing, and data management)
  • Your data analysis methods (e.g., statistical tests or thematic analysis)

Cite this Scribbr article


McCombes, S. (2023, June 22). Survey Research | Definition, Examples & Methods. Scribbr. Retrieved August 21, 2024, from https://www.scribbr.com/methodology/survey-research/


Survey Research – Types, Methods, Examples


Definition:

Survey Research is a quantitative research method that involves collecting standardized data from a sample of individuals or groups through the use of structured questionnaires or interviews. The data collected is then analyzed statistically to identify patterns and relationships between variables, and to draw conclusions about the population being studied.

Survey research can be used to answer a variety of questions, including:

  • What are people’s opinions about a certain topic?
  • What are people’s experiences with a certain product or service?
  • What are people’s beliefs about a certain issue?

Survey Research Methods

Common survey research methods include the following:

  • Telephone surveys: A survey research method where questions are administered to respondents over the phone, often used in market research or political polling.
  • Face-to-face surveys: A survey research method where questions are administered to respondents in person, often used in social or health research.
  • Mail surveys: A survey research method where questionnaires are sent to respondents through mail, often used in customer satisfaction or opinion surveys.
  • Online surveys: A survey research method where questions are administered to respondents through online platforms, often used in market research or customer feedback.
  • Email surveys: A survey research method where questionnaires are sent to respondents through email, often used in customer satisfaction or opinion surveys.
  • Mixed-mode surveys: A survey research method that combines two or more survey modes, often used to increase response rates or reach diverse populations.
  • Computer-assisted surveys: A survey research method that uses computer technology to administer or collect survey data, often used in large-scale surveys or data collection.
  • Interactive voice response surveys: A survey research method where respondents answer questions through a touch-tone telephone system, often used in automated customer satisfaction or opinion surveys.
  • Mobile surveys: A survey research method where questions are administered to respondents through mobile devices, often used in market research or customer feedback.
  • Group-administered surveys: A survey research method where questions are administered to a group of respondents simultaneously, often used in education or training evaluation.
  • Web-intercept surveys: A survey research method where questions are administered to website visitors, often used in website or user experience research.
  • In-app surveys: A survey research method where questions are administered to users of a mobile application, often used in mobile app or user experience research.
  • Social media surveys: A survey research method where questions are administered to respondents through social media platforms, often used in social media or brand awareness research.
  • SMS surveys: A survey research method where questions are administered to respondents through text messaging, often used in customer feedback or opinion surveys.
  • Mixed-method surveys: A survey research method that combines both qualitative and quantitative data collection methods, often used in exploratory or mixed-method research.
  • Drop-off surveys: A survey research method where respondents are provided with a survey questionnaire and asked to return it at a later time or through a designated drop-off location.
  • Intercept surveys: A survey research method where respondents are approached in public places and asked to participate in a survey, often used in market research or customer feedback.
  • Hybrid surveys: A survey research method that combines two or more survey modes, data sources, or research methods, often used in complex or multi-dimensional research questions.

Types of Survey Research

There are several types of survey research that can be used to collect data from a sample of individuals or groups. The following are common types of survey research:

  • Cross-sectional survey: A type of survey research that gathers data from a sample of individuals at a specific point in time, providing a snapshot of the population being studied.
  • Longitudinal survey: A type of survey research that gathers data from the same sample of individuals over an extended period of time, allowing researchers to track changes or trends in the population being studied.
  • Panel survey: A type of longitudinal survey research that tracks the same sample of individuals over time, typically collecting data at multiple points in time.
  • Epidemiological survey: A type of survey research that studies the distribution and determinants of health and disease in a population, often used to identify risk factors and inform public health interventions.
  • Observational survey: A type of survey research that collects data through direct observation of individuals or groups, often used in behavioral or social research.
  • Correlational survey: A type of survey research that measures the degree of association or relationship between two or more variables, often used to identify patterns or trends in data.
  • Experimental survey: A type of survey research that involves manipulating one or more variables to observe the effect on an outcome, often used to test causal hypotheses.
  • Descriptive survey: A type of survey research that describes the characteristics or attributes of a population or phenomenon, often used in exploratory research or to summarize existing data.
  • Diagnostic survey: A type of survey research that assesses the current state or condition of an individual or system, often used in health or organizational research.
  • Explanatory survey: A type of survey research that seeks to explain or understand the causes or mechanisms behind a phenomenon, often used in social or psychological research.
  • Process evaluation survey: A type of survey research that measures the implementation and outcomes of a program or intervention, often used in program evaluation or quality improvement.
  • Impact evaluation survey: A type of survey research that assesses the effectiveness or impact of a program or intervention, often used to inform policy or decision-making.
  • Customer satisfaction survey: A type of survey research that measures the satisfaction or dissatisfaction of customers with a product, service, or experience, often used in marketing or customer service research.
  • Market research survey: A type of survey research that collects data on consumer preferences, behaviors, or attitudes, often used in market research or product development.
  • Public opinion survey: A type of survey research that measures the attitudes, beliefs, or opinions of a population on a specific issue or topic, often used in political or social research.
  • Behavioral survey: A type of survey research that measures actual behavior or actions of individuals, often used in health or social research.
  • Attitude survey: A type of survey research that measures the attitudes, beliefs, or opinions of individuals, often used in social or psychological research.
  • Opinion poll: A type of survey research that measures the opinions or preferences of a population on a specific issue or topic, often used in political or media research.
  • Ad hoc survey: A type of survey research that is conducted for a specific purpose or research question, often used in exploratory research or to answer a specific research question.

Types Based on Methodology

Based on methodology, survey research is divided into two types:

Quantitative Survey Research

Qualitative Survey Research

Quantitative survey research is a method of collecting numerical data from a sample of participants through the use of standardized surveys or questionnaires. The purpose of quantitative survey research is to gather empirical evidence that can be analyzed statistically to draw conclusions about a particular population or phenomenon.

In quantitative survey research, the questions are structured and pre-determined, often utilizing closed-ended questions, where participants are given a limited set of response options to choose from. This approach allows for efficient data collection and analysis, as well as the ability to generalize the findings to a larger population.

Quantitative survey research is often used in market research, social sciences, public health, and other fields where numerical data is needed to make informed decisions and recommendations.

Qualitative survey research is a method of collecting non-numerical data from a sample of participants through the use of open-ended questions or semi-structured interviews. The purpose of qualitative survey research is to gain a deeper understanding of the experiences, perceptions, and attitudes of participants towards a particular phenomenon or topic.

In qualitative survey research, the questions are open-ended, allowing participants to share their thoughts and experiences in their own words. This approach allows for a rich and nuanced understanding of the topic being studied, and can provide insights that are difficult to capture through quantitative methods alone.

Qualitative survey research is often used in social sciences, education, psychology, and other fields where a deeper understanding of human experiences and perceptions is needed to inform policy, practice, or theory.

Data Analysis Methods

There are several Survey Research Data Analysis Methods that researchers may use, including:

  • Descriptive statistics: This method is used to summarize and describe the basic features of the survey data, such as the mean, median, mode, and standard deviation. These statistics can help researchers understand the distribution of responses and identify any trends or patterns.
  • Inferential statistics: This method is used to make inferences about the larger population based on the data collected in the survey. Common inferential statistical methods include hypothesis testing, regression analysis, and correlation analysis.
  • Factor analysis: This method is used to identify underlying factors or dimensions in the survey data. This can help researchers simplify the data and identify patterns and relationships that may not be immediately apparent.
  • Cluster analysis: This method is used to group similar respondents together based on their survey responses. This can help researchers identify subgroups within the larger population and understand how different groups may differ in their attitudes, behaviors, or preferences.
  • Structural equation modeling: This method is used to test complex relationships between variables in the survey data. It can help researchers understand how different variables may be related to one another and how they may influence one another.
  • Content analysis: This method is used to analyze open-ended responses in the survey data. Researchers may use software to identify themes or categories in the responses, or they may manually review and code the responses.
  • Text mining: This method is used to analyze text-based survey data, such as responses to open-ended questions. Researchers may use software to identify patterns and themes in the text, or they may manually review and code the text.

Applications of Survey Research

Here are some common applications of survey research:

  • Market Research: Companies use survey research to gather insights about customer needs, preferences, and behavior. These insights are used to create marketing strategies and develop new products.
  • Public Opinion Research: Governments and political parties use survey research to understand public opinion on various issues. This information is used to develop policies and make decisions.
  • Social Research: Survey research is used in social research to study social trends, attitudes, and behavior. Researchers use survey data to explore topics such as education, health, and social inequality.
  • Academic Research: Survey research is used in academic research to study various phenomena. Researchers use survey data to test theories, explore relationships between variables, and draw conclusions.
  • Customer Satisfaction Research: Companies use survey research to gather information about customer satisfaction with their products and services. This information is used to improve customer experience and retention.
  • Employee Surveys: Employers use survey research to gather feedback from employees about their job satisfaction, working conditions, and organizational culture. This information is used to improve employee retention and productivity.
  • Health Research: Survey research is used in health research to study topics such as disease prevalence, health behaviors, and healthcare access. Researchers use survey data to develop interventions and improve healthcare outcomes.

Examples of Survey Research

Here are some real-time examples of survey research:

  • COVID-19 Pandemic Surveys: Since the outbreak of the COVID-19 pandemic, surveys have been conducted to gather information about public attitudes, behaviors, and perceptions related to the pandemic. Governments and healthcare organizations have used this data to develop public health strategies and messaging.
  • Political Polls During Elections: During election seasons, surveys are used to measure public opinion on political candidates, policies, and issues in real-time. This information is used by political parties to develop campaign strategies and make decisions.
  • Customer Feedback Surveys: Companies often use real-time customer feedback surveys to gather insights about customer experience and satisfaction. This information is used to improve products and services quickly.
  • Event Surveys: Organizers of events such as conferences and trade shows often use surveys to gather feedback from attendees in real-time. This information can be used to improve future events and make adjustments during the current event.
  • Website and App Surveys: Website and app owners use surveys to gather real-time feedback from users about the functionality, user experience, and overall satisfaction with their platforms. This feedback can be used to improve the user experience and retain customers.
  • Employee Pulse Surveys: Employers use real-time pulse surveys to gather feedback from employees about their work experience and overall job satisfaction. This feedback is used to make changes in real-time to improve employee retention and productivity.

Purpose of Survey Research

The purpose of survey research is to gather data and insights from a representative sample of individuals. Survey research allows researchers to collect data quickly and efficiently from a large number of people, making it a valuable tool for understanding attitudes, behaviors, and preferences.

Here are some common purposes of survey research:

  • Descriptive Research: Survey research is often used to describe characteristics of a population or a phenomenon. For example, a survey could be used to describe the characteristics of a particular demographic group, such as age, gender, or income.
  • Exploratory Research: Survey research can be used to explore new topics or areas of research. Exploratory surveys are often used to generate hypotheses or identify potential relationships between variables.
  • Explanatory Research: Survey research can be used to explain relationships between variables. For example, a survey could be used to determine whether there is a relationship between educational attainment and income.
  • Evaluation Research: Survey research can be used to evaluate the effectiveness of a program or intervention. For example, a survey could be used to evaluate the impact of a health education program on behavior change.
  • Monitoring Research: Survey research can be used to monitor trends or changes over time. For example, a survey could be used to monitor changes in attitudes towards climate change or political candidates over time.

When to use Survey Research

There are certain circumstances where survey research is particularly appropriate. Here are some situations where survey research may be useful:

  • When the research question involves attitudes, beliefs, or opinions: Survey research is particularly useful for understanding attitudes, beliefs, and opinions on a particular topic. For example, a survey could be used to understand public opinion on a political issue.
  • When the research question involves behaviors or experiences: Survey research can also be useful for understanding behaviors and experiences. For example, a survey could be used to understand the prevalence of a particular health behavior.
  • When a large sample size is needed: Survey research allows researchers to collect data from a large number of people quickly and efficiently. This makes it a useful method when a large sample size is needed to ensure statistical validity.
  • When the research question is time-sensitive: Survey research can be conducted quickly, which makes it a useful method when the research question is time-sensitive. For example, a survey could be used to understand public opinion on a breaking news story.
  • When the research question involves a geographically dispersed population: Survey research can be conducted online, which makes it a useful method when the population of interest is geographically dispersed.

How to Conduct Survey Research

Conducting survey research involves several steps that need to be carefully planned and executed. Here is a general overview of the process:

  • Define the research question: The first step in conducting survey research is to clearly define the research question. The research question should be specific, measurable, and relevant to the population of interest.
  • Develop a survey instrument: The next step is to develop a survey instrument. This can be done using various methods, such as online survey tools or paper surveys. The survey instrument should be designed to elicit the information needed to answer the research question, and should be pre-tested with a small sample of individuals.
  • Select a sample: The sample is the group of individuals who will be invited to participate in the survey. The sample should be representative of the population of interest, and the size of the sample should be sufficient to ensure statistical validity.
  • Administer the survey: The survey can be administered in various ways, such as online, by mail, or in person. The method of administration should be chosen based on the population of interest and the research question.
  • Analyze the data: Once the survey data is collected, it needs to be analyzed. This involves summarizing the data using statistical methods, such as frequency distributions or regression analysis.
  • Draw conclusions: The final step is to draw conclusions based on the data analysis. This involves interpreting the results and answering the research question.
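The "analyze the data" step above often starts with a frequency distribution. As a minimal sketch (the question and the answer labels are hypothetical), this can be done with a few lines of Python:

```python
from collections import Counter

def frequency_distribution(responses):
    """Summarize closed-ended answers as (count, percentage) pairs."""
    counts = Counter(responses)
    total = len(responses)
    return {answer: (n, round(100 * n / total, 1)) for answer, n in counts.items()}

# Hypothetical responses to a single closed-ended question
answers = ["Satisfied", "Neutral", "Satisfied", "Dissatisfied", "Satisfied"]
print(frequency_distribution(answers))  # {'Satisfied': (3, 60.0), ...}
```

More advanced methods such as regression analysis would typically be done in a statistics package, but a table like this is usually the first summary a researcher produces.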

Advantages of Survey Research

There are several advantages to using survey research, including:

  • Efficient data collection: Survey research allows researchers to collect data quickly and efficiently from a large number of people. This makes it a useful method for gathering information on a wide range of topics.
  • Standardized data collection: Surveys are typically standardized, which means that all participants receive the same questions in the same order. This ensures that the data collected is consistent and reliable.
  • Cost-effective: Surveys can be conducted online, by mail, or in person, which makes them a cost-effective method of data collection.
  • Anonymity: Participants can remain anonymous when responding to a survey. This can encourage participants to be more honest and open in their responses.
  • Easy comparison: Surveys allow for easy comparison of data between different groups or over time. This makes it possible to identify trends and patterns in the data.
  • Versatility: Surveys can be used to collect data on a wide range of topics, including attitudes, beliefs, behaviors, and preferences.

Limitations of Survey Research

Here are some of the main limitations of survey research:

  • Limited depth: Surveys are typically designed to collect quantitative data, which means that they do not provide much depth or detail about people’s experiences or opinions. This can limit the insights that can be gained from the data.
  • Potential for bias: Surveys can be affected by various biases, including selection bias, response bias, and social desirability bias. These biases can distort the results and make them less accurate.
  • Limited validity: Surveys are only as valid as the questions they ask. If the questions are poorly designed or ambiguous, the results may not accurately reflect the respondents’ attitudes or behaviors.
  • Limited generalizability: Survey results are only generalizable to the population from which the sample was drawn. If the sample is not representative of the population, the results may not be generalizable to the larger population.
  • Limited ability to capture context: Surveys typically do not capture the context in which attitudes or behaviors occur. This can make it difficult to understand the reasons behind the responses.
  • Limited ability to capture complex phenomena: Surveys are not well-suited to capture complex phenomena, such as emotions or the dynamics of interpersonal relationships.

The following is an example of a sample survey:

Welcome to our Survey Research Page! We value your opinions and appreciate your participation in this survey. Please answer the questions below as honestly and thoroughly as possible.

1. What is your age?

  • A) Under 18
  • G) 65 or older

2. What is your highest level of education completed?

  • A) Less than high school
  • B) High school or equivalent
  • C) Some college or technical school
  • D) Bachelor’s degree
  • E) Graduate or professional degree

3. What is your current employment status?

  • A) Employed full-time
  • B) Employed part-time
  • C) Self-employed
  • D) Unemployed

4. How often do you use the internet per day?

  •  A) Less than 1 hour
  • B) 1-3 hours
  • C) 3-5 hours
  • D) 5-7 hours
  • E) More than 7 hours

5. How often do you engage in social media per day?

6. Have you ever participated in a survey research study before?

7. If you have participated in a survey research study before, how was your experience?

  • A) Excellent
  • E) Very poor

8. What are some of the topics that you would be interested in participating in a survey research study about?

(Open-ended response)

9. How often would you be willing to participate in survey research studies?

  • A) Once a week
  • B) Once a month
  • C) Once every 6 months
  • D) Once a year

10. Any additional comments or suggestions?

Thank you for taking the time to complete this survey. Your feedback is important to us and will help us improve our survey research efforts.

About the author


Muhammad Hassan

Researcher, Academic Writer, Web developer



Survey Research: Definition, Examples and Methods


Survey Research is a quantitative research method used for collecting data from a set of respondents. It has been perhaps one of the most used methodologies in the industry for several years due to the multiple benefits and advantages that it has when collecting and analyzing data.


In this article, you will learn everything about survey research, such as types, methods, and examples.

Survey Research Definition

Survey Research is defined as the process of conducting research using surveys that researchers send to survey respondents. The data collected from surveys is then statistically analyzed to draw meaningful research conclusions. In the 21st century, every organization is eager to understand what its customers think about its products or services so it can make better business decisions. Researchers can conduct research in multiple ways, but surveys have proven to be one of the most effective and trustworthy research methods. An online survey is a method for extracting information about a significant business matter from an individual or a group of individuals. It consists of structured survey questions that motivate the participants to respond. Credible survey research can give these businesses access to a vast information bank. Organizations in media, other companies, and even governments rely on survey research to obtain accurate data.

The traditional definition of survey research is a quantitative method for collecting information from a pool of respondents by asking multiple survey questions. This research type includes the recruitment of individuals, the collection of data, and the analysis of data. It’s useful for researchers who aim to communicate new features or trends to their respondents.

Generally, it’s the primary step towards obtaining quick information about mainstream topics; more rigorous and detailed quantitative methods (such as surveys and polls) or qualitative methods (such as focus groups and on-call interviews) can then follow. There are many situations where researchers can conduct research using a blend of both qualitative and quantitative strategies.


Survey Research Methods

Survey research methods can be classified based on two critical factors: the survey research tool and the time involved in conducting the research. There are three main survey research methods, divided based on the medium of conducting survey research:

  • Online/ Email:   Online survey research is one of the most popular survey research methods today. The survey cost involved in online survey research is extremely minimal, and the responses gathered are highly accurate.
  • Phone:  Survey research conducted over the telephone ( CATI survey ) can be useful in collecting data from a more extensive section of the target population. However, both the money and the time invested in phone surveys tend to be higher than for other mediums.
  • Face-to-face:  Researchers conduct face-to-face in-depth interviews in situations where there is a complicated problem to solve. The response rate for this method is the highest, but it can be costly.

Further, based on the time taken, survey research can be classified into two methods:

  • Longitudinal survey research:  Longitudinal survey research involves conducting survey research over a continuum of time, spread across years or even decades. The data collected using this survey research method from one time period to another can be qualitative or quantitative. Respondent behavior, preferences, and attitudes are continuously observed over time to analyze reasons for a change in behavior or preferences. For example, suppose a researcher intends to learn about the eating habits of teenagers. In that case, he/she will follow a sample of teenagers over a considerable period to ensure that the collected information is reliable. A longitudinal study often follows up on an initial cross-sectional survey.
  • Cross-sectional survey research:  Researchers conduct a cross-sectional survey to collect insights from a target audience at a particular time interval. This survey research method is implemented in various sectors such as retail, education, healthcare, SME businesses, etc. Cross-sectional studies can either be descriptive or analytical. It is quick and helps researchers collect information in a brief period. Researchers rely on the cross-sectional survey research method in situations where descriptive analysis of a subject is required.

Survey research is also bifurcated according to the sampling methods used to form samples for research: probability and non-probability sampling. Probability sampling is a sampling method in which the researcher chooses elements based on probability theory, so every individual in the population has a known chance of being part of the survey research sample. There are various probability sampling methods, such as simple random sampling, systematic sampling, cluster sampling, stratified random sampling, etc. Non-probability sampling is a sampling method where the researcher uses his/her knowledge and experience to form samples.


The various non-probability sampling techniques are:

  • Convenience sampling
  • Snowball sampling
  • Consecutive sampling
  • Judgemental sampling
  • Quota sampling
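The probability methods mentioned above can be sketched with Python's standard library. This is a minimal illustration (the sampling frame of respondents is hypothetical):

```python
import random

def simple_random_sample(population, n, seed=None):
    """Probability sampling: every member has an equal chance of selection."""
    rng = random.Random(seed)
    return rng.sample(population, n)

def systematic_sample(population, n):
    """Probability sampling: take every k-th member of the sampling frame."""
    k = len(population) // n
    return population[::k][:n]

frame = [f"respondent_{i}" for i in range(1000)]  # hypothetical sampling frame
print(simple_random_sample(frame, 5, seed=42))
print(systematic_sample(frame, 5))  # every 200th respondent
```

Non-probability techniques such as convenience or snowball sampling depend on researcher judgment and access, so they are not usually expressed as an algorithm in this way.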

Process of implementing survey research methods:

  • Decide survey questions:  Brainstorm and put together valid survey questions that are grammatically and logically appropriate. Understanding the objective and expected outcomes of the survey helps a lot. There are many surveys where details of responses are not as important as gaining insights about what customers prefer from the provided options. In such situations, a researcher can include multiple-choice questions or closed-ended questions . Whereas, if researchers need to obtain details about specific issues, they can include open-ended questions in the questionnaire. Ideally, the surveys should include a smart balance of open-ended and closed-ended questions. Use survey questions like Likert Scale , Semantic Scale, Net Promoter Score question, etc., to avoid fence-sitting.


  • Finalize a target audience:  Send out relevant surveys as per the target audience and filter out irrelevant questions as per the requirement. Deciding on a sample from the target population is instrumental to the survey research; this way, results reflect the desired market and can be generalized to the entire population.


  • Send out surveys via decided mediums:  Distribute the surveys to the target audience and patiently wait for the feedback and comments; this is the most crucial step of the survey research. The survey needs to be scheduled keeping in mind the nature of the target audience and its regions. Surveys can be conducted via email, embedded in a website, shared via social media, etc., to gain maximum responses.
  • Analyze survey results:  Analyze the feedback in real-time and identify patterns in the responses which might lead to a much-needed breakthrough for your organization. GAP, TURF Analysis , Conjoint analysis, Cross tabulation, and many such survey feedback analysis methods can be used to spot and shed light on respondent behavior. Researchers can use the results to implement corrective measures to improve customer/employee satisfaction.
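Cross tabulation, one of the analysis methods named above, counts respondents for each combination of two variables. In practice a survey tool or statistics package does this, but the idea can be sketched with the standard library (the survey data below is hypothetical):

```python
from collections import defaultdict, Counter

def cross_tabulate(rows):
    """Count respondents for each (group, answer) combination."""
    table = defaultdict(Counter)
    for group, answer in rows:
        table[group][answer] += 1
    return {group: dict(counts) for group, counts in table.items()}

# Hypothetical (age_group, preferred_channel) pairs from a survey export
data = [("18-24", "Online"), ("18-24", "Online"), ("18-24", "Phone"),
        ("25-34", "Phone"), ("25-34", "Online")]
print(cross_tabulate(data))
```

Each row of the resulting table shows how one demographic group distributed its answers, which is exactly the pattern-spotting the analysis step describes.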

Reasons to conduct survey research

The most crucial and integral reason for conducting market research using surveys is that you can collect answers regarding specific, essential questions. You can ask these questions in multiple survey formats as per the target audience and the intent of the survey. Before designing a study, every organization must figure out the objective of carrying this out so that the study can be structured, planned, and executed to perfection.


Questions that need to be on your mind while designing a survey are:

  • What is the primary aim of conducting the survey?
  • How do you plan to utilize the collected survey data?
  • What type of decisions do you plan to take based on the points mentioned above?

There are three critical reasons why an organization must conduct survey research.

  • Understand respondent behavior to get solutions to your queries:  If you’ve carefully curated a survey, the respondents will provide insights about what they like about your organization as well as suggestions for improvement. To motivate them to respond, you must be very vocal about how secure their responses will be and how you will utilize the answers. This will push them to be 100% honest about their feedback, opinions, and comments. Online and mobile surveys have proven effective at protecting privacy, and because of this, more and more respondents feel free to put forth their feedback through these mediums.
  • Present a medium for discussion:  A survey can be the perfect platform for respondents to provide criticism or applause for an organization. Important topics like product quality or quality of customer service etc., can be put on the table for discussion. A way you can do it is by including open-ended questions where the respondents can write their thoughts. This will make it easy for you to correlate your survey to what you intend to do with your product or service.
  • Strategy for never-ending improvements:  An organization can establish the target audience’s attributes from the pilot phase of survey research . Researchers can use the criticism and feedback received from this survey to improve the product/services. Once the company successfully makes the improvements, it can send out another survey to measure the change in feedback, keeping the pilot phase as the benchmark. Through this activity, the organization can track what was effectively improved and what still needs improvement.

Survey Research Scales

There are four main scales for the measurement of variables:

  • Nominal Scale:  A nominal scale associates numbers with variables for mere naming or labeling, and the numbers usually have no other relevance. It is the most basic of the four levels of measurement.
  • Ordinal Scale:  The ordinal scale has an innate order within the variables along with labels. It establishes the rank between the variables of a scale but not the difference value between the variables.
  • Interval Scale:  The interval scale is a step ahead in comparison to the other two scales. Along with establishing a rank and name of variables, the scale also makes known the difference between the two variables. The only drawback is that there is no fixed start point of the scale, i.e., the actual zero value is absent.
  • Ratio Scale:  The ratio scale is the most advanced measurement scale, which has variables that are labeled in order and have a calculated difference between variables. In addition to what interval scale orders, this scale has a fixed starting point, i.e., the actual zero value is present.
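For example, Likert responses sit on an ordinal scale: they can be coded numerically, but because the distance between points is unknown, rank-based statistics such as the median are the strictly valid summaries. A short sketch (the labels and responses are hypothetical):

```python
import statistics

# Ordinal scale: Likert labels are ranked, but the spacing between points is unknown.
LIKERT = ["Strongly disagree", "Disagree", "Neutral", "Agree", "Strongly agree"]
RANK = {label: i + 1 for i, label in enumerate(LIKERT)}  # ordinal codes 1..5

responses = ["Agree", "Neutral", "Strongly agree", "Agree"]  # hypothetical answers
codes = [RANK[r] for r in responses]
print(statistics.median(codes))  # 4.0 -- rank-based statistics suit ordinal data
```

By contrast, a ratio-scale variable such as age in years supports the full range of arithmetic, including means and ratios.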

Benefits of survey research

When survey research is used for the right purposes and implemented properly, marketers can benefit by gaining useful, trustworthy data that they can use to improve the organization’s ROI.

Other benefits of survey research are:

  • Minimum investment:  Mobile and online surveys require minimal financial investment per respondent. Even with the gifts and other incentives provided to the people who participate in the study, online surveys are extremely economical compared to paper-based surveys.
  • Versatile sources for response collection:  You can conduct surveys via various mediums like online and mobile surveys. You can further classify them into qualitative mediums like focus groups and interviews, and quantitative mediums like customer-centric surveys. Due to the offline survey response collection option, researchers can conduct surveys in remote areas with limited internet connectivity. This can make data collection and analysis more convenient and extensive.
  • Reliable for respondents:  Surveys are extremely secure as the respondent details and responses are kept safeguarded. This anonymity makes respondents answer the survey questions candidly and with absolute honesty. An organization seeking explicit responses for its survey research must state that responses will be kept confidential.

Survey research design

Researchers implement a survey research design in cases where the cost involved is limited and details need to be accessed easily. This method is often used by small and large organizations to understand and analyze new trends, market demands, and opinions. Collecting information through a tactfully designed survey can be much more effective and productive than a casually conducted survey.

There are five stages of survey research design:

  • Decide on an aim of the research:  There can be multiple reasons for a researcher to conduct a survey, but they need to decide on a purpose for the research. This is the primary stage of survey research, as it can mold the entire path of a survey and impact its results.
  • Filter the sample from the target population:  Who to target is an essential question that a researcher should answer and keep in mind while conducting research. The precision of the results is driven by who the members of a sample are and how useful their opinions are. The quality of respondents in a sample matters more for the results than the quantity. If a researcher seeks to understand whether a product feature will work well with their target market, he/she can conduct survey research with a group of market experts for that product or technology.
  • Zero-in on a survey method:  Many qualitative and quantitative research methods can be discussed and decided. Focus groups, online interviews, surveys, polls, questionnaires, etc. can be carried out with a pre-decided sample of individuals.
  • Design the questionnaire:  What will the content of the survey be? A researcher is required to answer this question to be able to design it effectively. What will the content of the cover letter be? Or what are the survey questions of this questionnaire? Understand the target market thoroughly to create a questionnaire that targets a sample to gain insights about a survey research topic.
  • Send out surveys and analyze results:  Once the researcher decides on which questions to include in a study, they can send it across to the selected sample . Answers obtained from this survey can be analyzed to make product-related or marketing-related decisions.

Survey examples: 10 tips to design the perfect research survey

Picking the right survey design can be the key to gaining the information you need to make crucial decisions for all your research. It is essential to choose the right topic, choose the right question types, and pick a corresponding design. If this is your first time creating a survey, it can seem like an intimidating task. But with QuestionPro, each step of the process is made simple and easy.

Below are 10 Tips To Design The Perfect Research Survey:

  • Set your SMART goals:  Before conducting any market research or creating a particular plan, set your SMART Goals . What is that you want to achieve with the survey? How will you measure it promptly, and what are the results you are expecting?
  • Choose the right questions:  Designing a survey can be a tricky task. Asking the right questions may help you get the answers you are looking for and ease the task of analyzing. So, always choose those specific questions – relevant to your research.
  • Begin your survey with a generalized question:  Preferably, start your survey with a general question to understand whether the respondent uses the product or not. That also provides an excellent base and intro for your survey.
  • Enhance your survey:  Choose the best, most relevant, 15-20 questions. Frame each question as a different question type based on the kind of answer you would like to gather from each. Create a survey using different types of questions such as multiple-choice, rating scale, open-ended, etc. Look at more survey examples and four measurement scales every researcher should remember.
  • Prepare yes/no questions:  You may also want to use yes/no questions to separate people or branch them into groups of those who “have purchased” and those who “have not yet purchased” your products or services. Once you separate them, you can ask them different questions.
  • Test all electronic devices:  It becomes effortless to distribute your surveys if respondents can answer them on different electronic devices like mobiles, tablets, etc. Once you have created your survey, it’s time to TEST. You can also make any corrections if needed at this stage.
  • Distribute your survey:  Once your survey is ready, it is time to share and distribute it to the right audience. You can distribute handouts or share the survey via email, social media, and other industry-related offline/online communities.
  • Collect and analyze responses:  After distributing your survey, it is time to gather all responses. Make sure you store your results in a particular document or an Excel sheet with all the necessary categories mentioned so that you don’t lose your data. Remember, this is the most crucial stage. Segregate your responses based on demographics, psychographics, and behavior. This is because, as a researcher, you must know where your responses are coming from. It will help you to analyze, predict decisions, and help write the summary report.
  • Prepare your summary report:  Now is the time to share your analysis. At this stage, you should present all the responses gathered from the survey in a fixed format. The reader/customer must also get clarity about the goal you were trying to achieve with the study. Address questions such as: has the product or service been used/preferred? Do respondents prefer one product over another? Any recommendations?
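The segmentation step described in the tips above (splitting responses by demographics, psychographics, or behavior before analysis) can be sketched in a few lines of Python. The response records and field names here are hypothetical:

```python
from collections import defaultdict

def segment(responses, key):
    """Group survey response records by a demographic field before analysis."""
    groups = defaultdict(list)
    for record in responses:
        groups[record[key]].append(record)
    return dict(groups)

# Hypothetical response records with one demographic field and one rating
records = [
    {"age_group": "18-24", "rating": 4},
    {"age_group": "25-34", "rating": 5},
    {"age_group": "18-24", "rating": 3},
]
by_age = segment(records, "age_group")
print({group: len(rs) for group, rs in by_age.items()})  # {'18-24': 2, '25-34': 1}
```

Once responses are grouped this way, per-segment statistics (averages, frequency tables) fall out naturally, which is what makes the summary report easier to write.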

Having a tool that helps you perform all the necessary steps of this type of study is a vital part of any project. At QuestionPro, we have helped more than 10,000 clients around the world carry out data collection in a simple and effective way, in addition to offering a wide range of solutions to take advantage of this data in the best possible way.

From dashboards, advanced analysis tools, automation, and dedicated functions, in QuestionPro, you will find everything you need to execute your research projects effectively. Uncover insights that matter the most!



Doing Survey Research | A Step-by-Step Guide & Examples

Published on 6 May 2022 by Shona McCombes . Revised on 10 October 2022.

Step 1: Define the population and sample

Before you start conducting survey research, you should already have a clear research question that defines what you want to find out. Based on this question, you need to determine exactly who you will target to participate in the survey.

Populations

The target population is the specific group of people that you want to find out about. This group can be very broad or relatively narrow. For example:

  • The population of Brazil
  • University students in the UK
  • Second-generation immigrants in the Netherlands
  • Customers of a specific company aged 18 to 24
  • British transgender women over the age of 50

Your survey should aim to produce results that can be generalised to the whole population. That means you need to carefully define exactly who you want to draw conclusions about.

It’s rarely possible to survey the entire population of your research – it would be very difficult to get a response from every person in Brazil or every university student in the UK. Instead, you will usually survey a sample from the population.

The sample size depends on how big the population is. You can use an online sample calculator to work out how many responses you need.

There are many sampling methods that allow you to generalise to broad populations. In general, though, the sample should aim to be representative of the population as a whole. The larger and more representative your sample, the more valid your conclusions.
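The sample-size calculation that online calculators perform is commonly based on Cochran's formula. A minimal sketch, assuming a 95% confidence level and the conservative worst-case proportion p = 0.5:

```python
import math

def sample_size(z=1.96, p=0.5, margin=0.05):
    """Cochran's formula: respondents needed for a given margin of error.

    z: z-score (1.96 for 95% confidence), p: expected proportion
    (0.5 is the conservative worst case), margin: acceptable error.
    """
    return math.ceil(z**2 * p * (1 - p) / margin**2)

def adjust_for_population(n, population):
    """Finite population correction for small populations."""
    return math.ceil(n / (1 + (n - 1) / population))

print(sample_size())                     # 385 -- the classic 95% / ±5% figure
print(adjust_for_population(385, 1000))  # fewer respondents needed when N = 1000
```

Note that for large populations the required sample size barely changes, which is why the population correction only matters when the target population is small.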

Step 2: Decide on the type of survey

There are two main types of survey:

  • A questionnaire , where a list of questions is distributed by post, online, or in person, and respondents fill it out themselves
  • An interview , where the researcher asks a set of questions by phone or in person and records the responses

Which type you choose depends on the sample size and location, as well as the focus of the research.

Questionnaires

Sending out a paper survey by post is a common method of gathering demographic information (for example, in a government census of the population).

  • You can easily access a large sample.
  • You have some control over who is included in the sample (e.g., residents of a specific region).
  • The response rate is often low.

Online surveys are a popular choice for students doing dissertation research , due to the low cost and flexibility of this method. There are many online tools available for constructing surveys, such as SurveyMonkey and Google Forms .

  • You can quickly access a large sample without constraints on time or location.
  • The data is easy to process and analyse.
  • The anonymity and accessibility of online surveys mean you have less control over who responds.

If your research focuses on a specific location, you can distribute a written questionnaire to be completed by respondents on the spot. For example, you could approach the customers of a shopping centre or ask all students to complete a questionnaire at the end of a class.

  • You can screen respondents to make sure only people in the target population are included in the sample.
  • You can collect time- and location-specific data (e.g., the opinions of a shop’s weekday customers).
  • The sample size will be smaller, so this method is less suitable for collecting data on broad populations.

Oral interviews are a useful method for smaller sample sizes. They allow you to gather more in-depth information on people’s opinions and preferences. You can conduct interviews by phone or in person.

  • You have personal contact with respondents, so you know exactly who will be included in the sample in advance.
  • You can clarify questions and ask for follow-up information when necessary.
  • The lack of anonymity may cause respondents to answer less honestly, and there is more risk of researcher bias.

Like questionnaires, interviews can be used to collect quantitative data: the researcher records each response as a category or rating and statistically analyses the results. But they are more commonly used to collect qualitative data: the interviewees’ full responses are transcribed and analysed individually to gain a richer understanding of their opinions and feelings.

Next, you need to decide which questions you will ask and how you will ask them. It’s important to consider:

  • The type of questions
  • The content of the questions
  • The phrasing of the questions
  • The ordering and layout of the survey

Open-ended vs closed-ended questions

There are two main forms of survey questions: open-ended and closed-ended. Many surveys use a combination of both.

Closed-ended questions give the respondent a predetermined set of answers to choose from. A closed-ended question can include:

  • A binary answer (e.g., yes/no or agree/disagree)
  • A scale (e.g., a Likert scale with five points ranging from strongly agree to strongly disagree)
  • A list of options with a single answer possible (e.g., age categories)
  • A list of options with multiple answers possible (e.g., leisure interests)

Closed-ended questions are best for quantitative research. They provide you with numerical data that can be statistically analysed to find patterns, trends, and correlations.
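As a small illustration of the numerical data such questions produce, closed-ended responses can be tallied directly into counts and percentages. The question and answers below are invented for the example:

```python
from collections import Counter

# Hypothetical answers to a five-point agreement question
responses = ["agree", "strongly agree", "agree", "neutral", "disagree",
             "agree", "strongly agree", "neutral", "agree", "agree"]

counts = Counter(responses)
total = len(responses)
for option, n in counts.most_common():
    print(f"{option:15s} {n:2d} ({100 * n / total:.0f}%)")
```

This kind of frequency table is usually the first step before any further statistical analysis.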

Open-ended questions are best for qualitative research. This type of question has no predetermined answers to choose from. Instead, the respondent answers in their own words.

Open questions are most common in interviews, but you can also use them in questionnaires. They are often useful as follow-up questions to ask for more detailed explanations of responses to the closed questions.

The content of the survey questions

To ensure the validity and reliability of your results, you need to carefully consider each question in the survey. All questions should be narrowly focused with enough context for the respondent to answer accurately. Avoid questions that are not directly relevant to the survey’s purpose.

When constructing closed-ended questions, ensure that the options cover all possibilities. If you include a list of options that isn’t exhaustive, you can add an ‘other’ field.

Phrasing the survey questions

In terms of language, the survey questions should be as clear and precise as possible. Tailor the questions to your target population, keeping in mind their level of knowledge of the topic.

Use language that respondents will easily understand, and avoid words with vague or ambiguous meanings. Make sure your questions are phrased neutrally, with no bias towards one answer or another.

Ordering the survey questions

The questions should be arranged in a logical order. Start with easy, non-sensitive, closed-ended questions that will encourage the respondent to continue.

If the survey covers several different topics or themes, group together related questions. You can divide a questionnaire into sections to help respondents understand what is being asked in each part.

If a question refers back to or depends on the answer to a previous question, they should be placed directly next to one another.

Before you start, create a clear plan for where, when, how, and with whom you will conduct the survey. Determine in advance how many responses you require and how you will gain access to the sample.

When you are satisfied that you have created a strong research design suitable for answering your research questions, you can conduct the survey through your method of choice – by post, online, or in person.

There are many methods of analysing the results of your survey. First you have to process the data, usually with the help of a computer program to sort all the responses. You should also cleanse the data by removing incomplete or incorrectly completed responses.
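A minimal sketch of the cleansing step in plain Python (in practice you would typically use a spreadsheet or statistics package; the field names are invented). Responses missing a required field are dropped:

```python
REQUIRED = {"age", "satisfaction"}  # fields every valid response must contain

raw_responses = [
    {"age": 34, "satisfaction": 4},
    {"age": 27},                      # incomplete: no satisfaction rating
    {"age": 51, "satisfaction": 2},
    {"satisfaction": 5},              # incomplete: no age
]

# Keep only responses that contain every required field with a value
clean = [r for r in raw_responses
         if REQUIRED <= r.keys() and all(r[f] is not None for f in REQUIRED)]

print(len(clean))  # 2 valid responses remain
</imports>```

Documenting how many responses were removed, and why, is part of transparent reporting.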

If you asked open-ended questions, you will have to code the responses by assigning labels to each response and organising them into categories or themes. You can also use more qualitative methods, such as thematic analysis, which is especially suitable for analysing interviews.
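Coding ultimately relies on human judgement, but a keyword-based first pass can illustrate the idea of mapping free text to labels. The codebook categories and keywords below are entirely made up:

```python
# Category -> keywords that trigger it (illustrative only)
codebook = {
    "price":   ["expensive", "cheap", "cost"],
    "service": ["staff", "friendly", "rude"],
    "quality": ["broken", "durable", "well made"],
}

def code_response(text):
    """Return every category whose keywords appear in the response."""
    text = text.lower()
    labels = [cat for cat, kws in codebook.items() if any(k in text for k in kws)]
    return labels or ["uncoded"]

print(code_response("The staff were friendly but it was too expensive"))
# ['price', 'service']
```

Responses labelled "uncoded" would be reviewed by hand and either assigned to an existing category or used to create a new one.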

Statistical analysis is usually conducted using programs like SPSS or Stata. The same set of survey data can be subject to many analyses.
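In SPSS or Stata such tests are a single command; to show what one of them actually computes, here is the chi-square statistic for a 2×2 cross-tabulation worked out from first principles (the counts are invented for the example):

```python
# Hypothetical cross-tabulation: rows = two respondent groups, columns = yes/no answer
observed = [[30, 10],
            [20, 40]]

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
grand_total = sum(row_totals)

chi_square = 0.0
for i, row in enumerate(observed):
    for j, obs in enumerate(row):
        # Expected count under independence of row and column variables
        expected = row_totals[i] * col_totals[j] / grand_total
        chi_square += (obs - expected) ** 2 / expected

print(round(chi_square, 2))  # 16.67 -> compared against the chi-square distribution, df = 1
```

A value this large, with one degree of freedom, would indicate a statistically significant association between the two variables.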

Finally, when you have collected and analysed all the necessary data, you will write it up as part of your thesis, dissertation, or research paper.

In the methodology section, you describe exactly how you conducted the survey. You should explain the types of questions you used, the sampling method, when and where the survey took place, and the response rate. You can include the full questionnaire as an appendix and refer to it in the text if relevant.

Then introduce the analysis by describing how you prepared the data and the statistical methods you used to analyse it. In the results section, you summarise the key results from your analysis.

A Likert scale is a rating scale that quantitatively assesses opinions, attitudes, or behaviours. It is made up of four or more questions that measure a single attitude or trait when response scores are combined.

To use a Likert scale in a survey, you present participants with Likert-type questions or statements, and a continuum of items, usually with five or seven possible responses, to capture their degree of agreement.

Individual Likert-type questions are generally considered ordinal data, because the items have clear rank order, but don’t have an even distribution.

Overall Likert scale scores are sometimes treated as interval data. These scores are considered to have directionality and even spacing between them.
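For example, an overall scale score is the sum (or mean) of a respondent's item scores, with negatively worded items reverse-coded first so that all items point the same way. The items and answers below are invented:

```python
# 5-point items; True marks negatively worded items that must be reverse-coded
items = [
    ("I enjoy my job", False),
    ("My workload is manageable", False),
    ("I often feel stressed at work", True),  # negatively worded
]
answers = [5, 4, 2]  # 1 = strongly disagree ... 5 = strongly agree

def scale_score(answers, items, points=5):
    """Sum item scores after reverse-coding negatively worded items."""
    scores = [(points + 1 - a) if reverse else a
              for a, (_, reverse) in zip(answers, items)]
    return sum(scores)

print(scale_score(answers, items))  # 5 + 4 + (6 - 2) = 13
```

It is these combined scores, not the individual item responses, that are sometimes treated as interval data.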

The type of data determines what statistical tests you should use to analyse your data.

A questionnaire is a data collection tool or instrument, while a survey is an overarching research method that involves collecting and analysing data from people using questionnaires.

McCombes, S. (2022, October 10). Doing Survey Research | A Step-by-Step Guide & Examples. Scribbr. Retrieved 21 August 2024, from https://www.scribbr.co.uk/research-methods/surveys/

J Korean Med Sci. 2023 Dec 11; 38(48) (PMC10713437)

Designing, Conducting, and Reporting Survey Studies: A Primer for Researchers

Olena Zimba

1 Department of Clinical Rheumatology and Immunology, University Hospital in Krakow, Krakow, Poland.

2 National Institute of Geriatrics, Rheumatology and Rehabilitation, Warsaw, Poland.

3 Department of Internal Medicine N2, Danylo Halytsky Lviv National Medical University, Lviv, Ukraine.

Armen Yuri Gasparyan

4 Departments of Rheumatology and Research and Development, Dudley Group NHS Foundation Trust (Teaching Trust of the University of Birmingham, UK), Russells Hall Hospital, Dudley, UK.

Survey studies have become instrumental in accumulating evidence in rapidly developing medical disciplines such as medical education, public health, and nursing. The global medical community has seen an upsurge of surveys covering the experience and perceptions of health specialists, patients, and public representatives in the peri-pandemic coronavirus disease 2019 period. Surveys can now play a central role in increasing research activity in non-mainstream science countries where limited research funding and other barriers hinder the growth of science. Planning a survey starts with an overview of related reviews and other publications, which helps to design questionnaires that comprehensively cover all relevant points. The validity and reliability of questionnaires rely on input from experts and potential responders, who may suggest pertinent revisions to produce forms with an attractive design, easily understandable questions, and correctly ordered items that appeal to target respondents. Numerous online platforms, such as Google Forms and SurveyMonkey, now make it possible to moderate online surveys and collect responses from large numbers of responders. Online surveys benefit from disseminating questionnaires via social media and other online channels, which facilitates their internationalization and the participation of large groups of responders. Survey reporting can be arranged in line with related recommendations and reporting standards, all of which have their strengths and limitations. The current article overviews the available recommendations and presents pointers on designing, conducting, and reporting surveys.

INTRODUCTION

Surveys are increasingly popular research studies that are aimed at collecting and analyzing opinions of diverse subject groups at certain periods. Initially and predominantly employed for applied social science research, 1 surveys have maintained their social dimension and transformed into indispensable tools for analyzing knowledge, perceptions, prevalence of clinical conditions, and practices in the medical sciences. 2 In rapidly developing disciplines with social dimensions such as medical education, public health, and nursing, online surveys have become essential for monitoring and auditing healthcare and education services 3 , 4 and generating new hypotheses and research questions. 5 In non-mainstream science countries with uninterrupted Internet access, online surveys have also been praised as useful studies for increasing research activities. 6

In 2016, the Medical Subject Headings (MeSH) vocabulary of the US National Library of Medicine introduced "surveys and questionnaires" as a structured keyword, defining survey studies as "collections of data obtained from voluntary subjects" ( https://www.ncbi.nlm.nih.gov/mesh/?term=surveys+and+questionnaires ). Such studies are instrumental in the absence of evidence from randomized controlled trials, systematic reviews, and cohort studies. Tagging survey reports with this MeSH term is advisable for increasing the retrieval of relevant documents while searching through Medline, Scopus, and other global databases.

Surveys are relatively easy to conduct by distributing web-based and non-web-based questionnaires to large groups of potential responders. The ease of conduct primarily depends on the way of approaching potential respondents. Face-to-face interviews, postal mail, e-mails, phone calls, and social media posts can all be employed to reach numerous potential respondents. Digitization and the popularization of social media have improved the distribution of questionnaires, expanded respondents' engagement, enabled swift data processing, and supported the globalization of survey studies. 7

SURVEY REPORTING GUIDANCE

Despite the ease of survey studies and their importance for maintaining research activities across academic disciplines, their methodological quality, reproducibility, and implications vary widely. Deficiencies in designing and reporting are the main reason for the inefficiency of some surveys. For instance, systematic analyses of survey methodologies in nephrology, transfusion medicine, and radiology have indicated that less than one-third of related reports provide valid and reliable data. 8 , 9 , 10 The absence of discussion of respondents' representativeness, reasons for nonresponse, and generalizability of the results has also been pinpointed as a drawback of some survey reports. These deficiencies justify the need to design surveys and process data in line with reporting recommendations, including those listed on the EQUATOR Network website ( https://www.equator-network.org/ ).

Arguably, survey studies lack discipline-specific and globally acceptable reporting guidance. The diversity of surveyed subjects and populations is perhaps the main confounder. Although most questionnaires contain socio-demographic questions, there are no reporting guidelines specifically tailored to comprehensively survey specialists across different academic disciplines, patients, and public representatives.

The EQUATOR Network platform currently lists some widely promoted documents with statements on conducting and reporting web-based and non-web-based surveys ( Table 1 ). 11 , 12 , 13 , 14 The oldest published recommendation covers postal, face-to-face, and telephone interviews. 1 One of its critical points highlights the need to formulate a clear and explicit question/objective to run a focused survey and to design questionnaires with respondent-friendly layout and content. 1 The Checklist for Reporting Results of Internet E-Surveys (CHERRIES) is the most-used document for reporting online surveys. 11 The CHERRIES checklist includes points on ensuring the reliability of online surveys and avoiding manipulation through multiple entries by the same users. 11 A specific set of recommendations, listed by the EQUATOR Network, is available for specialists who plan web-based and non-web-based surveys of knowledge, attitude, and practice in clinical medicine. 12 These recommendations help design valid questionnaires, survey representative subjects with clinical knowledge, and complete transparent reporting of the obtained results. 12

Table 1. Guidance documents for conducting and reporting surveys

  • Kelley et al., 2003 — Good practice in the conduct and reporting of survey research. The checklist and recommendations focus on designing questionnaires and ensuring the reliability of non-web-based surveys only. Limitation: not based on the Delphi method. EQUATOR Network listing: yes.
  • Eysenbach, 2004 — Checklist for Reporting Results of Internet E-Surveys (CHERRIES). Focuses on web-based surveys; ensures the reliability and representativeness of online responses and prevents duplicate/multiple entries by the same users. It is the top-cited e-survey checklist. Limitations: not based on an expert panel consensus (Delphi method); does not cover all parts of e-survey reports. EQUATOR Network listing: yes.
  • Burns et al., 2008 — A guide for the design and conduct of self-administered surveys of clinicians. Includes statements on designing, conducting, and reporting web- and non-web-based surveys of clinicians' knowledge, attitude, and practice. Limitation: the statements are based on a literature review, not the Delphi method. EQUATOR Network listing: yes.
  • Sharma et al., 2021 — Consensus-Based Checklist for Reporting of Survey Studies (CROSS). A checklist with 19 sections covering all parts of web- and non-web-based survey reports, based on the Delphi method with 3 survey rounds in January 2018 – December 2019 and 24 experts responding to the first round. Limitation: although 24 experts with numerous related publications were initially enrolled, 6 of them were lost to follow-up. EQUATOR Network listing: yes.
  • Gaur et al., 2020 — Reporting survey based studies: a primer for authors. Recommendations covering points on planning and reporting surveys in the COVID-19 pandemic, including various online platforms, such as social media, for distributing questionnaires and conducting surveys. Limitation: based on a comprehensive literature review, but the statements were not discussed with a panel of experts and lack Delphi consensus agreements. EQUATOR Network listing: no.

COVID-19 = coronavirus disease 2019.

From January 2018 to December 2019, three rounds of surveying experts with an interest in surveys and questionnaires allowed consensus to be reached on a set of points for reporting web-based and non-web-based surveys. 13 The resulting Consensus-Based Checklist for Reporting of Survey Studies includes a rating of 19 items of survey reports, from titles to acknowledgments. 13 Finally, rapid recommendations on online surveys amid the coronavirus disease 2019 (COVID-19) pandemic were published to guide authors on how to choose social media and other online platforms for disseminating questionnaires and targeting representative groups of respondents. 14

Adhering to a combination of these recommendations is advisable to minimize the limitations of each document and increase the transparency of survey reports. For cross-sectional analyses of large sample sizes, additionally consulting the STROBE standard of the EQUATOR Network may further improve the accuracy of reporting respondents' inclusion and exclusion criteria. In fact, there are examples of online survey reports adhering to both CHERRIES and STROBE recommendations. 15 , 16

ETHICS CONSIDERATIONS

Although health research authorities in some countries lack mandates for full ethics review of survey studies, obtaining formal review protocols or ethics waivers is advisable for most surveys involving respondents from more than one country. Following country-based regulations and the ethical norms of research is therefore mandatory. 14 , 17

Full ethics review or exemption procedures are important steps in planning and conducting ethically sound surveys. Given their non-interventional nature and the absence of immediate health risks for participants, ethics committees may approve survey protocols without a full ethics review. 18 A full ethics review is, however, required when a survey's informational and psychological harms increase the risk to participants. 18 Informational harms may result from unauthorized access to respondents' personal data and stigmatization of respondents through leaked information about social diseases. Psychological harms may include anxiety, depression, and exacerbation of underlying psychiatric diseases.

Survey questionnaires submitted for evaluation should indicate how informed consent is obtained from respondents. 13 Additionally, information about confidentiality, anonymity, questionnaire delivery modes, compensations, and mechanisms preventing unauthorized access to questionnaires should be provided. 13 , 14 Ethical considerations and validation are especially important in studies involving vulnerable and marginalized subjects with diminished autonomy and poor social status due to dementia, substance abuse, inappropriate sexual behavior, and certain infections. 18 , 19 , 20 Precautions should be taken to avoid confidentiality breaches and bot activities when surveying via insecure online platforms. 21

Monetary compensation helps attract respondents to fill out lengthy questionnaires. However, such incentives may encourage surveyees whose primary interest is the compensation to deceive the system. 22 Ethics review protocols may include points on recording online responders' IP addresses and blocking duplicate submissions from the same Internet locations. 22 IP addresses are viewed as personal information in the EU, but not in the US. Notably, IP identification may deter some potential responders in the EU. 21
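A minimal sketch of duplicate screening: each incoming submission's IP address is hashed (so raw addresses, which the EU treats as personal data, need not be stored) and repeats are rejected. The function name is illustrative; real survey platforms implement this internally.

```python
import hashlib

seen_hashes = set()

def accept_submission(ip_address):
    """Accept a response only once per (hashed) IP address (illustrative sketch)."""
    digest = hashlib.sha256(ip_address.encode()).hexdigest()
    if digest in seen_hashes:
        return False  # duplicate entry from the same location -> reject
    seen_hashes.add(digest)
    return True

print(accept_submission("203.0.113.7"))  # True  (first response accepted)
print(accept_submission("203.0.113.7"))  # False (duplicate blocked)
```

Note that hashing mitigates, but does not eliminate, the privacy concern, and that shared networks can make distinct respondents appear as duplicates.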

PATIENT KNOWLEDGE AND PERCEPTION SURVEYS

The design of patient knowledge and perception surveys is insufficiently defined and poorly explored. Although such surveys are aimed at consistently covering research questions on clinical presentation, prevention, and treatment, more emphasis is now placed on psychometric aspects of designing related questionnaires. 23 , 24 , 25 Targeting responsive patient groups to collect reliable answers is yet another challenge that can be addressed by distributing questionnaires to patients with good knowledge of their diseases, particularly those registering with university-affiliated clinics and representing patient associations. 26 , 27 , 28

The structure of questionnaires may differ for surveys of patient groups with various age-dependent health issues. Care should be taken when children are targeted since they often report a variety of modifiable conditions such as anxiety and depression, musculoskeletal problems, and pain, affecting their quality of life. 29 Likewise, gender and age differences should be considered in questionnaires addressing the quality of life in association with mental health and social status. 30 Questionnaires for older adults may benefit from including questions about social support and assistance in the context of caring for aging diseases. 31 Finally, addressing the needs of digital technologies and home-care applications may help to ensure the completeness of questionnaires for older adults with sedentary lifestyles and mobility disabilities. 32 , 33

SOCIAL MEDIA FOR QUESTIONNAIRE DISTRIBUTION

The widespread use of social media has made it easier to distribute questionnaires to a large number of potential responders. Employing popular platforms such as Twitter and Facebook has become particularly useful for conducting nationwide surveys on awareness and concerns about global health and pandemic issues. 34 , 35 When various social media platforms are simultaneously employed, participants' sociodemographic factors such as gender, age, and level of education may confound the study results. 36 Knowing targeted groups' preferred online networking and communication sites may better direct the questionnaire distribution. 37 , 38 , 39

Preliminary evidence suggests that distributing survey links via social-media accounts of individual users and organized e-groups with interest in specific health issues may increase their engagement and correctness of responses. 40 , 41

Since surveys employing social media are publicly accessible, related questionnaires should be professionally edited so that they are easy for target populations to complete, avoid sensitive and disturbing points, and ensure privacy and confidentiality. 42 , 43 Although counting e-post views is feasible, response rates of social-media-distributed questionnaires are practically impossible to record. The latter is an inherent limitation of such surveys.

SURVEY SAMPLING

Establishing connections with target populations and diversifying questionnaire dissemination may increase the rigor of current surveys, which are now abundantly administered. 44 Sample sizes depend on various factors, including the chosen topic, aim, and sampling strategy (random or non-random). 12 Some topics, such as COVID-19 and global health, may easily attract the attention of large respondent groups motivated to answer a variety of questionnaire questions. At the beginning of the pandemic, most surveys employed non-random (non-probability) sampling strategies, which resulted in analyses of numerous responses without response rate calculations. These qualitative research studies were mainly aimed at analyzing the opinions of specialists and patients exposed to COVID-19 in order to develop rapid guidelines and initiate clinical trials.

Outside the pandemic, and beyond hot topics, there is a growing trend of low response rates and inadequate representation of target populations. 45 Such a trend makes it difficult to design and conduct random (probability) surveys. Consequently, the hypotheses of current online surveys often omit points on randomization and sample size calculation, ending up as qualitative analyses and pilot studies. In fact, convenience (non-random or non-probability) sampling can be particularly suitable for previously unexplored and emerging topics, when overviewing the literature cannot help estimate optimal samples and entirely new questionnaires must be designed and tested. Convenience sampling limits the generalizability of the conclusions, however, since the representativeness of the sample is uncertain. 45

Researchers often employ 'snowball' sampling techniques, with initial surveyees forwarding the questionnaires to other interested respondents, thereby maximizing the sample size. Another common technique for obtaining more responses relies on generating regular social media reminders and resending e-mails to interested individuals and groups. Such tactics can increase the study duration but cannot exclude participation bias and nonresponse.

Purposive or targeted sampling is perhaps the most precise technique when the size of the target population is known and respondents are ready to fill out the questionnaires correctly, allowing an exact estimate of the response rate, close to 100%. 46
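With a purposive sample the denominator is known in advance, so the response rate is straightforward to compute; a trivial sketch with invented numbers:

```python
def response_rate(responses_received, questionnaires_sent):
    """Response rate as a percentage of the known target population."""
    return 100 * responses_received / questionnaires_sent

# e.g., 92 completed questionnaires returned by 100 targeted specialists
print(f"{response_rate(92, 100):.1f}%")  # 92.0%
```

With convenience or social-media sampling, by contrast, the denominator is unknowable, which is why such studies cannot report a meaningful response rate.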

DESIGNING QUESTIONNAIRES

Correctness, confidentiality, privacy, and anonymity are critical points of inquiry in questionnaires. 47 Correctly worded and convincingly presented survey invitations with consenting options and reassurances of secure data processing may increase response rates and ensure the validity of responses. 47 Online surveys are believed to be more advantageous than offline inquiries for ensuring anonymity and privacy, particularly for targeting socially marginalized and stigmatized subjects. Online study design is indeed optimal for collecting more responses in surveys of sex- and gender-related and otherwise sensitive topics.

Performing comprehensive literature reviews, consultations with subject experts, and Delphi exercises may all help to specify survey objectives, identify questionnaire domains, and formulate pertinent questions. Literature searches are required for in-depth topic coverage and identification of previously published relevant surveys. By analyzing previous questionnaire characteristics, modifications can be made to designing new self-administered surveys. The justification of new studies should correctly acknowledge similar published reports to avoid redundancies.

The initial part of a questionnaire usually includes a short introduction/preamble/cover letter that specifies the objectives, target respondents, potential benefits and risks, and moderators' contact details for further inquiries. This part may motivate potential respondents to consent and answer questions. The specifics, volume, and format of the other parts depend on revisions made in response to pretesting and pilot testing. 48 Pretesting usually involves co-authors and other contributors, such as colleagues with an interest in the subject, while pilot testing usually involves 5-10 target respondents who are well familiar with the subject and can swiftly complete the questionnaires. The guidance obtained at the pretesting and pilot testing stages allows editing, shortening, or expanding questionnaire sections. Although guidance on questionnaire length and question numbers is scarce, some experts empirically consider 5 domains with 5 questions each as optimal. 12 Lengthy questionnaires may be biased due to respondents' fatigue and inability to answer numerous and complicated questions. 46

Questionnaire revisions aim to ensure the validity and consistency of the questions, that is, their appeal to relevant responders and accurate coverage of all essential points. 45 Valid questionnaires enable reliable and reproducible survey studies that yield the same responses to variably worded and located questions. 45

Various combinations of open-ended and close-ended questions are advisable to comprehensively cover all pertinent points and enable easy and quick completion of questionnaires. Open-ended questions are usually included in small numbers since they require more time to answer. 46 Also, the interpretation and analysis of responses to open-ended questions hardly contribute to generating robust qualitative data. 49 Close-ended questions with single- and multiple-choice answers constitute the main part of a questionnaire, with single answers being easier to analyze and report. Questions with single answers can be presented as Likert-type items with 3 or more points (e.g., yes/no/do not know).

Avoiding too simplistic (yes/no) questions and replacing them with Likert-scale items may increase the robustness of questionnaire analyses. 50 Additionally, constructing easily understandable questions, excluding merged items with two or more points, and moving sophisticated questions to the beginning of a questionnaire may add to the quality and feasibility of the study. 50

Survey studies are increasingly conducted by health professionals to swiftly explore opinions on a wide range of topics by diverse groups of specialists, patients, and public representatives. Arguably, quality surveys with generalizable results can be instrumental for guiding health practitioners in times of crises such as the COVID-19 pandemic when clinical trials, systematic reviews, and other evidence-based reports are scarcely available or absent. Online surveys can be particularly valuable for collecting and analyzing specialist, patient, and other subjects' responses in non-mainstream science countries where top evidence-based studies are scarce commodities and research funding is limited. Accumulated expertise in drafting quality questionnaires and conducting robust surveys is valuable for producing new data and generating new hypotheses and research questions.

The main advantages of surveys are related to the ease of conducting such studies with limited or no research funding. Advances in digitization and social media have further contributed to the ease of surveying and to the growing global interest in surveys among health professionals. Some of the disadvantages of current surveys are perhaps those related to imperfections of the digital platforms used for disseminating questionnaires and analyzing responses.

Although some survey reporting standards and recommendations are available, none of these comprehensively cover all items of questionnaires and steps in surveying. None of the survey reporting standards is based on summarizing guidance of a large number of contributors involved in related research projects. As such, presenting the current guidance with a list of items for survey reports ( Table 2 ) may help better design and publish related articles.

1. Title
  • Reflect the survey subject, target respondents (e.g., patients, specialists, public representatives), obtained results, and study design (online, non-web-based, cross-sectional, longitudinal).
2. Abstract
  • Provide a structured abstract with an introduction, aims, results, and conclusion.
3. Keywords
  • Add the term "surveys and questionnaires" along with subject keywords to increase retrieval of the survey report.
4. Introduction
  • Analyze available evidence, relevant reviews, and surveys to justify the need for the current study and its questionnaire sections.
5. Aim
  • Present specific and innovative aims.
6. Methods
  • Highlight the study design (e.g., web-based, non-web-based, cross-sectional, longitudinal).
  • Specify the survey dates and characterize the time period (data collection during a crisis [pandemic, wartime] or certain global movements, campaigns, or interventions).
  • Describe the surveyed respondents' characteristics.
  • Characterize the questionnaire domains and the number of questions in each domain.
  • Provide details of preserving confidentiality and anonymity.
  • Describe pretesting and pilot testing (experts and respondents involved), the number of revision rounds, and the average time for filling out the questionnaire.
  • Report content and face validity (quality, completeness, and feasibility of the questionnaire and its appeal to relevant respondents).
  • Add details of the survey platform employed for web-based surveys (e.g., SurveyMonkey, Google Forms).
  • Report modes of questionnaire distribution (e.g., via social media channels, emails, face-to-face interviews, or postal mail).
  • Clarify when and how many times survey reminders were circulated.
7. Adherence to research reporting standards
  • Refer to the recommendations, or combinations of recommendations, consulted for reporting.
8. Ethics section
  • Provide the ethics committee approval/waiver date, protocol number, and name of the ethics committee.
  • Refer to documents of national health research authorities that regulate the ethics review waiver/exemption.
  • Justify the ethics review exemption in view of the survey's non-interventional nature and the absence of informational and psychological risks/harms.
  • Provide details of monetary or other incentives, written informed consent, confidentiality and anonymity, and mechanisms to avoid multiple entries by the same respondents.
9. Statistical analyses
  • Report descriptive statistics, how categorical data were compared (chi-square or Fisher's exact tests), whether parametric and non-parametric tests and regression analyses were employed, the level of significance, and the statistical package used.
10. Results
  • Report response rates in absolute numbers and percentages if the target population was established by methods other than convenience sampling.
  • Reflect on missing data.
  • Provide respondents' details to characterize their representativeness and to exclude or minimize nonresponse influence.
  • Insert eye-catching color graphs and informative tables pointing to the most remarkable results, without recapitulating the same data in the text.
11. Discussion
  • Clarify what is new.
  • Analyze limitations by reflecting on a low response rate, small sample size, nonresponse, missing data, a long timeline of collecting responses, a questionnaire language other than English, and the generalizability of the survey results.
12. Author contributions and acknowledgements
  • Identify the authors who drafted the questionnaire and the survey report.
  • List non-author/technical contributions to questionnaire dissemination, promotion, and data collection.
13. Disclosure of interests
  • Disclose potential conflicts which may affect the validity and reliability of the survey.
14. Funding
  • Report funding sources, provision of software, and open-access funding, if available.
15. Open data sharing
  • Add a note about the availability of data for post-publication analyses.
16. Appendix
  • Submit an English version of the questionnaire.

Disclosure: The authors have no potential conflicts of interest to disclose.

Author Contributions:

  • Conceptualization: Zimba O.
  • Formal analysis: Zimba O, Gasparyan AY.
  • Writing - original draft: Zimba O.
  • Writing - review & editing: Zimba O, Gasparyan AY.

SSRIC

Chapter 3 -- Survey Research Design and Quantitative Methods of Analysis for Cross-sectional Data

Almost everyone has had experience with surveys. Market surveys ask respondents whether they recognize products and how they feel about them. Political polls ask questions about candidates for political office or about opinions on political and social issues. Needs assessments use surveys to identify the needs of groups. Evaluations often use surveys to assess the extent to which programs achieve their goals.

Survey research is a method of collecting information by asking questions. Sometimes interviews are done face-to-face with people at home, in school, or at work. Other times questions are sent in the mail for people to answer and mail back. Increasingly, surveys are conducted by telephone.

SAMPLE SURVEYS

Although we want to have information on all people, it is usually too expensive and time consuming to question everyone. So we select only some of these individuals and question them. It is important to select these people in ways that make it likely that they represent the larger group.

The population is all the individuals in whom we are interested. (A population does not always consist of individuals. Sometimes it may be geographical areas, such as all cities with populations of 100,000 or more. Or we may be interested in all households in a particular area. In the data used in the exercises of this module, the population consists of individuals who are California residents.) A sample is the subset of the population involved in a study; in other words, a sample is part of the population. The process of selecting the sample is called sampling. The idea of sampling is to select part of the population to represent the entire population.

The United States Census is a good example of sampling. The census tries to enumerate all residents every ten years with a short questionnaire. Approximately every fifth household is given a longer questionnaire. Information from this sample (i.e., every fifth household) is used to make inferences about the population.
Political polls also use samples. To find out how potential voters feel about a particular race, pollsters select a sample of potential voters. This module uses opinions from three samples of California residents age 18 and over. The data were collected during July 1985, September 1991, and February 1995 by the Field Research Corporation (The Field Institute 1985, 1991, 1995). The Field Research Corporation is a widely respected survey research firm used extensively by the media, politicians, and academic researchers.

Since a survey can be no better than the quality of the sample, it is essential to understand the basic principles of sampling. There are two types of sampling -- probability and nonprobability. A probability sample is one in which each individual in the population has a known, nonzero chance of being selected in the sample. The most basic type is the simple random sample. In a simple random sample, every individual (and every combination of individuals) has the same chance of being selected in the sample. This is the equivalent of writing each person's name on a piece of paper, putting the papers in plastic balls, putting all the balls in a big bowl, mixing the balls thoroughly, and selecting some predetermined number of balls from the bowl.

The simple random sample assumes that we can list all the individuals in the population, but often this is impossible. If our population were all the households or residents of California, there would be no list of the households or residents available, and it would be very expensive and time consuming to construct one. In this type of situation, a multistage cluster sample would be used. The idea is very simple. If we wanted to draw a sample of all residents of California, we might start by dividing California into large geographical areas, such as counties, and selecting a sample of these counties.
Our sample of counties could then be divided into smaller geographical areas, such as blocks, and a sample of blocks would be selected. We could then construct a list of all households for only those blocks in the sample. Finally, we would go to these households and randomly select one member of each household for our sample. Once the household and the member of that household have been selected, substitution would not be allowed. This often means that we must call back several times, but this is the price we must pay for a good sample.

The Field Poll used in this module is a telephone survey. It is a probability sample using a technique called random-digit dialing. With random-digit dialing, phone numbers are dialed randomly within working exchanges (i.e., the first three digits of the telephone number). Numbers are selected in such a way that all areas have the proper proportional chance of being selected in the sample. Random-digit dialing makes it possible to include numbers that are not listed in the telephone directory and households that have moved into an area so recently that they are not included in the current directory.

A nonprobability sample is one in which each individual in the population does not have a known chance of selection. There are several types of nonprobability samples. For example, magazines often include questionnaires for readers to fill out and return. This is a volunteer sample, since respondents select themselves into the sample (i.e., they volunteer to be in it). Another type of nonprobability sample is a quota sample. Survey researchers may assign quotas to interviewers. For example, interviewers might be told that half of their respondents must be female and the other half male. This is a quota on sex. We could also have quotas on several variables (e.g., sex and race) simultaneously. Probability samples are preferable to nonprobability samples.
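As an aside not in the original module, the ball-and-bowl procedure for a simple random sample maps directly onto a few lines of Python: random.sample draws without replacement from a listed population, which is exactly what the bowl of plastic balls accomplishes.

```python
import random

# Hypothetical sampling frame: a numbered roster standing in for a
# listed population (in practice, names or household IDs).
population = list(range(1, 951))  # 950 individuals

random.seed(42)  # fixed seed so the draw is reproducible

# random.sample draws without replacement, so every individual (and
# every combination of individuals) has the same chance of selection
# -- the defining property of a simple random sample.
sample = random.sample(population, k=100)

print(len(sample))       # 100 selected
print(len(set(sample)))  # 100 -- no one selected twice
```

A multistage cluster sample would repeat this kind of draw at each stage: first counties, then blocks within the sampled counties, then one member per sampled household.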
First, they avoid the dangers of what survey researchers call "systematic selection biases," which are inherent in nonprobability samples. For example, in a volunteer sample, particular types of persons might be more likely to volunteer. Perhaps highly educated individuals are more likely to volunteer to be in the sample, and this would produce a systematic selection bias in favor of the highly educated. In a probability sample, the selection of the actual cases in the sample is left to chance. Second, in a probability sample we are able to estimate the amount of sampling error (our next concept to discuss).

We would like our sample to give us a perfectly accurate picture of the population, but this is unrealistic. Assume that the population is all employees of a large corporation, and we want to estimate the percent of employees in the population who are satisfied with their jobs. We select a simple random sample of 500 employees and ask the individuals in the sample how satisfied they are with their jobs. We discover that 75 percent of the employees in our sample are satisfied. Can we assume that 75 percent of the population is satisfied? That would be asking too much. Why would we expect one sample of 500 to give us a perfect representation of the population? We could take several different samples of 500 employees, and the percent satisfied would vary from sample to sample. There will be a certain amount of error as a result of selecting a sample from the population. We refer to this as sampling error. Sampling error can be estimated in a probability sample, but not in a nonprobability sample.

It would be wrong to assume that the only reason our sample estimate is different from the true population value is sampling error. There are many other sources of error, called nonsampling error.
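For the job-satisfaction example, the amount of sampling error can be estimated with the standard formula for the standard error of a proportion. This sketch is not part of the module itself; the 1.96 multiplier is the conventional choice for a 95 percent interval.

```python
import math

# Job-satisfaction example from the text: a simple random sample of
# n = 500 employees, of whom 75 percent report being satisfied.
n = 500
p = 0.75

# Estimated standard error of a sample proportion: sqrt(p(1 - p)/n).
se = math.sqrt(p * (1 - p) / n)

# Conventional 95 percent margin of error (1.96 standard errors).
moe = 1.96 * se

print(f"standard error: {se:.4f}")                             # 0.0194
print(f"margin of error: +/-{moe * 100:.1f} percentage points")  # +/-3.8
```

So the sample estimate of 75 percent satisfied carries roughly a plus-or-minus 4 point band; a different sample of 500 would be expected to land inside it most of the time.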
Nonsampling error includes such things as the effects of biased questions, the tendency of respondents to systematically underestimate such things as age, the exclusion of certain types of people from the sample (e.g., those without phones, those without permanent addresses), or the tendency of some respondents to systematically agree with statements regardless of their content. In some studies, the amount of nonsampling error might be far greater than the amount of sampling error. Notice that sampling error is random in nature, while nonsampling error may be nonrandom, producing systematic biases. We can estimate the amount of sampling error (assuming probability sampling), but it is much more difficult to estimate nonsampling error. We can never eliminate sampling error entirely, and it is unrealistic to expect that we could ever eliminate nonsampling error. It is good research practice to be diligent in seeking out sources of nonsampling error and trying to minimize them.

DATA ANALYSIS

Examining Variables One at a Time (Univariate Analysis)

The rest of this chapter deals with the analysis of survey data. Data analysis involves looking at variables, or "things" that vary or change. A variable is a characteristic of the individual (assuming we are studying individuals). The answer to each question on the survey forms a variable. For example, sex is a variable: some individuals in the sample are male and some are female. Age is a variable; individuals vary in their ages.

Looking at variables one at a time is called univariate analysis. This is the usual starting point in analyzing survey data. There are several reasons to look at variables one at a time. First, we want to describe the data. How many of our sample are men and how many are women? How many are black and how many are white? What is the distribution by age? How many say they are going to vote for Candidate A and how many for Candidate B?
How many respondents agree and how many disagree with a statement describing a particular opinion?

Another reason to look at variables one at a time involves recoding. Recoding is the process of combining categories within a variable. Consider age, for example. In the data set used in this module, age varies from 18 to 89, but we would want to use fewer categories in our analysis, so we might combine ages into 18 to 29, 30 to 49, and 50 and over. We might want to combine African Americans with the other races to classify race into only two categories -- white and nonwhite. Recoding is used to reduce the number of categories in a variable (e.g., age) or to combine categories so that particular types of comparisons can be made (e.g., white versus nonwhite).

The frequency distribution is one of the basic tools for looking at variables one at a time. A frequency distribution is the set of categories and the number of cases in each category. Percent distributions show the percentage in each category. Table 3.1 shows frequency and percent distributions for two hypothetical variables -- one for sex and one for willingness to vote for a woman candidate.

Begin by looking at the frequency distribution for sex. There are three columns in this table. The first column specifies the categories -- male and female. The second column tells us how many cases there are in each category, and the third column converts these frequencies into percents.

Table 3.1 -- Frequency and Percent Distributions for Sex and Willingness to Vote for a Woman Candidate (Hypothetical Data)

  Sex
  Category   Freq.   Percent
  Male         380      40.0
  Female       570      60.0
  Total        950     100.0

  Voting Preference
  Category                           Freq.   Percent   Valid Percent
  Willing to Vote for a Woman          460      48.4      51.1
  Not Willing to Vote for a Woman      440      46.3      48.9
  Refused                               50       5.3      Missing
  Total                                950     100.0     100.0

In this hypothetical example, there are 380 males and 570 females, or 40 percent male and 60 percent female.
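The percent and valid percent columns of Table 3.1 can be reproduced with a short sketch (illustrative only, using the hypothetical counts above): refused cases are dropped from the base before valid percents are computed.

```python
from collections import Counter

# Hypothetical responses matching Table 3.1: 460 willing, 440 not
# willing, 50 refused (treated as missing).
responses = ["willing"] * 460 + ["not willing"] * 440 + ["refused"] * 50

freq = Counter(responses)
total = sum(freq.values())       # 950 cases
valid = total - freq["refused"]  # 900 -- the base for valid percents

for category, count in freq.items():
    line = f"{category}: {count}  {100 * count / total:.1f}%"
    if category != "refused":
        line += f"  (valid: {100 * count / valid:.1f}%)"
    print(line)
```

The printed percents match the table: 48.4 percent willing overall, but 51.1 percent of those who actually answered.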
There are a total of 950 cases. Since we know the sex for each case, there are no missing data (i.e., no cases where we do not know the proper category).

Look at the frequency distribution for voting preference in Table 3.1. How many say they are willing to vote for a woman candidate and how many are unwilling? (Answer: 460 willing and 440 not willing.) How many refused to answer the question? (Answer: 50.) What percent say they are willing to vote for a woman, what percent are not, and what percent refused to answer? (Answer: 48.4 percent willing, 46.3 percent not willing, and 5.3 percent refused to tell us.)

The 50 respondents who didn't want to answer the question are called missing data because we don't know into which category to place them, so we create a new category (i.e., refused) for them. Since we don't know where they should go, we might want a percentage distribution considering only the 900 respondents who answered the question. We can determine this easily by taking the 50 cases with missing information out of the base (i.e., the denominator of the fraction) and recomputing the percentages. The fourth column in the frequency distribution (labeled "valid percent") gives us this information. Approximately 51 percent of those who answered the question were willing to vote for a woman and approximately 49 percent were not.

With these data we will use frequency distributions to describe variables one at a time. There are other ways to describe single variables. The mean, median, and mode are averages that may be used to describe the central tendency of a distribution. The range and standard deviation are measures of the amount of variability or dispersion of a distribution. (We will not be using measures of central tendency or variability in this module.)

Exploring the Relationship Between Two Variables (Bivariate Analysis)

Usually we want to do more than simply describe variables one at a time.
We may want to analyze the relationship between variables. Morris Rosenberg (1968:2) suggests that there are three types of relationships: "(1) neither variable may influence one another .... (2) both variables may influence one another ... (3) one of the variables may influence the other." We will focus on the third of these types, which Rosenberg calls "asymmetrical relationships." In this type of relationship, one of the variables (the independent variable) is assumed to be the cause, and the other variable (the dependent variable) is assumed to be the effect. In other words, the independent variable is the factor that influences the dependent variable.

For example, researchers think that smoking causes lung cancer. The statement that specifies the relationship between two variables is called a hypothesis (see Hoover 1992 for a more extended discussion of hypotheses). In this hypothesis, the independent variable is smoking (or, more precisely, the amount one smokes) and the dependent variable is lung cancer. Consider another example. Political analysts think that income influences voting decisions -- that rich people vote differently from poor people. In this hypothesis, income would be the independent variable and voting would be the dependent variable.

In order to demonstrate that a causal relationship exists between two variables, we must meet three criteria: (1) there must be a statistical relationship between the two variables, (2) we must be able to demonstrate which of the variables influences the other, and (3) we must be able to show that there is no other alternative explanation for the relationship. As you can imagine, it is impossible to show that there is no alternative explanation for a relationship. For this reason, we can show that one variable does not influence another, but we cannot prove that it does. We can only show that it is more plausible or credible to believe that a causal relationship exists.
In this section, we will focus on the first two criteria and leave the third to the next section. In the previous section we looked at the frequency distributions for sex and voting preference. All we can say from those two distributions is that the sample is 40 percent men and 60 percent women, and that slightly more than half of the respondents said they would be willing to vote for a woman while slightly less than half are not willing to. We cannot say anything about the relationship between sex and voting preference. In order to determine whether men or women are more likely to be willing to vote for a woman candidate, we must move from univariate to bivariate analysis.

A crosstabulation (or contingency table) is the basic tool used to explore the relationship between two variables. Table 3.2 is the crosstabulation of sex and voting preference. In the lower right-hand corner is the total number of cases in this table (900). Notice that this is not the number of cases in the sample. There were originally 950 cases, but any case with missing information on either or both of the two variables has been excluded from the table. Be sure to check how many cases have been excluded from your table, to indicate this figure in your report, and to understand why these cases were excluded.

The figures in the lower margin and right-hand margin of the table are called the marginal distributions. They are simply the frequency distributions for the two variables in the whole table. Here, there are 360 males and 540 females (the marginal distribution for the column variable, sex) and 460 people who are willing to vote for a woman candidate and 440 who are not (the marginal distribution for the row variable, voting preference). The other figures in the table are the cell frequencies. Since there are two columns and two rows in this table (sometimes called a 2 x 2 table), there are four cells.
The numbers in these cells tell us how many cases fall into each combination of categories of the two variables. This sounds complicated, but it isn't. For example, 158 males are willing to vote for a woman and 302 females are willing to vote for a woman.

Table 3.2 -- Crosstabulation of Sex and Voting Preference (Frequencies)

  Voting Preference                   Male   Female   Total
  Willing to Vote for a Woman          158      302     460
  Not Willing to Vote for a Woman      202      238     440
  Total                                360      540     900

We could make comparisons rather easily if we had an equal number of women and men. Since these numbers are not equal, we must use percentages to help us make the comparisons. Since percentages convert everything to a common base of 100, the percent distribution shows us what the table would look like if there were an equal number of men and women.

Before we percentage Table 3.2, we must decide which of these two variables is the independent variable and which is the dependent variable. Remember that the independent variable is the variable we think might be the influencing factor: it is hypothesized to be the cause, and the dependent variable is the effect. Another way to express this is to say that the dependent variable is the one we want to explain. Since we think that sex influences willingness to vote for a woman candidate, sex is the independent variable.

Once we have decided which is the independent variable, we are ready to percentage the table. Notice that percentages can be computed in different ways. In Table 3.3, the percentages have been computed so that they sum down to 100; these are called column percents. If they sum across to 100, they are called row percents. If the independent variable is the column variable, then we want the percents to sum down to 100 (i.e., we want the column percents). If the independent variable is the row variable, we want the percents to sum across to 100 (i.e., we want the row percents).
This is a simple but very important rule to remember. We'll call this our rule for computing percents. Although we often see the independent variable as the column variable, so the table sums down to 100 percent, it really doesn't matter whether the independent variable is the column or the row variable. In this module, we will put the independent variable as the column variable. Many others (but not everyone) use this convention. It would be helpful if you did this when you write your report.

Table 3.3 -- Voting Preference by Sex (Percents)

  Voting Preference                   Male    Female   Total
  Willing to Vote for a Woman          43.9     55.9    51.1
  Not Willing to Vote for a Woman      56.1     44.1    48.9
  Total Percent                       100.0    100.0   100.0
  (Total Frequency)                   (360)    (540)   (900)

Now we are ready to interpret this table. Interpreting a table means explaining what the table says about the relationship between the two variables. First, we look at each category of the independent variable separately to describe the data, and then we compare the categories to each other. Since the percents sum down to 100 percent, we describe down and compare across. The rule for interpreting percents is to compare in the direction opposite to the way the percents sum to 100: if the percents sum down to 100, compare across; if they sum across to 100, compare down. If the independent variable is the column variable, the percents will always sum down to 100.

In Table 3.3, row one shows the percent of males and the percent of females who are willing to vote for a woman candidate: 43.9 percent of males are willing to vote for a woman, while 55.9 percent of females are. This is a difference of 12 percentage points. Somewhat more females than males are willing to vote for a woman.
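The column percents in Table 3.3 follow mechanically from the cell frequencies in Table 3.2. A minimal sketch (illustrative only, not part of the module) applying the rule for computing percents:

```python
# Cell frequencies from Table 3.2; sex is the column (independent) variable.
table = {
    "Male":   {"Willing": 158, "Not willing": 202},
    "Female": {"Willing": 302, "Not willing": 238},
}

# Rule for computing percents: the independent variable is the column
# variable, so each column is percentaged to sum down to 100.
for sex, cells in table.items():
    col_total = sum(cells.values())  # 360 males, 540 females
    for pref, count in cells.items():
        print(f"{pref} ({sex}): {100 * count / col_total:.1f}%")
```

Dividing by the column totals rather than the grand total is what lets us "describe down and compare across" between men and women.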
The second row shows the percent of males and females who are not willing to vote for a woman. Since there are only two rows, the second row is the complement (or the reverse) of the first row. It shows that males are somewhat more likely to be unwilling to vote for a woman candidate (a difference of 12 percentage points in the opposite direction).

When we observe a difference, we must also decide whether it is significant. There are two different meanings of significance -- statistical significance and substantive significance. Statistical significance considers whether the difference is great enough that it is probably not due to chance factors. Substantive significance considers whether a difference is large enough to be important. With a very large sample, a very small difference is often statistically significant, but that difference may be so small that we decide it isn't substantively significant (i.e., it's so small that we decide it doesn't mean very much). We're going to focus on statistical significance, but remember that even if a difference is statistically significant, you must also decide whether it is substantively significant.

Let's discuss this idea of statistical significance. If our population is all men and women of voting age in California, we want to know if there is a relationship between sex and voting preference in that population. All we have is information about a sample from the population. We use the sample information to make an inference about the population; this is called statistical inference. We know that our sample is not a perfect representation of our population because of sampling error. Therefore, we would not expect the relationship we see in our sample to be exactly the same as the relationship in the population.

Suppose we want to know whether there is a relationship between sex and voting preference in the population.
It is impossible to prove this directly, so we have to demonstrate it indirectly. We set up a hypothesis (called the null hypothesis ) that says that sex and voting preference are not related to each other in the population. This basically says that any difference we see is likely to be the result of random variation. If the difference is large enough that it is not likely to be due to chance, we can reject this null hypothesis of only random differences. Then the hypothesis that they are related (called the alternative or research hypothesis ) will be more credible.
Table 3.4 -- Computing the Chi Square Statistic

  f_o     f_e     (f_o - f_e)   (f_o - f_e)^2   (f_o - f_e)^2 / f_e
  158     184         -26            676                3.67
  302     276          26            676                2.45
  202     176          26            676                3.84
  238     264         -26            676                2.56
                                            chi square = 12.52

chi square = the sum over all cells of (f_o - f_e)^2 / f_e
In the first column of Table 3.4, we have listed the four cell frequencies from the crosstabulation of sex and voting preference. We'll call these the observed frequencies (f_o) because they are what we observe from our table. In the second column, we have listed the frequencies we would expect if, in fact, there were no relationship between sex and voting preference in the population. These are called the expected frequencies (f_e).

We'll briefly explain how these expected frequencies are obtained. Notice from Table 3.1 that 51.1 percent of the sample were willing to vote for a woman candidate, while 48.9 percent were not. If sex and voting preference are independent (i.e., not related), we should find the same percentages for males and females. In other words, 48.9 percent of the males (or 176) and 48.9 percent of the females (or 264) would be unwilling to vote for a woman candidate. (This explanation is adapted from Norusis 1997.)

Now we want to compare these two sets of frequencies to see whether the observed frequencies are really like the expected frequencies. We subtract the expected from the observed frequencies (column three). We are interested in the sum of these differences for all cells in the table, but since they always sum to zero, we square the differences (column four) to get positive numbers. Finally, we divide this squared difference by the expected frequency (column five). (Don't worry about why we do this. The reasons are technical and don't add to your understanding.)

The sum of column five (12.52) is called the chi square statistic. If the observed and the expected frequencies are identical (no difference), chi square will be zero. The greater the difference between the observed and expected frequencies, the larger the chi square. If we get a large chi square, we are willing to reject the null hypothesis. How large does the chi square have to be?
We reject the null hypothesis of no relationship between the two variables when the probability of getting a chi square this large or larger by chance is so small that the null hypothesis is very unlikely to be true -- that is, when a chi square this large would rarely occur by chance (usually less than once in a hundred or less than five times in a hundred). In this example, the probability of getting a chi square as large as 12.52 or larger by chance is less than one in a thousand. This is so unlikely that we reject the null hypothesis and conclude that the alternative hypothesis (i.e., there is a relationship between sex and voting preference) is credible (not that it is necessarily true, but that it is credible). There is always a small chance that the null hypothesis is true even when we decide to reject it. In other words, we can never be sure that it is false; we can only conclude that there is little chance that it is true.

Just because we have concluded that there is a relationship between sex and voting preference does not mean that it is a strong relationship. It might be a moderate or even a weak relationship. There are many statistics that measure the strength of the relationship between two variables. Chi square is not a measure of strength; it just helps us decide whether there is a basis for saying a relationship exists, regardless of its strength. Measures of association estimate the strength of the relationship and are often used with chi square. (See Appendix D for a discussion of how to compute the two measures of association discussed below.)

Cramer's V is a measure of association appropriate when one or both of the variables consists of unordered categories. For example, race (white, African American, other) or religion (Protestant, Catholic, Jewish, other, none) are variables with unordered categories. Cramer's V is based on chi square and ranges from zero to one.
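The whole chi square computation in Table 3.4, and Cramer's V derived from it, can be sketched from the cell frequencies of Table 3.2 (illustrative code, not the module's own workflow):

```python
import math

# Observed cell frequencies from Table 3.2: rows are voting preference
# (willing, not willing), columns are sex (male, female).
observed = [[158, 302],
            [202, 238]]

row_totals = [sum(row) for row in observed]        # [460, 440]
col_totals = [sum(col) for col in zip(*observed)]  # [360, 540]
n = sum(row_totals)                                # 900

# Under the null hypothesis of independence, the expected frequency
# for each cell is (row total * column total) / n.
chi_square = 0.0
for i, row in enumerate(observed):
    for j, f_o in enumerate(row):
        f_e = row_totals[i] * col_totals[j] / n
        chi_square += (f_o - f_e) ** 2 / f_e

print(round(chi_square, 2))  # 12.52, matching Table 3.4

# Cramer's V is sqrt(chi2 / (n * (min(rows, cols) - 1))); for a
# 2 x 2 table this reduces to sqrt(chi2 / n).
v = math.sqrt(chi_square / n)
print(round(v, 3))  # 0.118
```

A V of about 0.118 suggests that, although the chi square is statistically significant, the relationship between sex and voting preference is a fairly weak one.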
The closer to zero, the weaker the relationship; the closer to one, the stronger the relationship. Gamma (sometimes referred to as Goodman and Kruskal's Gamma) is a measure of association appropriate when both of the variables consist of ordered categories. For example, if respondents answer that they strongly agree, agree, disagree, or strongly disagree with a statement, their responses are ordered. Similarly, if we group age into categories such as under 30, 30 to 49, and 50 and over, these categories would be ordered. Ordered categories can logically be arranged in only two ways-low to high or high to low. Gamma ranges from zero to one, but can be positive or negative. For this module, the sign of Gamma would have no meaning, so ignore the sign and focus on the numerical value. Like V, the closer to zero, the weaker the relationship and the closer to one, the stronger the relationship. Choosing whether to use Cramer's V or Gamma depends on whether the categories of the variable are ordered or unordered. However, dichotomies (variables consisting of only two categories) may be treated as if they are ordered even if they are not. For example, sex is a dichotomy consisting of the categories male and female. There are only two possible ways to order sex-male, female and female, male. Or, race may be classified into two categories-white and nonwhite. We can treat dichotomies as if they consisted of ordered categories because they can be ordered in only two ways. In other words, when one of the variables is a dichotomy, treat this variable as if it were ordinal and use gamma. This is important when choosing an appropriate measure of association. In this chapter we have described how surveys are done and how we analyze the relationship between two variables. In the next chapter we will explore how to introduce additional variables into the analysis.   REFERENCES AND SUGGESTED READING Methods of Social Research Riley, Matilda White. 1963. 
Sociological Research I: A Case Approach . New York: Harcourt, Brace and World. Hoover, Kenneth R. 1992. The Elements of Social Scientific Thinking (5 th Ed.). New York: St. Martin's. Interviewing Gorden, Raymond L. 1987. Interviewing: Strategy, Techniques and Tactics . Chicago: Dorsey. Survey Research and Sampling Babbie, Earl R. 1990. Survey Research Methods (2 nd Ed.). Belmont, CA: Wadsworth. Babbie, Earl R. 1997. The Practice of Social Research (8 th Ed). Belmont, CA: Wadsworth. Statistical Analysis Knoke, David, and George W. Bohrnstedt. 1991. Basic Social Statistics . Itesche, IL: Peacock. Riley, Matilda White. 1963. Sociological Research II Exercises and Manual . New York: Harcourt, Brace & World. Norusis, Marija J. 1997. SPSS 7.5 Guide to Data Analysis . Upper Saddle River, New Jersey: Prentice Hall. Data Sources The Field Institute. 1985. California Field Poll Study, July, 1985 . Machine-readable codebook. The Field Institute. 1991. California Field Poll Study, September, 1991 . Machine-readable codebook. The Field Institute. 1995. California Field Poll Study, February, 1995 . Machine-readable codebook.
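The statistics discussed above are straightforward to compute in a few lines of code. The sketch below uses hypothetical counts (not the actual data behind the chapter's 12.52 example) for a two-by-two table of sex by voting preference, and computes chi square, Cramer's V, and Gamma in plain Python:

```python
import math

def chi_square(table):
    """Pearson chi square for an r x c contingency table (list of rows)."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / n
            chi2 += (observed - expected) ** 2 / expected
    return chi2

def cramers_v(table):
    """Cramer's V = sqrt(chi2 / (n * (min(r, c) - 1))); ranges from 0 to 1."""
    n = sum(sum(row) for row in table)
    k = min(len(table), len(table[0]))
    return math.sqrt(chi_square(table) / (n * (k - 1)))

def gamma(table):
    """Goodman and Kruskal's Gamma = (C - D) / (C + D), where C and D count
    concordant and discordant pairs. Both variables must be ordered
    (or dichotomies, which may be treated as ordered)."""
    concordant = discordant = 0
    rows, cols = len(table), len(table[0])
    for i in range(rows):
        for j in range(cols):
            for k in range(i + 1, rows):       # a later (higher) row
                for m in range(cols):
                    if m > j:                   # same direction: concordant
                        concordant += table[i][j] * table[k][m]
                    elif m < j:                 # opposite direction: discordant
                        discordant += table[i][j] * table[k][m]
    return (concordant - discordant) / (concordant + discordant)

# Hypothetical counts: rows are sex (male, female),
# columns are voting preference (candidate A, candidate B).
table = [[60, 40],
         [35, 65]]
print(round(chi_square(table), 2))   # 12.53
print(round(cramers_v(table), 3))    # 0.25
print(round(gamma(table), 3))        # 0.472
```

If SciPy is available, `scipy.stats.chi2_contingency` computes the same statistic, though note that by default it applies a continuity correction to 2x2 tables, which yields a slightly smaller value than the uncorrected formula above.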


Survey Research Design: Definition, How to Conduct a Survey & Examples


Survey research is a quantitative research method that involves collecting data from a sample of individuals using standardized questionnaires or surveys. The goal of survey research is to measure the attitudes, opinions, behaviors, and characteristics of a target population. Surveys can be conducted through various means, including phone, mail, online, or in-person.

If your project involves live interaction with numerous people in order to obtain important data, you should know the basic rules of survey research beforehand. Today we’ll talk about this research type, review a step-by-step guide on how to do survey research, and examine its main advantages and potential pitfalls. The following important questions will be discussed below:

  • Purpose and techniques of information collection.
  • Kinds of responses.
  • Analysis techniques, assumptions, and conclusions.

Do you wish to learn the best practices of conducting surveys? Stay with our research paper service and get prepared for some serious reading!

What Is Survey Research: Definition

Let’s define the notion of survey research first. It revolves around surveys you conduct to retrieve certain data from your respondents. These respondents are carefully selected from a population that, for particular reasons, possesses the data necessary for your research. For example, they may be witnesses of some event that you need to investigate. Surveys contain a set of predefined questions, closed- or open-ended. They can be sent to participants, who answer them and thus provide you with data for your research. There are many methods for organizing surveys and processing the obtained information.

Purpose of Survey Research Design

The purpose of survey research is to collect proper data and thus get insights for your research. You should pick participants with relevant experience so that you can get useful information from them. Questions in your survey should be formulated in a way that allows you to gather as much useful data as possible. The format of a survey should be adjusted to the situation so that your respondents are ready to give their answers; it can be a questionnaire sent over email or questions asked during a phone call.

Survey Research Methods

Which survey research method should you choose? Let’s review the most popular approaches and when to use them. There are two critical factors that define how a survey will be conducted:

  • Tool used to deliver questions
  • online: using web forms or email questionnaires.
  • phone: reaching out to respondents individually, sometimes using an automated service.
  • face-to-face: interviewing respondents in the real world, which makes room for more in-depth questions.
  • Time frame of the research
  • short-term periods.
  • long-term periods.

Let’s explore the time-related methods in detail.

Cross-Sectional Survey Research

The first type is cross-sectional survey research. This survey design involves collecting various insights from an audience within a specific, short time period. It is used for descriptive analysis of a subject, with the purpose of providing quick conclusions or assumptions, which is why this approach relies on fast data-gathering and processing techniques. Such surveys are typically implemented in sectors such as retail, education, and healthcare, where the situation tends to change fast, so it is important to obtain operational results as soon as possible.

Longitudinal Survey Research

A longitudinal survey, by contrast, collects data from the same sample several times over an extended period, which lets you track how opinions or behaviors change over time. Whichever time frame you choose, planning your survey research design beforehand is crucial, especially if you are pressed for time or have a limited budget. Collecting information using a properly designed survey is typically more effective and productive than a casually conducted study. Preparation of a survey design includes the following major steps:

  • Understand the aim of your research so that you can better plan the entire path of a survey and avoid obvious issues.
  • Pick a good sample from a population. Ensure precision of the results by selecting members who could provide useful insights and opinions.
  • Review available research methods. Decide on the one most suitable for your specific case.
  • Prepare a questionnaire. The selection of questions directly affects the quality of your longitudinal analysis, so make sure to pick good questions and avoid unnecessary ones to save time and reduce possible errors.
  • Analyze results and make conclusions.

Advantages of Survey Research

As a rule, survey research involves getting data from people with first-hand knowledge about the research subject. Therefore, when formulated properly, survey questions should provide some unique insights and thus describe the subject better. Other benefits of this approach include:

  • Minimum investment. Online and automated call services require very low investment per respondent.
  • Versatile sources. Data can be collected by numerous means, allowing more flexibility.
  • Safe for respondents. Anonymous surveys are secure, and respondents are more likely to answer honestly if they understand their responses will be confidential.

Types of Survey Research

Let’s review the main types of surveys. It is important to know the most popular templates so that you don’t have to develop your own from scratch for your specific case. Such studies are usually categorized by the following aspects:

  • Objectives.
  • Data source.
  • Methodology.

We’ll examine each of these aspects below, focusing on areas where certain types are used. 

Types of Survey Research Depending on Objective

Depending on your objective and the specifics of the subject’s context, the following survey research types can be used:

  • Predictive This approach involves asking questions whose likely response options are suggested automatically based on how the questions are formulated. As a result, it is often easier for respondents to provide their answers, since helpful suggestions are already available.
  • Exploratory This approach is focused more on the discovery of new ideas and insights rather than collecting statistically accurate information. The results can be difficult to categorize and analyze. But this approach is very useful for finding a general direction for further research.
  • Descriptive This approach helps to define and describe your respondents' opinions or behavior more precisely. By predefining certain categories and designing survey questions around them, you obtain statistical data. This descriptive research approach is often used at later research stages in order to better understand the meaning of insights obtained at the beginning.

Types of Survey Research Depending on Data Source

The following research survey types can be defined based on which sources you obtain the data from:

  • Primary In this case, you collect information directly from the original source, e.g., learning about a natural disaster from a survivor. You aren’t using any intermediaries and, as a result, don't get any information distorted or lost on its way. This is the way to obtain the most valid and trustworthy results, but at the same time it is often not so easy to access such sources.
  • Secondary This involves collecting data from existing published research on the same subject. Such information is easier to access, but at the same time it is usually too general and not tailored to your specific needs.

Types of Survey Research Depending on Methodology

Finally, let’s review survey research methodologies based on the format of retrieved and processed data. They can be:

  • Quantitative An approach that focuses on gathering numeric or measurable data from respondents. This provides enough material for statistical analysis, which then leads to meaningful conclusions. Collection of such data requires properly designed surveys that include numeric options, and it is important to take precautions to ensure that the data you’ve gathered is valid.
  • Qualitative Such surveys rely on opinions, impressions, reflections, and typical reactions of target groups. They should include open-ended questions so that respondents can give detailed answers and provide the information they consider most relevant. Qualitative research is used to understand, explain, or evaluate ideas or tendencies.

It is essential to differentiate these two kinds of research. That's why we prepared a special blog post about quantitative vs qualitative research .

How to Conduct a Survey Research: Main Steps

Now let’s find out how to do a survey step by step. Regardless of methods you use to design and conduct your survey, there are general guidelines that should be followed. The path is quite straightforward: 

  • Assess your goals and options for accessing necessary groups.
  • Formulate each question in a way that helps you obtain the most valuable data.
  • Plan and execute the distribution of the questions.
  • Process the results.

Let’s take a closer look at all these stages.

Step 1. Create a Clear Survey Research Question

Each survey research question should add some potential value to your expected results. Before formulating your questionnaire, it is better to invest some time in analyzing your target populations. This will allow you to form proper samples of respondents: big enough to yield insights, but not too big to manage. A good way to prepare questions is by constructing case studies for your subject. Analyzing case study examples in detail will help you understand which information about them is necessary.

Step 2. Choose a Type of Survey Research

As we’ve already learned, there are several different types of survey research. Starting with a close analysis of your subject, goals, and available sources will help you understand which kinds of questions should be distributed. As a researcher, you’ll also need to analyze the features of the selected group of respondents and pick a type that makes it easier to reach out to them. For example, if you need to question a group of elderly people, online forms would be less efficient than interviews.

Step 3. Distribute the Questionnaire for Your Survey Research

The next step of survey research is the most decisive one. Now you should execute the plan you’ve created earlier and conduct the questioning of the entire selected group. If this is a group assignment, ask your colleagues or peers for help, especially if you are dealing with a big group of respondents. It is important to stick to the initial scenario but leave some room for improvisation in case there are difficulties with reaching out to respondents. After you collect all the necessary responses, this data can be processed and analyzed.

Step 4. Analyze the Results of Your Research Survey

The data obtained during the survey research should be processed so that you can use it for making assumptions and conclusions. If it is qualitative, you should conduct a thematic analysis to find important ideas and insights that could confirm your theories or expand your knowledge of the subject. Quantitative data can be analyzed manually or with the help of software; the purpose is to extract dependencies and trends that confirm or refute existing assumptions.
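For the quantitative part of this step, even a short script can handle the basic processing. Below is a minimal sketch, using hypothetical closed-ended responses, that builds a frequency table of counts and percentages per answer category:

```python
from collections import Counter

# Hypothetical answers to one closed-ended survey question,
# coded as ordered categories.
responses = [
    "satisfied", "very satisfied", "neutral", "satisfied",
    "dissatisfied", "satisfied", "very satisfied", "neutral",
    "satisfied", "dissatisfied", "satisfied", "very satisfied",
]

counts = Counter(responses)
total = len(responses)

# Frequency table: count and percentage for each category.
for category in ("very satisfied", "satisfied", "neutral", "dissatisfied"):
    pct = 100 * counts[category] / total
    print(f"{category:15} {counts[category]:3d} {pct:5.1f}%")
```

From here, the same tallies can feed cross-tabulations or simple significance tests, depending on the assumptions you set out to check.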

Step 5. Save the Results of Your Survey Research

The final step is to compose a survey research paper in order to get your results in order. This way none of them will be lost, especially if you keep copies of the paper. Depending on your assignment and the stage you are at, it can be a dissertation, a thesis, or even an illustrative essay where you explain the subject to your audience. Each survey you’ve conducted must get a special section in your paper where you explain your methods and describe your results.

Survey Research Example

We have prepared a few research survey examples in case you need some real-world cases to illustrate the guidelines and tips provided above. Below is a sample research case with the population and the researchers' purposes defined.

Example of survey research design The Newtown Youth Initiative will conduct a qualitative survey to develop a program to mitigate alcohol consumption by adolescent citizens of Newtown. Previously, cultural anthropology research was performed to study mental constructs and understand young people's expectations from alcohol and their views on specific cultural values. Based on its results, a survey was designed to measure expectancies and cultural orientation among the adolescent population. A secure web page has been developed to conduct this survey and ensure the anonymity of respondents. The Newtown Youth Initiative will partner with schools to share the link to this page with students and engage them to participate. Statistical analysis of differences in expectancies and cultural orientation between drinkers and non-drinkers will be performed using the data from this survey.

Survey Research: Key Takeaways

Today, we have explored the notion of the research survey and reviewed the main features of this research activity and its usage in social sciences topics . Important techniques and tips have been reviewed, and a step-by-step guide for conducting such studies has been provided.


Found it difficult to reach out to your target group? Or are you just pressed with deadlines? We've got your back! Check out our writing services and leave a ‘ write paper for me ’ request. We are a team of skilled authors with vast experience in various academic fields.

Frequently Asked Questions About Survey Research

1. What is a market research survey?

A market research survey can help a company understand several aspects of their target market. It typically involves picking focus groups of customers and asking them questions in order to learn about demand for specific products or services and understand whether it grows. Such feedback would be crucial for a company’s development. It can help it to plan its further strategic steps.

2. How does survey research differ from experimental research methods?

The main difference between experimental and survey research is that the latter is field research, while experiments are typically performed in laboratory conditions. When conducting surveys, researchers don’t have full control over the process and must adapt to the specific traits of their target groups in order to obtain answers from them. Besides, the results of a survey study might be harder to quantify and turn into statistical values.

3. What is the difference between survey research and descriptive research?

The purpose of descriptive studies is to explain what the subject is and which features it has. Survey research may include descriptive information but is not limited to that. Typically it goes beyond descriptive statistics and includes qualitative research or advanced statistical methods used to draw inferences, find dependencies, or build trends. Descriptive methods, on the other hand, don’t necessarily involve questioning respondents; information may be obtained from other sources.

4. What is a good sample size for a survey?

It always depends on the specific case and the researcher’s goals. However, there are some general guidelines and best practices for this activity. A good maximum sample size is usually around 10% of the population, as long as this does not exceed 1,000 people. In any case, you should be mindful of your time and budget limitations when planning your actions. If you’ve got a team to help you, it might be possible to process more data.
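The rule of thumb above (roughly 10% of the population, capped at 1,000 respondents) can be expressed as a small helper function. This is just a sketch of the guideline; the function name and the cap parameter are illustrative, not a standard API:

```python
def rule_of_thumb_sample_size(population, cap=1000):
    """Rough guideline from above: about 10% of the population,
    but no more than `cap` people."""
    return min(round(0.10 * population), cap)

print(rule_of_thumb_sample_size(800))     # 80
print(rule_of_thumb_sample_size(5000))    # 500
print(rule_of_thumb_sample_size(50000))   # 1000
```

Formal sample-size calculations based on confidence level and margin of error are more precise, but this guideline is a reasonable starting point when planning time and budget.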


Joe Eckel is an expert on dissertation writing. He makes sure that each student gets valuable insights on composing A-grade academic writing.


Logo for Open Oregon Educational Resources

12. Survey design

Chapter outline.

  • What is a survey, and when should you use one? (14 minute read)
  • Collecting data using surveys (29 minute read)
  • Writing effective questions and questionnaires (38 minute read)
  • Bias and cultural considerations (22 minute read)

Content warning: examples in this chapter contain references to drug use, racism in politics, COVID-19, undocumented immigration, basic needs insecurity in higher education, school discipline, drunk driving, poverty, child sexual abuse, colonization and Global North/West hegemony, and ethnocentrism in science.

12.1 What is a survey, and when should you use one?

Learning objectives.

Learners will be able to…

  • Distinguish between survey as a research design and questionnaires used to measure concepts
  • Identify the strengths and weaknesses of surveys
  • Evaluate whether survey design fits with their research question

Students in my research methods classes often feel that surveys are self-explanatory. This feeling is understandable. Surveys are part of our everyday lives. Every time you call customer service, purchase a meal, or participate in a program, someone is handing you a survey to complete. Survey results are often discussed in the news, and perhaps you’ve even carried out a survey yourself. What could be so hard? Ask people a few quick questions about your research question and you’re done, right?

Students quickly learn that there is more to constructing a good survey than meets the eye. Survey design takes a great deal of thoughtful planning and often many rounds of revision, but it is worth the effort. As we’ll learn in this section, there are many benefits to choosing survey research as your data collection method particularly for student projects. We’ll discuss what a survey is, its potential benefits and drawbacks, and what research projects are the best fit for survey design.

Is survey research right for your project?

To answer this question, the first thing we need to do is distinguish between a survey and a questionnaire. They might seem like they are the same thing, and in normal non-research contexts, they are used interchangeably. In this textbook, we define a survey  as a research design in which a researcher poses a set of predetermined questions to an entire group, or sample , of individuals. That set of questions is the questionnaire , a research instrument consisting of a set of questions (items) intended to capture responses from participants in a standardized manner. Basically, researchers use questionnaires as part of survey research. Questionnaires are the tool. Surveys are one research design for using that tool.

Let’s contrast how survey research uses questionnaires with the other quantitative design we will discuss in this book— experimental design . Questionnaires in experiments are called pretests and posttests and they measure how participants change over time as a result of an intervention (e.g., a group therapy session) or a stimulus (e.g., watching a video of a political speech) introduced by the researcher. We will discuss experiments in greater detail in Chapter 13 , but if testing an intervention or measuring how people react to something you do sounds like what you want to do with your project, experiments might be the best fit for you.


Surveys, on the other hand, do not measure the impact of an intervention or stimulus introduced by the researcher. Instead, surveys look for patterns that already exist in the world based on how people self-report on a questionnaire. Self-report simply means that the participants in your research study are answering questions about themselves, regardless of whether they are presented on paper, electronically, or read aloud by the researcher. Questionnaires structure self-report data into a standardized format—with everyone receiving the exact same questions and answer choices in the same order [1] —which makes comparing data across participants much easier. Researchers using surveys try to influence their participants as little as possible because they want honest answers.
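One way to picture this standardization is to treat the questionnaire as a frozen data structure: every respondent receives the same items, with the same answer choices, in the same order. A minimal sketch (the class and field names here are illustrative, not from the text):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Item:
    text: str
    choices: tuple  # closed-ended options, identical for every respondent

@dataclass(frozen=True)
class Questionnaire:
    items: tuple  # fixed order: everyone sees the same sequence

likert = ("strongly agree", "agree", "disagree", "strongly disagree")

questionnaire = Questionnaire(items=(
    Item("I feel supported by my supervisor.", likert),
    Item("My workload is manageable.", likert),
))

# Each respondent's self-report maps item index -> chosen option,
# which keeps answers directly comparable across participants.
respondent_a = {0: "agree", 1: "disagree"}
respondent_b = {0: "strongly agree", 1: "agree"}
```

Because the instrument is immutable, every completed response can be compared item-for-item, which is exactly what makes survey data easy to aggregate.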

Questionnaires are completed by individual people, so the unit of observation is almost always individuals, rather than groups or organizations. Generally speaking, individuals provide the most informed data about their own lives and experiences, so surveys often also use individuals as the unit of analysis . Surveys are also helpful in analyzing dyads, families, groups, organizations, and communities, but regardless of the unit of analysis, the unit of observation for surveys is usually individuals. Keep this in mind as you think about sampling for your project.

In some cases, getting the most-informed person to complete your questionnaire may not be feasible. As we discussed in Chapter 2 and Chapter 6, ethical duties to protect clients and vulnerable community members mean student research projects often study practitioners and other less-vulnerable populations rather than clients and community members. The ethical supervision needed via the IRB to complete projects that pose significant risks to participants takes time and effort, and as a result, student projects often rely on key informants like clinicians, teachers, and administrators who are less likely to be harmed by the survey. Key informants are people who are especially knowledgeable about your topic. If your study is about nursing, you should probably survey nurses. These considerations are more thoroughly addressed in Chapter 10. Sometimes, participants complete surveys on behalf of people in your target population who are infeasible to survey for some reason. Some examples of key informants include a head of household completing a survey about family finances or an administrator completing a survey about staff morale on behalf of their employees. In this case, the survey respondent is a proxy, providing their best informed guess about the responses other people might have chosen if they were able to complete the survey independently. You are relying on an individual unit of observation (one person filling out a self-report questionnaire) and a group or organization unit of analysis (the family or organization the researcher wants to make conclusions about). Proxies are commonly used when the target population is not capable of providing consent or appropriate answers, as in young children and people with disabilities.

Proxies are relying on their best judgment of another person’s experiences, and while that is valuable information, it may introduce bias and error into the research process. Student research projects, due to time and resource constraints, often include sampling people with second-hand knowledge, and this is simply one of many common limitations of their findings. Remember, every project has limitations. Social work researchers look for the most favorable choices in design and methodology, as there are no perfect projects. If you are planning to conduct a survey of people with second-hand knowledge of your topic, consider reworking your research question to be about something they have more direct knowledge about and can answer easily. One common missed opportunity I see is student researchers who want to understand client outcomes (unit of analysis) by surveying practitioners (unit of observation). If a practitioner has a caseload of 30 clients, it’s not really possible to answer a question like “how much progress have your clients made?” on a survey. Would they just average all 30 clients together? Instead, design a survey that asks them about their education, professional experience, and other things they know about first-hand. By making your unit of analysis and unit of observation the same, you can ensure the people completing your survey are able to provide informed answers.

Researchers may introduce measurement error if the person completing the questionnaire does not have adequate knowledge or has a biased opinion about the phenomenon of interest. For instance, many schools of social work market themselves based on the rankings of social work programs published by US News and World Report. Last updated in 2019, the methodology for these rankings is simply to send out a survey to deans, directors, and administrators at schools of social work. No graduation rates, teacher evaluations, licensure pass rates, accreditation data, or other considerations are a part of these rankings. It’s literally a popularity contest in which each school is asked to rank the others on a scale of 1-5, and ranked by highest average score. What if an informant is unfamiliar with a school or has a personal bias against a school? [2] This could significantly skew results. One might also question the validity of such a questionnaire in assessing something as important and economically impactful as the quality of social work education. We might envision how students might demand and create more authentic measures of school quality.
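To see how thin that methodology is, the entire ranking procedure can be reproduced in a few lines: gather peer ratings on a 1-5 scale and sort schools by mean score. A sketch with hypothetical schools and ratings:

```python
# Hypothetical peer-assessment ratings (1-5) from survey respondents.
ratings = {
    "School A": [5, 4, 4, 5, 3],
    "School B": [3, 3, 4, 2, 4],
    "School C": [4, 5, 5, 4, 4],
}

# Each school's "rank" is just the mean of its peer ratings --
# no graduation rates, licensure pass rates, or accreditation data.
averages = {school: sum(r) / len(r) for school, r in ratings.items()}
ranked = sorted(averages.items(), key=lambda kv: kv[1], reverse=True)

for school, avg in ranked:
    print(f"{school}: {avg:.2f}")
```

A single unfamiliar or biased rater can shift a school's average noticeably, which is exactly the measurement-error concern raised above.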

In summary, survey design best fits with research projects that have the following attributes: 

  • Researchers plan to collect their own raw data, rather than secondary analysis of existing data.
  • Researchers have access to the most knowledgeable people (that you can feasibly and ethically sample) to complete the questionnaire.
  • Research question is best answered with quantitative methods.
  • Individuals are the unit of observation, and in many cases, the unit of analysis.
  • Researchers will try to observe things objectively and try not to influence participants to respond differently.
  • Research questions asks about indirect observables—things participants can self-report on a questionnaire.
  • There are valid, reliable, and commonly used scales (or other self-report measures) for the variables in the research question.


Strengths of survey methods

Researchers employing survey research as a research design enjoy a number of benefits. First, surveys are an excellent way to gather lots of information from many people. In a study by Blackstone (2013) [3] on older people’s experiences in the workplace , researchers were able to mail a written questionnaire to around 500 people who lived throughout the state of Maine at a cost of just over $1,000. This cost included printing copies of a seven-page survey, printing a cover letter, addressing and stuffing envelopes, mailing the survey, and buying return postage for the survey. We realize that $1,000 is nothing to sneeze at, but just imagine what it might have cost to visit each of those people individually to interview them in person. You would have to dedicate a few weeks of your life at least, drive around the state, and pay for meals and lodging to interview each person individually. Researchers can double, triple, or even quadruple their costs pretty quickly by opting for an in-person method of data collection over a mailed survey. Thus, surveys are relatively cost-effective.

Related to the benefit of cost-effectiveness is a survey’s potential for generalizability. Because surveys allow researchers to collect data from very large samples for a relatively low cost, survey methods lend themselves to probability sampling techniques, which we discussed in Chapter 10 . When used with probability sampling approaches, survey research is the best method to use when one hopes to gain a representative picture of the attitudes and characteristics of a large group. Unfortunately, student projects are quite often not able to take advantage of the generalizability of surveys because they use availability sampling rather than the more costly and time-intensive random sampling approaches that are more likely to elicit a representative sample. While the conclusions drawn from availability samples have far less generalizability, surveys are still a great choice for student projects and they provide data that can be followed up on by well-funded researchers to generate generalizable research.

Survey research is particularly adept at investigating indirect observables . Indirect observables are things we have to ask someone to self-report because we cannot observe them directly, such as people’s preferences (e.g., political orientation), traits (e.g., self-esteem), attitudes (e.g., toward immigrants), beliefs (e.g., about a new law), behaviors (e.g., smoking or drinking), or factual information (e.g., income). Unlike qualitative studies in which these beliefs and attitudes would be detailed in unstructured conversations, surveys seek to systematize answers so researchers can make apples-to-apples comparisons across participants. Surveys are so flexible because you can ask about anything, and the variety of questions allows you to expand social science knowledge beyond what is naturally observable.

Survey research also tends to be a reliable method of inquiry. This is because surveys are standardized, in that the same questions, phrased in exactly the same way, are posed to all participants. Other methods, such as qualitative interviewing, which we’ll learn about in Chapter 18, do not offer the same consistency that a quantitative survey offers. This is not to say that all surveys are always reliable. A poorly phrased question can cause respondents to interpret its meaning differently, which can reduce that question’s reliability. Assuming well-constructed questions and survey design, one strength of this methodology is its potential to produce reliable results.

The versatility of survey research is also an asset. Surveys are used by all kinds of people in all kinds of professions. They can measure anything that people can self-report. Surveys are also appropriate for exploratory, descriptive, and explanatory research questions (though exploratory projects may benefit more from qualitative methods). Moreover, they can be delivered in a number of flexible ways, including via email, mail, text, and phone. We will describe the many ways to implement a survey later on in this chapter. 

In sum, the following are benefits of survey research:

  • Cost-effectiveness
  • Generalizability
  • Reliability
  • Versatility


Weaknesses of survey methods

As with all methods of data collection, survey research also comes with a few drawbacks. First, while one might argue that surveys are flexible in the sense that you can ask any kind of question about any topic you want, once the survey is given to the first participant, there is nothing you can do to change the survey without biasing your results. Because surveys aim to minimize the amount of influence that a researcher has on the participants, everyone gets the same questionnaire. Let’s say you mail a questionnaire out to 1,000 people and then discover, as responses start coming in, that your phrasing on a particular question seems to be confusing a number of respondents. At this stage, it’s too late for a do-over or to change the question for the respondents who haven’t yet returned their questionnaires. When conducting qualitative interviews or focus groups, on the other hand, a researcher can provide respondents further explanation if they’re confused by a question and can tweak questions as they learn more about how respondents seem to understand them. Survey researchers often ask colleagues, students, and others to pilot test their questionnaire and catch any errors prior to sending it to participants; however, once researchers distribute the survey to participants, there is little they can do to change anything.

Depth can also be a problem with surveys. Survey questions are standardized; thus, it can be difficult to ask anything other than very general questions that a broad range of people will understand. Because of this, survey results may not provide as detailed an understanding as results obtained using methods of data collection that allow a researcher to more comprehensively examine the topic being studied. Let’s say, for example, that you want to learn something about voters’ willingness to elect an African American president. General Social Survey respondents were asked, “If your party nominated an African American for president, would you vote for him if he were qualified for the job?” (Smith, 2009). [4] Respondents were then asked to respond either yes or no to the question. But what if someone’s opinion was more complex than could be captured with a simple yes or no? What if, for example, a person was willing to vote for an African American man, but only if that person was conservative, moderate, anti-abortion, or antiwar? Then we would miss out on that additional detail when the participant responded “yes” to our question. Of course, you could add a question to your survey about moderate vs. radical candidates, but could you do that for all of the relevant attributes of candidates for all people? Moreover, how do you know that moderate or antiwar means the same thing to everyone who participates in your survey? Without having a conversation with someone and asking them follow-up questions, survey research can lack the detail needed to understand how people truly think.

In sum, potential drawbacks to survey research include the following:

  • Inflexibility
  • Lack of depth
  • Problems specific to cross-sectional surveys, which we will address in the next section.

Secondary analysis of survey data

This chapter is designed to help you conduct your own survey, but that is not the only option for social work researchers. Look back to Chapter 2 and recall our discussion of secondary data analysis. As we discussed previously, using data collected by another researcher can have a number of benefits. Well-funded researchers have the resources to recruit a large representative sample and ensure their measures are valid and reliable prior to sending them to participants. Before you get too far into designing your own data collection, make sure there are no existing data sets out there that you can use to answer your question. We refer you to Chapter 2 for a full discussion of the strengths and challenges of secondary analysis of survey data.

Key Takeaways

  • Strengths of survey research include its cost-effectiveness, generalizability, reliability, and versatility.
  • Weaknesses of survey research include inflexibility and lack of potential depth. There are also weaknesses specific to cross-sectional surveys, the most common type of survey.

If you are using quantitative methods in a student project, it is very likely that you are going to use survey design to collect your data.

  • Check to make sure that your research question and study fit best with survey design, using the criteria in this section.
  • Remind yourself of any limitations to generalizability based on your sampling frame.
  • Refresh your memory on the operational definitions you will use for your dependent and independent variables.

12.2 Collecting data using surveys

  • Distinguish between cross-sectional and longitudinal surveys
  • Identify the strengths and limitations of each approach to collecting survey data, including the timing of data collection and how the questionnaire is delivered to participants

As we discussed in the previous chapter, surveys are versatile and can be shaped and suited to most topics of inquiry. While that makes surveys a great research tool, it also means there are many options to consider when designing your survey. The two main considerations in designing surveys are how many times researchers will collect data from participants and how researchers will contact participants and record their responses to the questionnaire.


Cross-sectional surveys: A snapshot in time

Think back to the last survey you took. Did you respond to the questionnaire once or did you respond to it multiple times over a long period? Cross-sectional surveys are administered only one time. Chances are the last survey you took was a cross-sectional survey—a one-shot measure of a sample using a questionnaire. And chances are if you are conducting a survey to collect data for your project, it will be cross-sectional simply because it is more feasible to collect data once than multiple times.

Let’s take a very recent example, the COVID-19 pandemic. Enriquez and colleagues (2021) [5] wanted to understand the impact of the pandemic on undocumented college students’ academic performance, attention to academics, financial stability, mental and physical health, and other factors. In cooperation with offices of undocumented student support at eighteen campuses in California, the researchers emailed undocumented students a few times from March through June of 2020 and asked them to participate in their survey via an online questionnaire. Their survey presents a compelling look at how COVID-19 worsened existing economic inequities in this population.

Strengths and weaknesses of cross-sectional surveys

Cross-sectional surveys are great. They take advantage of many of the strengths of survey design. They are easy to administer since you only need to measure your participants once, which makes them highly suitable for student projects. Keeping track of participants for multiple measures takes time and energy, two resources always under constraint in student projects. Conducting a cross-sectional survey simply requires collecting a sample of people and getting them to fill out your questionnaire—nothing more.

That convenience comes with a tradeoff. When you only measure people at one point in time, you can miss a lot. The events, opinions, behaviors, and other phenomena that such surveys are designed to assess don’t generally remain the same over time. Because nomothetic causal explanations seek a general, universal truth, surveys conducted a decade ago do not represent what people think and feel today, any more than they represent what people thought twenty years ago. In student research projects, this weakness is often compounded by the use of availability sampling, which further limits the generalizability of the results to places and times beyond the sample collected by the researcher. Imagine generalizing results on the use of telehealth in social work from before the COVID-19 pandemic, or on managers’ willingness to allow employees to telecommute. Both as a result of shocks to the system—like COVID-19—and the linear progression of cultural, economic, and social change—like human rights movements—cross-sectional surveys can never truly give us a timeless causal explanation. In our example about undocumented students during COVID-19, you can say something about the way things were in the moment you administered your survey, but it is difficult to know whether things remained that way for long afterward or to describe patterns that go back far in time.

Of course, just as society changes over time, so do people. Because cross-sectional surveys only measure people at one point in time, they have difficulty establishing cause-and-effect relationships for individuals: they cannot clearly establish whether the cause came before the effect. If your research question were about how school discipline (your independent variable) impacts substance use (your dependent variable), you would want to make sure that any changes in your dependent variable, substance use, came after changes in school discipline. That is, if your hypothesis says school discipline causes increases in substance use, you must establish that school discipline came first and increases in substance use came afterwards. However, it is perhaps just as likely that increased substance use causes increases in school discipline. If you sent a cross-sectional survey to students asking them about their substance use and disciplinary record, you would get back something like “tried drugs or alcohol 6 times” and “has been suspended 5 times.” You could see whether similar patterns existed in other students, but you wouldn’t be able to tell which was the cause and which was the effect.
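To make the direction-of-causality problem concrete, here is a minimal sketch in Python with invented numbers: the correlation statistic you would compute from a one-shot survey is perfectly symmetric, so nothing in the data itself says which variable is the cause.

```python
# Hypothetical one-time survey responses from five students:
# number of suspensions and number of times they tried drugs or alcohol.
suspensions = [0, 1, 2, 5, 6]
substance_use = [1, 2, 2, 5, 6]

def pearson_r(x, y):
    """Plain Pearson correlation, written out to avoid extra dependencies."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# The statistic is symmetric: swapping "cause" and "effect" changes nothing.
r_xy = pearson_r(suspensions, substance_use)
r_yx = pearson_r(substance_use, suspensions)
print(round(r_xy, 3), r_xy == r_yx)
```

Establishing temporal order is a matter of research design, such as the longitudinal surveys this chapter turns to shortly, not of choosing a different statistic.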

Because of these limitations, cross-sectional surveys are limited in how well they can establish whether a nomothetic causal relationship is true or not. Surveys are still a key part of establishing causality. But they need additional help and support to make causal arguments. That might come from combining data across surveys in meta-analyses and systematic reviews, integrating survey findings with theories that explain causal relationships among variables in the study, as well as corroboration from research using other designs, theories, and paradigms. Scientists can establish causal explanations, in part, based on survey research. However, in keeping with the assumptions of postpositivism, the picture of reality that emerges from survey research is only our best approximation of what is objectively true about human beings in the social world. Science requires a multi-disciplinary conversation among scholars to continually improve our understanding.


Longitudinal surveys: Measuring change over time

One way to overcome this sometimes-problematic aspect of cross-sectional surveys is to administer a longitudinal survey. Longitudinal surveys enable a researcher to make observations over some extended period of time. There are several types of longitudinal surveys, including trend, panel, and cohort surveys. We’ll discuss all three types here, along with retrospective surveys, which fall somewhere in between cross-sectional and longitudinal surveys.

The first type of longitudinal survey is called a trend survey. The main focus of a trend survey is, perhaps not surprisingly, trends. Researchers conducting trend surveys are interested in how people in a specific group change over time. Each time researchers gather data, they survey different people from the identified group because they are interested in trends in the whole group, rather than changes in specific individuals. Let’s look at an example.

The Monitoring the Future Study is a trend study that describes the substance use of high school students in the United States. It’s conducted annually by the National Institute on Drug Abuse (NIDA). Each year, NIDA distributes surveys to students in high schools around the country to understand how substance use and abuse in that population change over time. In recent waves, fewer high school students reported using alcohol in the past month than at any point over the last 20 years—a fact that often surprises people because it cuts against the stereotype of adolescents engaging in ever-riskier behaviors. Nevertheless, recent data also reflected increased use of e-cigarettes and the popularity of e-cigarettes with no nicotine over those with nicotine. By tracking these data points over time, we can better target substance abuse prevention programs toward the current issues facing the high school population.
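Because each wave of a trend study samples different people, the analysis happens at the group level: each year contributes one aggregate estimate, and change is read across waves. A toy sketch with invented percentages (not actual Monitoring the Future figures):

```python
# Invented past-month alcohol-use percentages for four survey waves.
# Each value summarizes a *different* sample of students that year.
past_month_alcohol = {2017: 33.2, 2018: 30.2, 2019: 29.3, 2020: 28.5}

years = sorted(past_month_alcohol)
# Year-over-year change in the group-level estimate.
changes = {
    year: round(past_month_alcohol[year] - past_month_alcohol[prev], 1)
    for prev, year in zip(years, years[1:])
}
print(changes)
```

No individual appears twice in the data, so nothing can be said about whether any particular student drank more or less; only the group trend is visible.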

Unlike trend surveys, panel surveys require that the same people participate in the survey each time it is administered. As you might imagine, panel studies can be difficult and costly. Imagine trying to administer a survey to the same 100 people every year for 5 years in a row. Keeping track of where respondents live, when they move, and when they change phone numbers takes resources that researchers often don’t have. However, when researchers do have the resources to carry out a panel survey, the results can be quite powerful. The Youth Development Study (YDS), administered from the University of Minnesota, offers an excellent example of a panel study.

Since 1988, YDS researchers have administered an annual survey to the same 1,000 people. Study participants were in ninth grade when the study began, and they are now in their thirties. Several hundred papers, articles, and books have been written using data from the YDS. One of the major lessons learned from this panel study is that work has a largely positive impact on young people (Mortimer, 2003). [6] Contrary to popular beliefs about the impact of work on adolescents’ school performance and transition to adulthood, work increases confidence, enhances academic success, and prepares students for success in their future careers. Without this panel study, we may not be aware of the positive impact that working can have on young people.

Another type of longitudinal survey is a cohort survey. In a cohort survey, the participants have a defining characteristic that the researcher is interested in studying. The same people don’t necessarily participate from year to year, but all participants must meet whatever categorical criteria fulfill the researcher’s primary interest. Common cohorts that researchers study include people of particular generations or people born around the same time period, graduating classes, people who began work in a given industry at the same time, or perhaps people who have some specific historical experience in common. An example of this sort of research can be seen in Lindert and colleagues’ (2020) [7] work on healthy aging in men. Their article is a secondary analysis of longitudinal data collected as part of the Veterans Affairs Normative Aging Study in 1985, 1988, and 1991.


Strengths and weaknesses of longitudinal surveys

All three types of longitudinal surveys share the strength that they permit a researcher to make observations over time. Whether a major world event takes place or participants mature, researchers can effectively capture the subsequent potential changes in the phenomenon or behavior of interest. This is the key strength of longitudinal surveys—their ability to establish temporality needed for nomothetic causal explanations. Whether your project investigates changes in society, communities, or individuals, longitudinal designs improve on cross-sectional designs by providing data at multiple points in time that better establish causality.

Of course, all of that extra data comes at a high cost. If a panel survey takes place over ten years, the research team must keep track of every individual in the study for those ten years, ensuring they have current contact information for their sample the whole time. Consider one study that followed people convicted of driving under the influence of drugs or alcohol (Kleschinsky et al., 2009). [8] It took an average of 8.6 contacts for participants to complete follow-up surveys, and while this was a difficult-to-reach population, researchers engaging in longitudinal research must prepare for considerable time and expense in tracking participants. Keeping in touch with a participant for a prolonged period likely requires building participant motivation to stay in the study, maintaining contact at regular intervals, and providing monetary compensation. Panel studies are not the only costly longitudinal design. Trend studies need to recruit a new sample every time they collect a new wave of data, at additional cost and time.
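To see why tracking costs grow, it helps to look at the retention arithmetic. A hedged sketch, assuming each wave of a hypothetical panel yields a set of returning participant IDs:

```python
# Hypothetical participant IDs returned at each wave of a small panel.
wave1 = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
wave2 = {1, 2, 3, 5, 6, 7, 9, 10}
wave3 = {1, 2, 5, 6, 9, 10}

def retention_rate(baseline, wave):
    """Share of the original sample still responding at a later wave."""
    return len(baseline & wave) / len(baseline)

for label, wave in [("wave 2", wave2), ("wave 3", wave3)]:
    print(label, retention_rate(wave1, wave))

# Attrition compounds: only respondents present in *every* wave can be
# used for within-person change, so the analytic sample shrinks further.
complete_cases = wave1 & wave2 & wave3
print("complete cases:", len(complete_cases))
```

Every percentage point of attrition means more reminder calls, address updates, and incentives, which is exactly the expense the Kleschinsky et al. figure of 8.6 contacts per follow-up hints at.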

In my years as a research methods instructor, I have never seen a longitudinal survey design used in a student research project because students do not have enough time to complete them. Cross-sectional surveys are simply the most convenient and feasible option. Nevertheless, social work researchers with more time to complete their studies use longitudinal surveys to understand causal relationships that they cannot manipulate themselves. A researcher could not ethically experiment on participants by assigning a jail sentence or relapse, but longitudinal surveys allow us to systematically investigate such sensitive phenomena ethically. Indeed, because longitudinal surveys observe people in everyday life, outside of the artificial environment of the laboratory (as in experiments), the generalizability of longitudinal survey results to real-world situations may make them superior to experiments, in some cases.

Table 12.1 summarizes these three types of longitudinal surveys.

Table 12.1 Types of longitudinal surveys
Trend: Researcher examines changes in trends over time; the same people do not necessarily participate in the survey more than once.
Panel: Researcher surveys the exact same sample several times over a period of time.
Cohort: Researcher identifies a defining characteristic and then regularly surveys people who have that characteristic.

Retrospective surveys: Good, but not the best of both worlds

Retrospective surveys try to strike a middle ground between cross-sectional and longitudinal surveys. They are similar to other longitudinal studies in that they deal with changes over time, but like a cross-sectional study, data are collected only once. In a retrospective survey, participants are asked to report events from the past. By having respondents report past behaviors, beliefs, or experiences, researchers are able to gather longitudinal-like data without actually incurring the time or expense of a longitudinal survey. Of course, this benefit must be weighed against the possibility that people’s recollections of their pasts may be faulty. Imagine that you are participating in a survey that asks you to respond to questions about your feelings on Valentine’s Day. As last Valentine’s Day can’t be more than 12 months ago, there is a good chance that you are able to provide a pretty accurate report of how you felt. Now let’s imagine that the researcher wants to know how last Valentine’s Day compares to previous Valentine’s Days, so the survey asks you to report on the preceding six Valentine’s Days. How likely is it that you will remember how you felt at each one? Will your responses be as accurate as they might have been if your data were collected once a year rather than all at once today? The main limitation of retrospective surveys is that they are not as reliable as cross-sectional or longitudinal surveys. That said, retrospective surveys are a feasible way to collect longitudinal-like data when the researcher only has access to the population once, and for this reason, they may be worth the drawback of greater risk of bias and error in the measurement process.

Because quantitative research seeks to build nomothetic causal explanations, it is important to determine the order in which things happen. When using survey design to investigate causal relationships between variables in a research question, longitudinal surveys are certainly preferable because they can track changes over time and therefore provide stronger evidence for cause-and-effect relationships. As we discussed, however, the time and cost required to administer a longitudinal survey can be prohibitive, and most survey research in the scholarly literature is cross-sectional because it is more feasible to collect data once. Well-designed cross-sectional surveys can provide important evidence for a causal relationship, even if it is imperfect. Once you decide how many times you will collect data from your participants, the next step is to figure out how to get your questionnaire in front of participants.


Self-administered questionnaires

If you are planning to conduct a survey for your research project, chances are you have thought about how you might deliver your survey to participants. If you don’t have a clear picture yet, look back at your work from Chapter 11 on the sampling approach for your project. How are you planning to recruit participants from your sampling frame? If you are considering contacting potential participants via phone or email, perhaps you want to collect your data using a phone or email survey attached to your recruitment materials. If you are planning to collect data from students, colleagues, or other people you most commonly interact with in person, maybe you want to consider a pen-and-paper survey to collect your data conveniently. As you review the different approaches to administering surveys below, consider how each one matches with your sampling approach and the contact information you have for study participants. Ensure that your sampling approach is feasible to conduct before building your survey design from it. For example, if you are planning to administer an online survey, make sure you have email addresses to which you can send your questionnaire or permission to post your survey to an online forum.

Surveys are a versatile research approach. Survey designs vary not only in terms of when they are administered but also in terms of how they are administered. One common way to collect data is in the form of self-administered questionnaires. Self-administered means that the research participant completes the questions independently, usually in writing. Paper questionnaires can be delivered to participants via mail or in person whenever you see your participants. Generally, student projects use in-person collection of paper questionnaires, as mail surveys require physical addresses, postage costs, and waiting for the mail. It is common for academic researchers to administer surveys in large social science classes, so perhaps you have taken a survey that was given to you in person during an undergraduate class. Those professors were taking advantage of the same convenience sampling approach that student projects often do. If everyone in your sampling frame is in one room, going into that room and giving them a quick paper survey to fill out is a feasible and convenient way to collect data. Availability sampling may involve asking your sampling frame to complete your study when they naturally meet—colleagues at a staff meeting, students in the student lounge, professors in a faculty meeting—and self-administered questionnaires are one way to take advantage of this natural grouping of your target population. Try to pick a time and situation when people have the downtime needed to complete your questionnaire, and you can maximize the likelihood that people will participate in your in-person survey. Of course, this convenience may come at the cost of privacy and confidentiality. If your survey addresses sensitive topics, participants may alter their responses because they are in close proximity to other participants while they complete the survey.
Whether participants feel self-conscious or talk about their answers with one another, anything that alters participants’ honest responses introduces bias or error into your measurement of the variables in your research question.

Because student research projects often rely on availability sampling, collecting data using paper surveys from whoever in your sampling frame is convenient makes sense because the results will be of limited generalizability. But for researchers who aim to generalize (and students who want to publish their study!), self-administered surveys may be better distributed via the mail or electronically. While it is very unusual for a student project to send a questionnaire via the mail, this method is used quite often in the scholarly literature, and for good reason. Survey researchers who deliver their surveys via postal mail often provide some advance notice to respondents about the survey to get people thinking and preparing to complete it. They may also follow up with their sample a few weeks after their survey has been sent out. This can be done not only to remind those who have not yet completed the survey to please do so but also to thank those who have already returned the survey. Most survey researchers agree that this sort of follow-up is essential for improving mailed surveys’ return rates (Babbie, 2010). [6] Other helpful tools to increase response rates are to create an attractive and professional survey, offer monetary incentives, and provide a pre-addressed, stamped return envelope. These are also effective for other types of surveys.

While snail mail may not be feasible for student projects, it is increasingly common for student projects and social science projects to use email and other modes of online delivery, like social media, to collect responses to a questionnaire. Researchers like online delivery for many reasons. It’s quicker than knocking on doors in a neighborhood for an in-person survey or waiting for mailed surveys to be returned. It’s cheap, too. There are many free tools like Google Forms and SurveyMonkey (which includes a premium option). While you are affiliated with a university, you may have access to commercial research software like REDCap or Qualtrics, which provide much more advanced tools for collecting survey data than free options. Online surveys can take advantage of computer-mediated data collection by playing a video before asking a question, tracking how long participants take to answer each question, and making sure participants don’t fill out the survey more than once (to name a few examples). Moreover, survey data collected via online forms can be exported for analysis in spreadsheet software like Google Sheets or Microsoft Excel or statistics software like SPSS or JASP, a free and open-source alternative to SPSS. While the exported data still need to be checked before analysis, online distribution saves you the trouble of manually inputting every response a participant writes down on a paper survey into a computer for analysis.
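As a sketch of that export-and-analyze step (the file layout and column names here are invented, and real exports vary by platform), responses downloaded from an online tool can be tallied in a few lines of Python:

```python
import csv
import io
from collections import Counter

# A tiny stand-in for a CSV file exported from an online survey tool.
# Column names are hypothetical; check your platform's actual export format.
exported = io.StringIO(
    "respondent_id,q1_satisfaction,q2_would_recommend\n"
    "1,Agree,Yes\n"
    "2,Strongly agree,Yes\n"
    "3,Disagree,No\n"
    "4,Agree,Yes\n"
)

rows = list(csv.DictReader(exported))
counts = Counter(row["q1_satisfaction"] for row in rows)
print(len(rows), "responses")
print(counts.most_common())  # frequency table for one closed-ended item
```

The same file opens directly in Google Sheets, Excel, SPSS, or JASP; the point is that no one has to retype answers by hand, though you should still check the export for duplicates and malformed rows before analysis.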

The process of collecting data online depends on your sampling frame and approach to recruitment. If your project plans to reach out to people via email to ask them to participate in your study, you should attach your survey to your recruitment email. You already have their attention, and you may not get it again (even if you remind them). Think pragmatically. You will need access to the email addresses of people in your sampling frame. You may be able to piece together a list of email addresses based on public information (e.g., faculty email addresses are on their university webpage, practitioner emails are in marketing materials). In other cases, you may know of a pre-existing list of email addresses to which your target population subscribes (e.g., all undergraduate students in a social work program, all therapists at an agency), and you will need to gain the permission of the list’s administrator to recruit using the email platform. Other projects will identify an online forum in which their target population congregates and recruit participants there. For example, your project might identify a Facebook group used by students in your social work program or practitioners in your local area to distribute your survey. Of course, you can post a survey to your personal social media account (or one you create for the survey), but depending on your question, you will need a detailed plan on how to reach participants with enough relevant knowledge about your topic to provide informed answers to your questionnaire.

Many of the suggestions provided earlier to improve the response rate of hard-copy questionnaires also apply to online questionnaires, including developing an attractive survey and sending reminder emails. One challenge not present in mail surveys is the spam filter or junk mail box. While people will at least glance at recruitment materials sent via mail, email programs may automatically filter out recruitment emails so participants never see them at all. While the financial incentives that can be provided online differ from those that can be given in person or by mail, online survey researchers can still offer completion incentives to their respondents. Over the years, I’ve taken numerous online surveys. Often, they did not come with any incentive other than the joy of knowing that I’d helped a fellow social scientist do their job. However, some surveys have their perks. One survey offered a coupon code for $30 off any order at a major online retailer, and another offered the chance to be entered into a lottery with other study participants to win a larger gift, such as a $50 gift card or a tablet computer. Student projects should not pay participants unless they have grant funding to cover that cost, and there should be no expectation of any out-of-pocket costs for students to complete their research project.

One area in which online surveys are less suitable than mail or in-person surveys is when your target population includes individuals with limited, unreliable, or no access to the internet or individuals with limited computer skills. For these groups, an online survey is inaccessible. At the same time, online surveys offer the most feasible way to collect data anonymously. By posting recruitment materials to a Facebook group or a list of practitioners at an agency, you can avoid collecting identifying information from people who participated in your study. For studies that address sensitive topics, online surveys also offer the opportunity to complete the survey privately (again, assuming participants have access to a phone or personal computer). If you have a person’s email address or physical address, or have met them in person, your participants are not anonymous; but if you need to collect data anonymously, online tools offer a feasible way to do so.

The best way to collect data using self-administered questionnaires depends on numerous factors. The strengths and weaknesses of in-person, mail, and electronic self-administered surveys are reviewed in Table 12.2. Ultimately, you must make the best decision based on its congruence with your sampling approach and what you can feasibly do. Decisions about survey design should be done with a deep appreciation for your study’s target population and how your design choices may impact their responses to your survey.

Table 12.2 Strengths and weaknesses of delivery methods for self-administered questionnaires
Cost. In-person: it’s easy if your participants congregate in an accessible location, but costly to go door-to-door to collect surveys. Mail: it’s too expensive for unfunded projects but a cost-effective option for funded projects. Electronic: it’s free and easy to use online survey tools.
Time. In-person: it’s easy if your participants congregate in an accessible location, but time-consuming to go door-to-door to collect surveys. Mail: it can take a while for mail to travel. Electronic: delivery is instantaneous.
Likelihood of response. In-person: it can be harder to ignore someone in person. Mail: it is easy to ignore junk mail and solicitations. Electronic: it’s easy to ignore junk mail, and a spam filter may block you.
Privacy and anonymity. In-person: it is very difficult to provide anonymity, and people may have to respond in a public place rather than privately in a safe place. Mail: it cannot provide true anonymity, as other household members may see participants’ mail, but people can likely respond privately in a safe place. Electronic: it can collect data anonymously, and people can respond privately in a safe place.
Reach. In-person: by going where your participants already gather, you increase your likelihood of getting responses. Mail: it reaches those without internet but misses those who change addresses often (e.g., college students). Electronic: it misses those who change phone numbers or email addresses often or don’t use the internet, but it reaches online communities.
Interactivity. In-person: paper questionnaires are not interactive. Mail: paper questionnaires are not interactive. Electronic: electronic questionnaires can include multimedia elements, interactive questions, and varied response options.
Data entry. In-person: researcher inputs data manually. Mail: researcher inputs data manually. Electronic: survey software inputs data automatically.


Quantitative interviews: Researcher-administered questionnaires

There are some cases in which it is not feasible to provide a written questionnaire to participants, either on paper or digitally. In this case, the questionnaire can be administered verbally by the researcher to respondents. Rather than the participant reading questions independently on paper or digital screen, the researcher reads questions and answer choices aloud to participants and records their responses for analysis. Another word for this kind of questionnaire is an interview schedule . It’s called a schedule because each question and answer is posed in the exact same way each time.

Consistency is key in quantitative interviews. By presenting each question and answer option in exactly the same manner to each interviewee, the researcher minimizes the potential for the interviewer effect, which encompasses any changes in interviewee responses based on how or when the researcher presents question-and-answer options. Additionally, because survey questions are closed-ended, in-person interviews can be video recorded and the researcher can typically take notes without distracting the interviewee; recordings and notes are helpful for identifying how participants respond to the survey and which questions might be confusing.

Quantitative interviews can take place over the phone or in person. Phone surveys are often conducted by political polling firms to understand how the electorate feels about certain candidates or policies. In both cases, researchers verbally pose questions to participants. For many years, live-caller polls (a live human being calling participants in a phone survey) were the gold standard in political polling. Indeed, phone surveys were excellent for drawing representative samples before mobile phones became widespread. Unlike landlines, cell phone numbers are portable across carriers, are associated with individuals rather than households, and retain their area codes when people move to a new geographic area. For this reason, many political pollsters have moved away from random-digit phone dialing and toward a mix of data collection strategies, such as text-based surveys or online panels, to recruit a representative sample and produce generalizable results for the target population (Silver, 2021). [9]

I guess I should admit that I often decline to participate in phone studies when I am called. In my defense, it’s usually just a customer service survey! My point is that it is easy and even socially acceptable to abruptly hang up on an unwanted caller asking you to participate in a survey, and given the high incidence of spam calls, many people do not pick up the phone for numbers they do not know. We will discuss response rates in greater detail at the end of the chapter. One of the benefits of phone surveys is that a person can complete them in their home or a safe place. At the same time, a distracted participant who is cooking dinner, tending to children, or driving may not provide accurate answers to your questions. Phone surveys make it difficult to control the environment in which a person answers your survey. When administering a phone survey, the researcher can record responses on a paper questionnaire or directly into a computer program. For large projects in which many interviews must be conducted by research staff, computer-assisted telephone interviewing (CATI) ensures that each question and answer option is presented the same way and input into the computer for analysis. For student projects, you can read from a digital or paper copy of your questionnaire and record participants’ responses in a spreadsheet program like Excel or Google Sheets.
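For a student project recording phone-survey responses by hand, even a small script can stand in for CATI-style data entry. The sketch below appends each completed interview to a CSV file that Excel or Google Sheets can open; the file name, question IDs, and response categories are invented for illustration.

```python
import csv

# Hypothetical interview schedule: each question ID maps to the
# closed-ended response the interviewer records for a participant.
FIELDNAMES = ["participant_id", "q1_drinks_per_week", "q2_housing"]

def record_response(path, response):
    """Append one completed interview as a row in a CSV file."""
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDNAMES)
        if f.tell() == 0:  # brand-new file: write the header row first
            writer.writeheader()
        writer.writerow(response)

record_response("responses.csv", {
    "participant_id": 1,
    "q1_drinks_per_week": "1-2",
    "q2_housing": "rent",
})
```

Because each row uses the same fixed field names, the resulting spreadsheet stays consistent across interviews, which mirrors the consistency goal of an interview schedule.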

Interview schedules must be administered in such a way that the researcher asks the same question the same way each time. While questions on self-administered questionnaires may create an impression based on the way they are presented, having a researcher pose the questions verbally introduces additional variables that might influence a respondent. Controlling one’s wording, tone of voice, and pacing can be difficult over the phone, but it is even more challenging in person, because the researcher must also control non-verbal expressions and behaviors that may bias survey respondents. Even a slight shift in emphasis or wording may bias the respondent to answer differently. As we’ve mentioned earlier, consistency is key with quantitative data collection, and human beings are not necessarily known for their consistency. But what happens if a participant asks the researcher a question? Unlike self-administered questionnaires, quantitative interviews allow the participant to speak directly with the researcher if they need more information about a question. While this can help participants respond accurately, it can also introduce inconsistencies in how the survey is administered to each participant. Ideally, the researcher should draft sample responses to provide to participants who are confused by certain survey items. The strengths and weaknesses of phone and in-person quantitative interviews are summarized in Table 12.3 below.

Table 12.3 Strengths and weaknesses of delivery methods for quantitative interviews
  • Cost. In-person: easy if your participants congregate in an accessible location, but costly to go door-to-door to collect surveys. Phone: calls are free or low-cost.
  • Time. Both: quantitative interviews take a long time because each question must be read aloud to each participant.
  • Ease of ignoring. In-person: it can be harder to ignore someone in person. Phone: it is easy to ignore unwanted or unexpected calls.
  • Privacy. In-person: it is very difficult to provide anonymity, and people will have to respond in a public place rather than privately in a safe place. Phone: it is difficult for the researcher to control the context in which the participant responds, which might be private or public, safe or unsafe.
  • Reach. In-person: by going where your participants already gather, you increase your likelihood of getting responses. Phone: it is easy for potential participants to ignore unwanted or unexpected calls.
  • Complexity. Both: interview schedules are kept simple because questions are read aloud.
  • Data entry. Both: the researcher inputs data manually.

Students using survey design should settle on a delivery method that presents the most favorable tradeoff between strengths and challenges for their unique context. One key consideration is your sampling approach. If you already have a participant on the phone and they agree to be part of your sample, you may as well ask them your survey questions right then, if they are able to respond. These feasibility concerns make in-person quantitative interviews a poor fit for student projects: it is far easier and quicker to distribute paper surveys to a group of people than it is to administer the survey verbally to each participant individually. Ultimately, you are the one who has to carry out your research design. Make sure you can actually follow your plan!

  • Time is a factor in determining what type of survey a researcher administers; cross-sectional surveys are administered at one point in time, and longitudinal surveys at multiple points in time.
  • Retrospective surveys offer some of the benefits of longitudinal research while only collecting data once but may be less reliable.
  • Self-administered questionnaires may be delivered in-person, online, or via mail.
  • Interview schedules are used with in-person or phone surveys (a.k.a. quantitative interviews).
  • Each way to administer surveys comes with benefits and drawbacks.

In this section, we assume that you are using a cross-sectional survey design. But how will you deliver your survey? Recall the sampling approach you developed in Chapter 10. Consider the following questions when evaluating delivery methods for surveys.

  • Can you attach your survey to your recruitment emails, calls, or other contacts with potential participants?
  • What contact information (e.g., phone number, email address) do you need to deliver your survey?
  • Do you need to maintain participant anonymity?
  • Is there anything unique about your target population or sampling frame that may impact survey research?

Imagine you are a participant in your survey.

  • Beginning with the first contact for recruitment into your study and ending with a completed survey, describe each step of the data collection process from the perspective of a person responding to your survey. You should be able to provide a pretty clear timeline of how your survey will proceed at this point, even if some of the details eventually change.

12.3 Writing effective questions and questionnaires

  • Describe some of the ways that survey questions might confuse respondents and how to word questions and responses clearly
  • Create mutually exclusive, exhaustive, and balanced response options
  • Define fence-sitting and floating
  • Describe the considerations involved in constructing a well-designed questionnaire
  • Discuss why pilot testing is important

In the previous section, we reviewed how researchers collect data using surveys. Guided by their sampling approach and research context, researchers should choose the survey approach that provides the most favorable tradeoffs in strengths and challenges. With this information in hand, researchers need to write their questionnaire and revise it before beginning data collection. Each method of delivery requires a questionnaire, but they vary a bit based on how they will be used by the researcher. Since phone surveys are read aloud, researchers will pay more attention to how the questionnaire sounds than how it looks. Online surveys can use advanced tools to require the completion of certain questions, present interactive questions and answers, and otherwise afford greater flexibility in how questionnaires are designed. As you read this section, consider how your method of delivery impacts the type of questionnaire you will design. Because most student projects use paper or online surveys, this section will detail how to construct self-administered questionnaires to minimize the potential for bias and error.


Start with operationalization

The first thing you need to do to write effective survey questions is identify what exactly you wish to know. As silly as it sounds to state what seems so completely obvious, we can’t stress enough how easy it is to forget to include important questions when designing a survey. Begin by looking at your research question and refreshing your memory of the operational definitions you developed for those variables from Chapter 11 . You should have a pretty firm grasp of your operational definitions before starting the process of questionnaire design. You may have taken those operational definitions from other researchers’ methods, found established scales and indices for your measures, or created your own questions and answer options.

STOP! Make sure you have a complete operational definition for the dependent and independent variables in your research question. A complete operational definition contains the variable being measured, the measure used, and how the researcher interprets the measure. Let’s make sure you have what you need from Chapter 11 to begin writing your questionnaire.

List all of the dependent and independent variables in your research question.

  • It’s normal to have one dependent or independent variable. It’s also normal to have more than one of either.
  • Make sure that your research question (and this list) contain all of the variables in your hypothesis. Your hypothesis should only include variables from your research question.

For each variable in your list:

  • If you don’t have questions and answers finalized yet, write a first draft and revise it based on what you read in this section.
  • If you are using a measure from another researcher, you should be able to write out all of the questions and answers associated with that measure. If you only have the name of a scale or a few questions, you need access to the full text and some documentation on how to administer and interpret it before you can finish your questionnaire.
  • For example, an interpretation might be “there are five 7-point Likert scale questions…point values are added across all five items for each participant…and scores below 10 indicate the participant has low self-esteem”
  • Don’t introduce other variables into the mix here. All we are concerned with is how you will measure each variable by itself. The connection between variables is done using statistical tests, not operational definitions.
  • Detail any validity or reliability issues uncovered by previous researchers using the same measures. If you have concerns about validity and reliability, note them, as well.
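The interpretation step described above (sum the items, then apply a cut-off) can be expressed as a short scoring routine. This is a sketch of the chapter's illustrative example only, with five 7-point items and a total below 10 read as low self-esteem; it is not a real validated instrument.

```python
def score_self_esteem(item_responses):
    """Sum five 7-point Likert items and classify the total score.

    The five-item, sum-below-10 rule is the chapter's illustrative
    interpretation, not a validated clinical cut-off.
    """
    assert len(item_responses) == 5, "the scale has exactly five items"
    assert all(1 <= r <= 7 for r in item_responses), "items range 1-7"
    total = sum(item_responses)
    label = "low self-esteem" if total < 10 else "not low"
    return total, label

print(score_self_esteem([1, 2, 1, 2, 1]))  # (7, 'low self-esteem')
```

Writing the interpretation down this explicitly is a good test of whether your operational definition is complete: if you cannot score a hypothetical participant, the definition is missing something.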

If you completed the exercise above and listed out all of the questions and answer choices you will use to measure the variables in your research question, you have already produced a pretty solid first draft of your questionnaire! Congrats! In essence, questionnaires are all of the self-report measures in your operational definitions for the independent, dependent, and control variables in your study arranged into one document and administered to participants. There are a few questions on a questionnaire (like name or ID#) that are not associated with the measurement of variables. These are the exception, and it’s useful to think of a questionnaire as a list of measures for variables. Of course, researchers often use more than one measure of a variable (i.e., triangulation ) so they can more confidently assert that their findings are true. A questionnaire should contain all of the measures researchers plan to collect about their variables by asking participants to self-report. As we will discuss in the final section of this chapter, triangulating across data sources (e.g., measuring variables using client files or student records) can avoid some of the common sources of bias in survey research.

Sticking close to your operational definitions is important because it helps you avoid an everything-but-the-kitchen-sink approach that includes every possible question that occurs to you. Doing so puts an unnecessary burden on your survey respondents. Remember that you have asked your participants to give you their time and attention and to take care in responding to your questions; show them your respect by only asking questions that you actually plan to use in your analysis. For each question in your questionnaire, ask yourself how this question measures a variable in your study. An operational definition should contain the questions, response options, and how the researcher will draw conclusions about the variable based on participants’ responses.
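One way to stay close to your operational definitions is to treat the questionnaire as a mapping from each variable to its measure. In the sketch below, the variables, question wording, and response options are invented for illustration; any item that cannot be traced back to a variable in this mapping is a candidate for cutting.

```python
# A questionnaire viewed as a list of measures: each entry ties a
# variable from the research question to its question wording and
# closed-ended response options. All content here is illustrative.
QUESTIONNAIRE = {
    "alcohol_use": {
        "question": "How many times per week did you drink alcohol "
                    "during your first semester of college?",
        "options": ["less than 1", "1-2", "3-4", "5-6", "7+"],
    },
    "housing_status": {
        "question": "Do you rent or own your home?",
        "options": ["rent", "own"],
    },
}

def audit(questionnaire):
    """Confirm every item has wording and closed-ended options."""
    for variable, item in questionnaire.items():
        assert item["question"], f"{variable} is missing its question"
        assert item["options"], f"{variable} is missing response options"
    return len(questionnaire)

print(audit(QUESTIONNAIRE))  # 2
```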


Writing questions

So, almost all of the questions on a questionnaire are measuring some variable. For many variables, researchers will create their own questions rather than using one from another researcher. This section will provide some tips on how to create good questions to accurately measure variables in your study. First, questions should be as clear and to the point as possible. This is not the time to show off your creative writing skills; a survey is a technical instrument and should be written in a way that is as direct and concise as possible. As I’ve mentioned earlier, your survey respondents have agreed to give their time and attention to your survey. The best way to show your appreciation for their time is to not waste it. Ensuring that your questions are clear and concise will go a long way toward showing your respondents the gratitude they deserve. Pilot testing the questionnaire with friends or colleagues can help identify these issues. This process is commonly called pretesting, but to avoid any confusion with pretesting in experimental design, we refer to it as pilot testing.

Related to the point about not wasting respondents’ time, make sure that every question you pose will be relevant to every person you ask to complete it. This means two things: first, that respondents have knowledge about whatever topic you are asking them about, and second, that respondents have experienced the events, behaviors, or feelings you are asking them to report. If you are asking participants for second-hand knowledge—asking clinicians about clients’ feelings, asking teachers about students’ feelings, and so forth—you may want to clarify that the variable you are asking about is the key informant’s perception of what is happening in the target population. A well-planned sampling approach ensures that participants are the most knowledgeable population to complete your survey.

If you decide that you do wish to include questions about matters with which only a portion of respondents will have had experience, make sure you know why you are doing so. For example, if you are asking about MSW student study patterns and you decide to include a question on studying for the social work licensing exam, you may only have a small subset of participants who have begun studying for the graduate exam or who took the bachelor’s-level exam. If you decide to include this question that speaks to a minority of participants’ experiences, think about why you are including it. Are you interested in how studying for class and studying for licensure differ? Are you trying to triangulate study skills measures? Researchers should carefully consider whether questions relevant to only a subset of participants are likely to produce enough valid responses for quantitative analysis.

Many times, questions that are relevant to a subsample of participants are conditional on an answer to a previous question. A participant might select that they rent their home, and as a result, you might ask whether they carry renter’s insurance. That question is not relevant to homeowners, so it would be wise not to ask them to respond to it. In that case, the question of whether someone rents or owns their home is a filter question , designed to identify some subset of survey respondents who are asked additional questions that are not relevant to the entire sample. Figure 12.1 presents an example of how to accomplish this on a paper survey by adding instructions to the participant that indicate what question to proceed to next based on their response to the first one. Using online survey tools, researchers can use filter questions to only present relevant questions to participants.

Figure 12.1 An example of a filter question, where a “yes” answer directs the participant to additional questions
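Online survey tools implement this branching automatically, but the logic of a filter question is simple enough to sketch. Using the renter's-insurance example above (the question wording is illustrative):

```python
def next_question(housing_status):
    """Return the follow-up item for the rent-or-own filter question,
    or None when the follow-up does not apply."""
    if housing_status == "rent":
        return "Do you carry renter's insurance?"
    return None  # homeowners skip the renter's-insurance item

print(next_question("rent"))  # Do you carry renter's insurance?
print(next_question("own"))   # None
```

On a paper survey, the same branching is communicated with written instructions ("If you answered no, skip to question 12"), so the participant performs this logic themselves.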

Researchers should eliminate questions that ask about things participants don’t know to minimize confusion. Assuming the question is relevant to the participant, other sources of confusion come from how the question is worded. The use of negative wording can be a source of potential confusion. Taking the question from Figure 12.1 about drinking as our example, what if we had instead asked, “Did you not abstain from drinking during your first semester of college?” This is a double negative, and it’s not clear how to answer the question accurately. It is a good idea to avoid negative phrasing, when possible. For example, “did you not drink alcohol during your first semester of college?” is less clear than “did you drink alcohol your first semester of college?”

You should also avoid using terms or phrases that may be regionally or culturally specific (unless you are absolutely certain all your respondents come from the region or culture whose terms you are using). When I first moved to southwest Virginia, I didn’t know what a holler was. Where I grew up in New Jersey, to holler means to yell. Even then, in New Jersey, we shouted and screamed, but we didn’t holler much. In southwest Virginia, my home at the time, a holler also means a small valley in between the mountains. If I used holler in that way on my survey, people who live near me may understand, but almost everyone else would be totally confused. A similar issue arises when you use jargon, or technical language, that people do not commonly know. For example, if you asked adolescents how they experience imaginary audience , they would find it difficult to link those words to the concepts from David Elkind’s theory. The words you use in your questions must be understandable to your participants. If you find yourself using jargon or slang, break it down into terms that are more universal and easier to understand.

Asking multiple questions as though they are a single question can also confuse survey respondents. There’s a specific term for this sort of question; it is called a double-barreled question . Figure 12.2 shows a double-barreled question. Do you see what makes the question double-barreled? How would someone respond if they felt their college classes were more demanding but also more boring than their high school classes? Or less demanding but more interesting? Because the question combines “demanding” and “interesting,” there is no way to respond yes to one criterion but no to the other.

Figure 12.2 A double-barreled question that asks more than one thing at a time

Another thing to avoid when constructing survey questions is the problem of social desirability . We all want to look good, right? And we all probably know the politically correct response to a variety of questions whether we agree with the politically correct response or not. In survey research, social desirability refers to the idea that respondents will try to answer questions in a way that will present them in a favorable light. (You may recall we covered social desirability bias in Chapter 11 .)

Perhaps we decide that to understand the transition to college, we need to know whether respondents ever cheated on an exam in high school or college for our research project. We all know that cheating on exams is generally frowned upon (at least I hope we all know this). So, it may be difficult to get people to admit to cheating on a survey. But if you can guarantee respondents’ confidentiality, or even better, their anonymity, chances are much better that they will be honest about having engaged in this socially undesirable behavior. Another way to avoid problems of social desirability is to try to phrase difficult questions in the most benign way possible. Earl Babbie (2010) [10] offers a useful suggestion for helping you do this—simply imagine how you would feel responding to your survey questions. If you would be uncomfortable, chances are others would as well.

Try to step outside your role as researcher for a second, and imagine you were one of your participants. Evaluate the following:

  • Is the question too general? Sometimes, questions that are too general may not accurately convey respondents’ perceptions. If you asked someone how they liked a certain book and provided a response scale ranging from “not at all” to “extremely well,” and that person selected “extremely well,” what do they mean? Instead, ask more specific behavioral questions, such as “Will you recommend this book to others?” or “Do you plan to read other books by the same author?”
  • Is the question too detailed? Avoid unnecessarily detailed questions that serve no specific research purpose. For instance, do you need the age of each child in a household, or is the number of children in the household sufficient? If unsure, however, it is better to err on the side of detail rather than generality.
  • Is the question presumptuous? Does your question make assumptions? For instance, if you ask, “what do you think the benefits of a tax cut would be?” you are presuming that the participant sees the tax cut as beneficial. But many people may not view tax cuts as beneficial. Some might see tax cuts as a precursor to less funding for public schools and fewer public services such as police, ambulance, and fire department. Avoid questions with built-in presumptions.
  • Does the question ask the participant to imagine something? A popular question on many television game shows is “if you won a million dollars on this show, how would you plan to spend it?” Most participants have never been faced with such a large amount of money and have never thought about this scenario. In fact, most don’t even know that after taxes, the value of the million dollars will be greatly reduced. In addition, some game shows spread the amount over a 20-year period. Without understanding this “imaginary” situation, participants may not have the background information necessary to provide a meaningful response.

Finally, it is important to get feedback on your survey questions from as many people as possible, especially people who are like those in your sample. Now is not the time to be shy. Ask your friends for help, ask your mentors for feedback, ask your family to take a look at your survey as well. The more feedback you can get on your survey questions, the better the chances that you will come up with a set of questions that are understandable to a wide variety of people and, most importantly, to those in your sample.

In sum, in order to pose effective survey questions, researchers should do the following:

  • Identify how each question measures an independent, dependent, or control variable in their study.
  • Keep questions clear and succinct.
  • Make sure respondents have relevant lived experience to provide informed answers to your questions.
  • Use filter questions to avoid getting answers from uninformed participants.
  • Avoid questions that are likely to confuse respondents—including those that use double negatives, use culturally specific terms or jargon, and pose more than one question at a time.
  • Imagine how respondents would feel responding to questions.
  • Get feedback, especially from people who resemble those in the researcher’s sample.

Let’s complete a first draft of your questions. In the previous exercise, you listed all of the questions and answers you will use to measure the variables in your research question. 

  • In the previous exercise, you wrote out the questions and answers for each measure of your independent and dependent variables. Evaluate each question using the criteria listed above on effective survey questions.
  • Type out questions for your control variables and evaluate them, as well. Consider what response options you want to offer participants.

Now, let’s revise any questions that do not meet your standards!

  •  Use the BRUSO model in Table 12.2 for an illustration of how to address deficits in question wording. Keep in mind that you are writing a first draft in this exercise, and it will take a few drafts and revisions before your questions are ready to distribute to participants.
Table 12.2 The BRUSO model of writing effective questionnaire items, with examples from a perceptions of gun ownership questionnaire
  • Brief. Poor: “Are you now or have you ever been the possessor of a firearm?” Better: “Have you ever possessed a firearm?”
  • Relevant. Poor: “Who did you vote for in the last election?” Better: only include items that are relevant to your study.
  • Unambiguous. Poor: “Are you a gun person?” Better: “Do you currently own a gun?”
  • Specific. Poor: “How much have you read about the new gun control measure and sales tax?” Better: “How much have you read about the new sales tax on firearm purchases?”
  • Objective. Poor: “How much do you support the beneficial new gun control measure?” Better: “What is your view of the new gun control measure?”


Writing response options

While posing clear and understandable questions in your survey is certainly important, so too is providing respondents with unambiguous response options. Response options are the answers that you provide to the people completing your questionnaire. Generally, respondents will be asked to choose a single (or best) response to each question you pose. We call questions in which the researcher provides all of the response options closed-ended questions . Keep in mind, closed-ended questions can also instruct respondents to choose multiple response options, rank response options against one another, or assign a percentage to each response option. But be cautious when experimenting with different response options! Accepting multiple responses to a single question may add complexity when it comes to quantitatively analyzing and interpreting your data.

Surveys need not be limited to closed-ended questions. Sometimes survey researchers include open-ended questions in their survey instruments as a way to gather additional details from respondents. An open-ended question does not include response options; instead, respondents are asked to reply to the question in their own way, using their own words. These questions are generally used to find out more about a survey participant’s experiences or feelings about whatever they are being asked to report in the survey. If, for example, a survey includes closed-ended questions asking respondents to report on their involvement in extracurricular activities during college, an open-ended question could ask respondents why they participated in those activities or what they gained from their participation. While responses to such questions may also be captured using a closed-ended format, allowing participants to share some of their responses in their own words can make the experience of completing the survey more satisfying to respondents and can also reveal new motivations or explanations that had not occurred to the researcher. This is particularly important for mixed-methods research. It is possible to analyze open-ended responses quantitatively using content analysis (i.e., counting how often a theme appears across responses and looking for statistical patterns). However, for most researchers, qualitative data analysis will be needed to analyze open-ended questions, and researchers need to think through how they will analyze any open-ended questions as part of their data analysis plan. We will address qualitative data analysis in greater detail in Chapter 19.
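As a toy illustration of the quantitative side of content analysis, the sketch below counts how many open-ended responses touch on each hand-coded theme. The themes and keyword lists are invented; in real projects, coding schemes are developed from the data and checked for reliability across coders.

```python
from collections import Counter

# Invented coding scheme: a theme is flagged when any of its
# keywords appears as a word in a response.
THEME_KEYWORDS = {
    "friendship": ["friend", "friends"],
    "skill-building": ["skill", "skills", "leadership"],
}

def count_themes(responses):
    """Count how many responses mention each theme at least once."""
    counts = Counter()
    for text in responses:
        words = text.lower().split()
        for theme, keywords in THEME_KEYWORDS.items():
            if any(word in words for word in keywords):
                counts[theme] += 1
    return counts

answers = ["I made close friends", "It built my leadership skills"]
print(count_themes(answers))
```

Counts like these can then be analyzed statistically alongside the closed-ended items, which is the bridge between open-ended data and quantitative analysis that the text describes.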

To keep things simple, we encourage you to use only closed-ended response options in your study. While open-ended questions are not wrong, in our classrooms they are often a sign that students have not fully thought through how to operationally define and measure their key variables. Open-ended questions cannot be operationally defined in advance because you don’t know what responses you will get. Instead, you will need to analyze the qualitative data using one of the techniques we discuss in Chapter 19 to interpret your participants’ responses.

To write effective response options for closed-ended questions, there are a couple of guidelines worth following. First, be sure that your response options are mutually exclusive . Look back at Figure 12.1, which contains questions about how often and how many drinks respondents consumed. Do you notice that there are no overlapping categories in the response options for these questions? This is another one of those points about question construction that seems fairly obvious but can be easily overlooked. Response options should also be exhaustive . In other words, every possible response should be covered in the set of response options that you provide. For example, note that in question 10a in Figure 12.1, we have covered all possibilities—those who drank, say, an average of once per month can choose the first response option (“less than one time per week”), while those who drank multiple times a day each day of the week can choose the last response option (“7+”). All the possibilities between these two extremes are covered by the middle three response options, and every respondent fits into one of the response options we provided.
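For numeric response options, the mutually-exclusive-and-exhaustive check can even be automated. Below is a minimal sketch (ours, not drawn from Figure 12.1) that treats each option as a half-open interval and flags overlaps and gaps; the bins passed in are hypothetical frequency categories.

```python
def check_bins(bins, lowest=0, highest=float("inf")):
    """bins: list of (low, high) pairs, inclusive of low, exclusive of high."""
    bins = sorted(bins)
    problems = []
    # Exhaustive at the bottom: the lowest value must be covered.
    if bins[0][0] > lowest:
        problems.append(f"not exhaustive: nothing below {bins[0][0]}")
    # Adjacent options must neither overlap nor leave a gap.
    for (lo1, hi1), (lo2, hi2) in zip(bins, bins[1:]):
        if hi1 > lo2:
            problems.append(f"overlap between {(lo1, hi1)} and {(lo2, hi2)}")
        elif hi1 < lo2:
            problems.append(f"gap (not exhaustive) between {hi1} and {lo2}")
    # Exhaustive at the top: the highest value must be covered.
    if bins[-1][1] < highest:
        problems.append("not exhaustive at the top")
    return problems

# These hypothetical "drinks per week" options pass both checks.
print(check_bins([(0, 1), (1, 3), (3, 7), (7, float("inf"))]))  # → []
```

Because each bin excludes its upper bound, options like “0–1” and “1–3” do not overlap (a response of exactly 1 belongs only to the second), which is exactly the mutual-exclusivity rule described above.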

Earlier in this section, we discussed double-barreled questions. Response options can also be double-barreled, and this should be avoided. Figure 12.3 is an example of a question that uses double-barreled response options. Other tips about questions also apply to response options: participants should be knowledgeable enough to select or decline each option, and jargon and cultural idioms should be avoided.

Figure 12.3: Double-barreled response options provide more than one answer for each option.

Even if you phrase questions and response options clearly, participants are influenced by how many response options are presented on the questionnaire. For Likert scales, five or seven response options generally allow about as much precision as respondents are capable of. However, numerical scales with more options can sometimes be appropriate. For dimensions such as attractiveness, pain, and likelihood, a 0-to-10 scale will be familiar to many respondents and easy for them to use. Regardless of the number of response options, the most extreme ones should generally be “balanced” around a neutral or modal midpoint. An example of an unbalanced rating scale measuring perceived likelihood might look like this:

Unlikely  |  Somewhat Likely  |  Likely  |  Very Likely  |  Extremely Likely

Because we have four rankings of likely and only one ranking of unlikely, the scale is unbalanced and most responses will be biased toward “likely” rather than “unlikely.” A balanced version might look like this:

Extremely Unlikely  |  Somewhat Unlikely  |  As Likely as Not  |  Somewhat Likely  | Extremely Likely

In this example, the midpoint is halfway between likely and unlikely. Of course, a middle or neutral response option does not have to be included. Researchers sometimes choose to leave it out because they want to encourage respondents to think more deeply about their response and not simply choose the middle option by default. Fence-sitters are respondents who choose neutral response options even if they have an opinion. Some people will be drawn to respond “no opinion” even when they have one, particularly if their true opinion is not a socially desirable one. Floaters , on the other hand, are those who choose a substantive answer to a question when, really, they don’t understand the question or don’t have an opinion.

As you can see, floating is the flip side of fence-sitting. Thus, the solution to one problem is often the cause of the other. How you decide which approach to take depends on the goals of your research. Sometimes researchers specifically want to learn something about people who claim to have no opinion. In this case, allowing for fence-sitting would be necessary. Other times researchers feel confident their respondents will all be familiar with every topic in their survey. In this case, perhaps it is okay to force respondents to choose one side or another (e.g., agree or disagree) without a middle option (e.g., neither agree nor disagree) or to not include an option like “don’t know enough to say” or “not applicable.” There is no always-correct solution to either problem. But in general, including a middle option in a response set makes it more exhaustive than one that excludes it.

The most important check before you finalize your response options is to align them with your operational definitions. As we’ve discussed before, your operational definitions include your measures (questions and response options) as well as how to interpret those measures in terms of the variable being measured. In particular, you should be able to interpret every response option to a question based on your operational definition of the variable it measures. If you wanted to measure the variable “social class,” you might ask one question about a participant’s annual income and another about family size. Your operational definition would need to provide clear instructions on how to interpret response options. Your operational definition is basically like this social class calculator from Pew Research , though they include a few more questions in their definition.

To drill down a bit more: as Pew specifies in the section titled “how the income calculator works,” the interval/ratio data respondents enter are interpreted using a formula that combines a participant’s responses to the four questions Pew poses, categorizing their household into one of three categories—upper, middle, or lower class. So, the operational definition includes the four questions comprising the measure and the formula, or interpretation, that converts responses into the three final categories we are familiar with: lower, middle, and upper class.

It is interesting to note that even though the calculator’s final output—lower, middle, or upper class—is at an ordinal level of measurement, Pew asks four questions that use an interval or ratio level of measurement (depending on the question). This means that respondents provide numerical responses, rather than choosing categories like lower, middle, and upper class. It’s perfectly normal for operational definitions to change levels of measurement, and it’s also perfectly normal for the level of measurement to stay the same. The important thing is that each response option a participant can provide is accounted for by the operational definition. Throw any combination of family size, location, or income at the Pew calculator, and it will assign you to one of those three social class categories.
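To make this logic concrete, here is a minimal sketch of how an operational definition like Pew’s converts interval/ratio inputs into ordinal categories. To be clear, the size adjustment and dollar figures below are invented for illustration—this is not Pew’s actual formula, and `NATIONAL_MEDIAN` is an assumed value.

```python
import math

# Hypothetical national median household income, chosen for illustration only.
NATIONAL_MEDIAN = 70_000

def social_class(household_income, household_size):
    """Convert interval/ratio inputs (income, size) into an ordinal category."""
    # Size-adjust income so households of different sizes are comparable
    # (one common approach divides by the square root of household size).
    adjusted = household_income / math.sqrt(household_size)
    # Illustrative cutoffs: below two-thirds of the median is "lower,"
    # above double the median is "upper," everything between is "middle."
    if adjusted < (2 / 3) * NATIONAL_MEDIAN:
        return "lower"
    elif adjusted <= 2 * NATIONAL_MEDIAN:
        return "middle"
    return "upper"

print(social_class(50_000, 3))  # → "lower"  (50,000 / √3 ≈ 28,868)
print(social_class(90_000, 2))  # → "middle" (90,000 / √2 ≈ 63,640)
```

Notice how every possible numeric response lands in exactly one of the three ordinal categories—the interval/ratio inputs are fully “accounted for” by the operational definition, just as the chapter describes.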

Unlike Pew’s definition, the operational definitions in your study may not need their own webpage to define and describe. For many questions and answers, interpreting response options is easy. If you were measuring “income” instead of “social class,” you could simply operationalize the term by asking people to list their total household income before taxes. Higher values indicate higher income, and lower values indicate lower income. Easy. Regardless of whether your operational definitions are simple or complex, every response option to every question on your survey (with a few exceptions) should be interpretable using an operational definition of a variable. Just as you want to avoid an everything-but-the-kitchen-sink approach to questions on your questionnaire, you want to make sure your final questionnaire only contains response options that you will use in your study.

One note of caution on interpretation (sorry for repeating this): we want to remind you again that an operational definition should not mention more than one variable. In our example above, your operational definition could not say “a family of three making under $50,000 is lower class; therefore, they are more likely to experience food insecurity.” That last clause about food insecurity may well be true, but it’s not part of the operational definition for social class. Each variable (food insecurity and class) should have its own operational definition. If you are talking about how to interpret the relationship between two variables, you are talking about your data analysis plan . We will discuss how to create your data analysis plan beginning in Chapter 14 . For now, one consideration is that depending on the statistical test you use to test relationships between variables, you may need nominal, ordinal, or interval/ratio data. Your questions and response options should match the level of measurement required by the specific statistical tests in your data analysis plan. Once you finalize your data analysis plan, return to your questionnaire to make sure each item’s level of measurement matches the statistical test you’ve chosen.

In summary, to write effective response options researchers should do the following:

  • Avoid wording that is likely to confuse respondents—including double negatives, culturally specific terms or jargon, and double-barreled response options.
  • Ensure response options are relevant to participants’ knowledge and experience so they can make an informed and accurate choice.
  • Present mutually exclusive and exhaustive response options.
  • Consider fence-sitters and floaters, and the use of neutral or “not applicable” response options.
  • Define how response options are interpreted as part of an operational definition of a variable.
  • Check that the level of measurement matches your operational definitions and the statistical tests in your data analysis plan (once you develop one).

Look back at the response options you drafted in the previous exercise. Make sure you have a first draft of response options for each closed-ended question on your questionnaire.

  • Using the criteria above, evaluate the wording of the response options for each question on your questionnaire.
  • Revise your questions and response options until you have a complete first draft.
  • Do your first read-through and provide a dummy answer to each question. Make sure you can link each response option and each question to an operational definition.
  • Look ahead to Chapter 14 and consider how each item on your questionnaire will inform your data analysis plan.

From this discussion, we hope it is clear why researchers using quantitative methods spell out all of their plans ahead of time. Ultimately, there should be a straight line from operational definition through measures on your questionnaire to the data analysis plan. If your questionnaire includes response options that are not aligned with operational definitions or not included in the data analysis plan, the responses you receive back from participants won’t fit with your conceptualization of the key variables in your study. If you do not fix these errors and proceed with collecting unstructured data, you will lose out on many of the benefits of survey research and face overwhelming challenges in answering your research question.


Designing questionnaires

Based on your work in the previous section, you should have a first draft of the questions and response options for the key variables in your study. Now you’ll also need to think about how to present your written questions and response options to survey respondents. It’s time to write a final draft of your questionnaire and make it look nice. Designing questionnaires takes some thought. First, consider the delivery method for your survey. What we cover in this section applies equally to paper and online surveys, but if you are planning to use online survey software, you should watch tutorial videos and explore the features of the software you will use.

Informed consent & instructions

Writing effective items is only one part of constructing a survey. For one thing, every survey should have a written or spoken introduction that serves two basic functions (Peterson, 2000) . [11] One is to encourage respondents to participate in the survey. In many types of research, such encouragement is not necessary either because participants do not know they are in a study (as in naturalistic observation) or because they are part of a subject pool and have already shown their willingness to participate by signing up and showing up for the study. Survey research usually catches respondents by surprise when they answer their phone, go to their mailbox, or check their e-mail—and the researcher must make a good case for why they should agree to participate. Thus, the introduction should briefly explain the purpose of the survey and its importance, provide information about the sponsor of the survey (university-based surveys tend to generate higher response rates), acknowledge the importance of the respondent’s participation, and describe any incentives for participating.

The second function of the introduction is to establish informed consent . Remember that this involves describing to respondents everything that might affect their decision to participate. This includes the topics covered by the survey, the amount of time it is likely to take, the respondent’s option to withdraw at any time, confidentiality issues, and other ethical considerations we covered in Chapter 6 . Written consent forms are not always used in survey research (when the research is of minimal risk, the IRB often accepts completion of the survey instrument as evidence of consent to participate), so it is important that this part of the introduction be well documented and presented clearly and in its entirety to every respondent.

Organizing items to be easy and intuitive to follow

The introduction should be followed by the substantive questionnaire items. But first, it is important to present clear instructions for completing the questionnaire, including examples of how to use any unusual response scales. Remember that the introduction is the point at which respondents are usually most interested and least fatigued, so it is good practice to start with the most important items for purposes of the research and proceed to less important items. Items should also be grouped by topic or by type. For example, items using the same rating scale (e.g., a 5-point agreement scale) should be grouped together if possible to make things faster and easier for respondents. Demographic items are often presented last because they are least interesting to participants but also easy to answer in the event respondents have become tired or bored. Of course, any survey should end with an expression of appreciation to the respondent.

Questions are often organized thematically. If our survey were measuring social class, perhaps we’d have a few questions asking about employment, others focused on education, and still others on housing and community resources. Those may be the themes around which we organize our questions. Or perhaps it would make more sense to present any questions we had about parents’ income and then present a series of questions about estimated future income. Grouping by theme is one way to be deliberate about how you present your questions. Keep in mind that you are surveying people, and these people will be trying to follow the logic in your questionnaire. Jumping from topic to topic can give people a bit of whiplash and may make participants less likely to complete it.

Using a matrix is a nice way of streamlining response options for similar questions. A matrix is a question type that lists a set of questions for which the answer categories are all the same. If you have a set of questions for which the response options are the same, it may make sense to create a matrix rather than posing each question and its response options individually. Not only will this save you some space in your survey, but it will also help respondents progress through your survey more easily. A sample matrix can be seen in Figure 12.4.

Figure 12.4: A matrix question with response options ranging from agree to disagree, measuring opinions about class.

Once you have grouped similar questions together, you’ll need to think about the order in which to present those question groups. Most survey researchers agree that it is best to begin a survey with questions that will make respondents want to continue (Babbie, 2010; Dillman, 2000; Neuman, 2003). [12] In other words, don’t bore respondents, but don’t scare them away either. There’s some disagreement over where on a survey to place demographic questions, such as those about a person’s age, gender, and race. On the one hand, placing them at the beginning of the questionnaire may lead respondents to think the survey is boring, unimportant, and not something they want to bother completing. On the other hand, if your survey deals with a very sensitive topic, such as child sexual abuse or criminal convictions, you don’t want to scare respondents away or shock them by beginning with your most intrusive questions.

Your participants are human. They will react emotionally to questionnaire items, and they will also try to uncover your research questions and hypotheses. In truth, the order in which you present questions on a survey is best determined by the unique characteristics of your research. When feasible, you should consult with key informants from your target population to determine how best to order your questions. If that is not feasible, think about the unique characteristics of your topic, your questions, and most importantly, your sample. Keeping in mind the characteristics and needs of the people you will ask to complete your survey should help guide you as you determine the most appropriate order in which to present your questions. None of your decisions will be perfect, and all studies have limitations.

Questionnaire length

You’ll also need to consider the time it will take respondents to complete your questionnaire. Surveys vary in length, from just a page or two to a dozen or more pages, which means they also vary in the time it takes to complete them. How long to make your survey depends on several factors. First, what is it that you wish to know? Wanting to understand how grades vary by gender and year in school certainly requires fewer questions than wanting to know how people’s experiences in college are shaped by demographic characteristics, college attended, housing situation, family background, college major, friendship networks, and extracurricular activities. Keep in mind that even if your research question requires a sizable number of questions be included in your questionnaire, do your best to keep the questionnaire as brief as possible. Any hint that you’ve thrown in a bunch of useless questions just for the sake of it will turn off respondents and may make them not want to complete your survey.

Second, and perhaps more important, how long are respondents likely to be willing to spend completing your questionnaire? If you are studying college students, asking them to use their limited free time to complete your survey may mean they won’t want to spend more than a few minutes on it. But if you ask them to complete your survey during downtime between classes when there is little work to be done, students may be willing to give you a bit more of their time. Think about places and times where your sampling frame naturally gathers and whether you would be able to either recruit participants or distribute a survey in that context. Estimate how long your participants could reasonably spend on a survey presented to them during this time. The more you know about your population (such as which weeks have less work and more free time), the better you can target questionnaire length.

The time that survey researchers ask respondents to spend on questionnaires varies greatly. Some researchers advise that surveys should not take longer than about 15 minutes to complete (as cited in Babbie 2010), [13] whereas others suggest that up to 20 minutes is acceptable (Hopper, 2010). [14] As with question order, there is no clear-cut, always-correct answer about questionnaire length. The unique characteristics of your study and your sample should be considered to determine how long to make your questionnaire. For example, if you planned to distribute your questionnaire to students in between classes, you will need to make sure it is short enough to complete before the next class begins.

When designing a questionnaire, a researcher should consider:

  • Weighing strengths and limitations of the method of delivery, including the advanced tools in online survey software or the simplicity of paper questionnaires.
  • Grouping together items that ask about the same thing.
  • Moving any questions about sensitive items to the end of the questionnaire, so as not to scare respondents off.
  • Moving any questions that engage the respondent to answer the questionnaire at the beginning, so as not to bore them.
  • Timing the length of the questionnaire with a reasonable length of time you can ask of your participants.
  • Dedicating time to visual design and ensuring the questionnaire looks professional.

Type out a final draft of your questionnaire in a word processor or online survey tool.

  • Evaluate your questionnaire using the guidelines above, revise it, and get it ready to share with other student researchers.


Pilot testing and revising questionnaires

A good way to estimate the time it will take respondents to complete your questionnaire (and other potential challenges) is through pilot testing . Pilot testing allows you to get feedback on your questionnaire so you can improve it before you actually administer it. It can be quite expensive and time consuming if you wish to pilot test your questionnaire on a large sample of people who very much resemble the sample to whom you will eventually administer the finalized version of your questionnaire. But you can learn a lot and make great improvements to your questionnaire simply by pilot testing with a small number of people to whom you have easy access (perhaps you have a few friends who owe you a favor). By pilot testing your questionnaire, you can find out how understandable your questions are, get feedback on question wording and order, find out whether any of your questions are boring or offensive, and learn whether there are places where you should have included filter questions. You can also time pilot testers as they take your survey. This will give you a good idea about the estimate to provide respondents when you administer your survey and whether you have some wiggle room to add additional items or need to cut a few items.
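Timing data from pilot testers is easy to summarize. The short sketch below uses made-up pilot timings (`times_minutes` is hypothetical data) to produce the time estimate you might give respondents and a worst case to plan around.

```python
import statistics

# Hypothetical completion times (in minutes) recorded from six pilot testers.
times_minutes = [11.5, 13.0, 9.8, 15.2, 12.4, 10.9]

mean_t = statistics.mean(times_minutes)  # typical completion time
worst = max(times_minutes)               # slowest pilot tester

print(f"Tell respondents the survey takes about {round(mean_t)} minutes; "
      f"plan for up to {worst} minutes.")
```

If the worst case exceeds what your sample can realistically give (say, the gap between classes), that is your cue to cut items before administering the final version.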

Perhaps this goes without saying, but your questionnaire should also have an attractive design. A messy presentation style can confuse respondents or, at the very least, annoy them. Be brief, to the point, and as clear as possible. Avoid cramming too much into a single page. Make your font size readable (at least 12 point or larger, depending on the characteristics of your sample), leave a reasonable amount of space between items, and make sure all instructions are exceptionally clear. If you are using an online survey, ensure that participants can complete it via mobile, computer, and tablet devices. Think about books, documents, articles, or web pages that you have read yourself—which were relatively easy to read and easy on the eyes and why? Try to mimic those features in the presentation of your survey questions. While online survey tools automate much of visual design, word processors are designed for writing all kinds of documents and may need more manual adjustment as part of visual design.

Realistically, your questionnaire will continue to evolve as you develop your data analysis plan over the next few chapters. By now, you should have a complete draft of your questionnaire grounded in an underlying logic that ties together each question and response option to a variable in your study. Once your questionnaire is finalized, you will need to submit it for ethical approval from your professor or the IRB. If your study requires IRB approval, it may be worthwhile to submit your proposal before your questionnaire is completely done. Revisions to IRB protocols are common and it takes less time to review a few changes to questions and answers than it does to review the entire study, so give them the whole study as soon as you can. Once the IRB approves your questionnaire, you cannot change it without their okay.

  • A questionnaire is comprised of self-report measures of variables in a research study.
  • Make sure your survey questions will be relevant to all respondents and that you use filter questions when necessary.
  • Effective survey questions and responses take careful construction by researchers, as participants may be confused or otherwise influenced by how items are phrased.
  • The questionnaire should start with informed consent and instructions, flow logically from one topic to the next, engage but not shock participants, and thank participants at the end.
  • Pilot testing can help identify any issues in a questionnaire before distributing it to participants, including language or length issues.

It’s a myth that researchers work alone! Get together with a few of your fellow students and swap questionnaires for pilot testing.

  • Use the criteria in each section above (questions, response options, questionnaires) and provide your peers with the strengths and weaknesses of their questionnaires.
  • See if you can guess their research question and hypothesis based on the questionnaire alone.

12.4 Bias and cultural considerations

  • Identify the logic behind survey design as it relates to nomothetic causal explanations and quantitative methods.
  • Discuss sources of bias and error in surveys.
  • Apply criticisms of survey design to ensure more equitable research.

The logic of survey design

As you may have noticed with survey designs, everything about them is intentional—from the delivery method, to question wording, to what response options are offered. It’s helpful to spell out the underlying logic behind survey design and how well it meets the criteria for nomothetic causal explanations. Because we are trying to isolate the causal relationship between our dependent and independent variables, we must try to control for as many confounding factors as possible. Researchers using survey design do this in multiple ways:

  • Using well-established, valid, and reliable measures of key variables, including triangulating variables using multiple measures
  • Measuring control variables and including them in their statistical analysis
  • Avoiding biased wording, presentation, or procedures that might influence the sample to respond differently
  • Pilot testing questionnaires, preferably with people similar to the sample

In other words, survey researchers go through a lot of trouble to make sure they are not the ones causing the changes they observe in their study. Of course, every study falls a little short of this ideal bias-free design, and some studies fall far short of it. This section is all about how bias and error can inhibit the ability of survey results to meaningfully tell us about causal relationships in the real world.

Bias in questionnaires, questions, and response options

The use of surveys is based on methodological assumptions common to research in the postpositivist paradigm. Figure 12.5 presents a model of the methodological assumptions behind survey design—what researchers assume are the cognitive processes people engage in when responding to a survey item (Sudman, Bradburn, & Schwarz, 1996). [15] Respondents must interpret the question, retrieve relevant information from memory, form a tentative judgment, convert the tentative judgment into one of the response options provided (e.g., a rating on a 1-to-7 scale), and finally edit their response as necessary.

Figure 12.5: A model of the cognitive processes involved in responding to a survey item.

Consider, for example, the following questionnaire item:

  • How many alcoholic drinks do you consume in a typical day?
  • a lot more than average
  • somewhat more than average
  • somewhat fewer than average
  • a lot fewer than average

Although this item at first seems straightforward, it poses several difficulties for respondents. First, they must interpret the question. For example, they must decide whether “alcoholic drinks” include beer and wine (as opposed to just hard liquor) and whether a “typical day” is a typical weekday, typical weekend day, or both. Although Chang and Krosnick (2003) [16] found that asking about “typical” behavior is more valid than asking about “past” behavior, their study compared a “typical week” to the “past week,” and the results may differ when considering typical weekdays versus weekend days.

Once respondents have interpreted the question, they must retrieve relevant information from memory to answer it. But what information should they retrieve, and how should they go about retrieving it? They might think vaguely about some recent occasions on which they drank alcohol, they might carefully try to recall and count the number of alcoholic drinks they consumed last week, or they might retrieve some existing beliefs that they have about themselves (e.g., “I am not much of a drinker”). Then they must use this information to arrive at a tentative judgment about how many alcoholic drinks they consume in a typical day. For example, this  mental calculation  might mean dividing the number of alcoholic drinks they consumed last week by seven to come up with an average number per day. Then they must format this tentative answer in terms of the response options actually provided. In this case, the options pose additional problems of interpretation. For example, what does “average” mean, and what would count as “somewhat more” than average? Finally, they must decide whether they want to report the response they have come up with or whether they want to edit it in some way. For example, if they believe that they drink a lot more than average, they might not want to report that  for fear of looking bad in the eyes of the researcher, so instead, they may opt to select the “somewhat more than average” response option.

At first glance, this question is clearly worded and includes a set of mutually exclusive, exhaustive, and balanced response options. However, it is difficult to follow the logic of what is truly being asked. Again, this complexity can lead to unintended influences on respondents’ answers. Confounds like this are often referred to as context effects   because they are not related to the content of the item but to the context in which the item appears (Schwarz & Strack, 1990) . [17] For example, there is an  item-order effect when the order in which the items are presented affects people’s responses. One item can change how participants interpret a later item or change the information that they retrieve to respond to later items. For example, researcher Fritz Strack and his colleagues asked college students about both their general life satisfaction and their dating frequency (Strack, Martin, & Schwarz, 1988) . [18] When the life satisfaction item came first, the correlation between the two was only −.12, suggesting that the two variables are only weakly related. But when the dating frequency item came first, the correlation between the two was +.66, suggesting that those who date more have a strong tendency to be more satisfied with their lives. Reporting the dating frequency first made that information more accessible in memory so that they were more likely to base their life satisfaction rating on it.

The response options provided can also have unintended effects on people’s responses (Schwarz, 1999). [19] For example, when people are asked how often they are “really irritated” and given response options ranging from “less than once a year” to “more than once a month,” they tend to think of major irritations and report being irritated infrequently. But when they are given response options ranging from “less than once a day” to “several times a month,” they tend to think of minor irritations and report being irritated frequently. People also tend to assume that middle response options represent what is normal or typical. So if they think of themselves as normal or typical, they tend to choose middle response options (i.e., fence-sitting). For example, people are likely to report watching more television when the response options are centered on a middle option of 4 hours than when centered on a middle option of 2 hours. To mitigate order effects, rotate questions and response items when there is no natural order. Counterbalancing or randomizing the order in which questions are presented in online surveys is good practice and can reduce response-order effects—effects strong enough that, among undecided voters, the first candidate listed on a ballot receives a 2.5% boost simply by virtue of being listed first! [20]
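Randomizing item order per respondent is straightforward to script if your survey tool does not do it for you. The sketch below is a hypothetical illustration—the question texts are invented, and most online survey platforms offer built-in randomization—showing how each respondent can receive the items in a different order so that item-order effects average out across the sample.

```python
import random

# Hypothetical questionnaire items (note the life-satisfaction and dating
# items from the Strack et al. example, whose order changed the correlation).
questions = [
    "How satisfied are you with your life overall?",
    "How often do you go on dates?",
    "How satisfied are you with your coursework?",
]

def randomized_order(items, seed=None):
    """Return a per-respondent shuffled copy; the master list stays intact."""
    rng = random.Random(seed)  # seeding only makes orders reproducible/logged
    shuffled = items[:]
    rng.shuffle(shuffled)
    return shuffled

# Each respondent ID gets its own ordering of the same three items.
for respondent_id in range(3):
    print(respondent_id, randomized_order(questions, seed=respondent_id))
```

Recording the seed (or the delivered order) per respondent also lets you check later whether order itself influenced responses.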

Other context effects that can confound the causal relationship under examination in a survey include social desirability bias, recall bias, and common method bias. As we discussed in Chapter 11, social desirability bias occurs when questions lead respondents to answer in ways that do not reflect their genuine thoughts or feelings in order to avoid being perceived negatively. With negative questions such as “Do you think that your project team is dysfunctional?”, “Is there a lot of office politics in your workplace?”, or “Have you ever illegally downloaded music files from the Internet?”, the researcher may not get truthful responses. This tendency among respondents to “spin the truth” in order to portray themselves in a socially desirable manner hurts the validity of responses obtained from survey research. There is practically no way of overcoming social desirability bias in a questionnaire survey beyond wording questions in nonjudgmental language. However, in a quantitative interview, a researcher may be able to spot inconsistent answers and ask probing questions or use personal observations to supplement respondents’ comments.

As you can see, participants’ responses to survey questions often depend on their motivation, memory, and ability to respond. Particularly when dealing with events that happened in the distant past, respondents may not adequately remember their own motivations or behaviors, or their memory of such events may have evolved over time and may no longer be retrievable. This phenomenon is known as recall bias . For instance, if a respondent is asked to describe their utilization of computer technology one year ago, their response may not be accurate due to difficulties with recall. One possible way of overcoming recall bias is to anchor the respondent’s memory in specific events as they happened, rather than asking them to recall their perceptions and motivations from memory.

Cross-sectional and retrospective surveys are particularly vulnerable to recall bias as well as common method bias. Common method bias can occur when both the independent and dependent variables are measured at the same time (as in a cross-sectional survey) and with the same instrument (such as a questionnaire). In such cases, the phenomenon under investigation may not be adequately separated from measurement artifacts. Standard statistical tests are available to test for common method bias, such as Harman’s single-factor test (Podsakoff et al., 2003), [21] Lindell and Whitney’s (2001) [22] marker variable technique, and so forth. This bias can be avoided if the independent and dependent variables are measured at different points in time using a longitudinal survey design, or if these variables are measured using different data sources, such as medical or student records rather than self-report questionnaires.
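Harman’s single-factor test is commonly approximated by loading all survey items into a single unrotated factor (or principal component) analysis and checking whether one factor accounts for the majority of the variance. The sketch below implements that eigenvalue check in Python; the 50% threshold is a common rule of thumb, and the simulated data are invented for illustration only:

```python
import numpy as np

def harman_single_factor_check(data, threshold=0.50):
    """Rough screen for common method bias: if the first principal
    component of the item correlation matrix explains more than
    `threshold` of the total variance, a single (method) factor may be
    driving responses. The 50% cutoff is a conventional rule of thumb."""
    corr = np.corrcoef(data, rowvar=False)   # items are columns
    eigenvalues = np.linalg.eigvalsh(corr)   # returned in ascending order
    first_share = eigenvalues[-1] / eigenvalues.sum()
    return first_share, first_share > threshold

# Simulated responses: 200 respondents x 6 items of mostly independent noise,
# so no single factor should dominate
rng = np.random.default_rng(42)
data = rng.normal(size=(200, 6))
share, flagged = harman_single_factor_check(data)
```

Passing this screen does not prove the absence of common method bias; as the text notes, separating measurement occasions or data sources is the stronger remedy.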

Bias in recruitment and response to surveys

So far, we have discussed errors that researchers make when they design questionnaires that accidentally influence participants to respond one way or another. However, even well-designed questionnaires can produce biased results, because of biases in who actually responds to the survey.

Survey research is notorious for its low response rates. A response rate of 15–20% is typical in a mail survey, even after two or three reminders. If the majority of targeted respondents fail to respond, a legitimate concern is whether non-respondents are failing to respond for some systematic reason, which raises questions about the validity and generalizability of the study’s results, especially as this relates to the representativeness of the sample. This is known as non-response bias . For instance, dissatisfied customers tend to be more vocal about their experience than satisfied customers, and are therefore more likely to respond to satisfaction questionnaires. Hence, any respondent sample is likely to have a higher proportion of dissatisfied customers than the underlying population from which it is drawn. [23] In this instance, the results would not be generalizable beyond this one biased sample. Here are several strategies for addressing non-response bias:

  • Advance notification: A short letter sent in advance to the targeted respondents soliciting their participation in an upcoming survey can prepare them and improve the likelihood of response. The letter should state the purpose and importance of the study, the mode of data collection (e.g., via a phone call, a survey form in the mail, etc.), and appreciation for their cooperation. A variation of this technique may request the respondent to return a postage-paid postcard indicating whether or not they are willing to participate in the study.
  • Ensuring that content is relevant: If a survey examines issues of relevance or importance to respondents, then they are more likely to respond.
  • Creating a respondent-friendly questionnaire: Shorter survey questionnaires tend to elicit higher response rates than longer questionnaires. Furthermore, questions that are clear, inoffensive, and easy to respond to tend to get higher response rates.
  • Having the project endorsed: For organizational surveys, it helps to gain endorsement from a senior executive attesting to the importance of the study to the organization. Such endorsements can be in the form of a cover letter or a letter of introduction, which can improve the researcher’s credibility in the eyes of the respondents.
  • Providing follow-up requests: Multiple follow-up requests may coax some non-respondents to respond, even if their responses are late.
  • Ensuring that interviewers are properly trained: Response rates for interviews can be improved with skilled interviewers trained on how to request interviews, use computerized dialing techniques to identify potential respondents, and schedule callbacks for respondents who could not be reached.
  • Providing incentives: Response rates, at least with certain populations, may increase with the use of incentives in the form of cash or gift cards, giveaways such as pens or stress balls, entry into a lottery, draw or contest, discount coupons, the promise of contribution to charity, and so forth.
  • Providing non-monetary incentives: Organizations in particular are more prone to respond to non-monetary incentives than financial incentives. An example of such a non-monetary incentive is sharing trainings and other resources based on the results of a project with a key stakeholder.
  • Making participants fully aware of confidentiality and privacy: Finally, assurances that respondents’ private data or responses will not fall into the hands of any third party may help improve response rates.
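The dissatisfied-customer example of non-response bias can be made concrete with a short simulation. All numbers below are invented for illustration: if dissatisfied customers respond at a higher rate, the sample share of dissatisfied customers overstates the true population share even though the questionnaire itself is unbiased.

```python
import random

random.seed(0)

# Hypothetical population of 1,000 customers: 20% are dissatisfied
population = ["dissatisfied"] * 200 + ["satisfied"] * 800

# Assumed differential response rates (invented): dissatisfied customers
# are four times as likely to return the questionnaire
response_rate = {"dissatisfied": 0.40, "satisfied": 0.10}

respondents = [c for c in population if random.random() < response_rate[c]]

overall_rate = len(respondents) / len(population)
sample_dissatisfied = respondents.count("dissatisfied") / len(respondents)
# sample_dissatisfied lands near 0.5, far above the true 0.2, because
# non-response is systematic rather than random
```

Comparing the sample share against known population benchmarks (when available) is one practical way to detect this kind of bias after data collection.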

Non-response bias impairs the ability of the researcher to generalize from the respondents in the sample to the overall sampling frame. Of course, this assumes that the sampling frame is itself representative of and generalizable to the larger target population. Sampling bias is present when the people in our sampling frame, or the approach we use to sample them, result in a sample that does not represent our population in some way. Telephone surveys conducted by calling a random sample of publicly available telephone numbers will systematically exclude people with unlisted telephone numbers and people reachable only by mobile phone, and will include a disproportionate number of respondents who have land-line telephone service and stay home during much of the day, such as people who are unemployed, disabled, or of advanced age. Likewise, online surveys tend to include a disproportionate number of students and younger people who are more digitally connected, and systematically exclude people with limited or no access to computers or the Internet, such as the poor and the elderly. A different kind of sampling bias relates to generalizing from key informants to a target population, such as asking teachers (or parents) about the academic learning of their students (or children) or asking CEOs about operational details in their company. Such sampling frames may provide a clearer picture of what key informants think and feel than of the target population itself.
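The land-line example above can also be simulated. In this sketch (all numbers invented for illustration), land-line ownership rises with age, so a frame built from listed land-line numbers over-represents older people before a single question is asked:

```python
import random

random.seed(1)

# Hypothetical population: 10,000 adults with ages uniform between 18 and 90
population = [random.randint(18, 90) for _ in range(10_000)]

def has_landline(age):
    """Assumed coverage model (invented): the probability of being reachable
    at a listed land-line number rises linearly with age."""
    return random.random() < (0.05 + 0.9 * (age - 18) / 72)

# The sampling frame only contains people the telephone survey can reach
sampling_frame = [age for age in population if has_landline(age)]

population_mean = sum(population) / len(population)
frame_mean = sum(sampling_frame) / len(sampling_frame)
# frame_mean exceeds population_mean by roughly a decade under this model:
# the frame is biased toward older respondents
```

The same logic applies in reverse to online panels, which under this kind of coverage model would skew young.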

Cultural bias

The acknowledgement that most research in social work and other adjacent fields is overwhelmingly based on so-called WEIRD (Western, educated, industrialized, rich and democratic) populations—a topic we discussed in Chapter 10—has given rise to intensified research funding, publication, and visibility of collaborative cross-cultural studies across the social sciences that expand the geographical range of study populations. Many of the so-called non-WEIRD communities who increasingly participate in research are Indigenous, from low- and middle-income countries in the global South, live in post-colonial contexts, and/or are marginalized within their political systems, revealing and reproducing power differentials between researchers and researched (Whiteford & Trotter, 2008). [24] Cross-cultural research has historically been rooted in racist, capitalist ideas and motivations (Gordon, 1991). [25] Scholars have long debated whether research aiming to standardize cross-cultural measurements and analysis is tacitly engaged in and/or continues to be rooted in colonial and imperialist practices (Kline et al., 2018; Stearman, 1984). [26] Given this history, it is critical that scientists reflect upon these issues and be accountable to their participants and colleagues for their research practices. We argue that cross-cultural research should be grounded in the recognition of the historical, political, sociological and cultural forces acting on the communities and individuals of focus. These perspectives are often contrasted with ‘science’; here we argue that they are necessary as a foundation for the study of human behavior.

We stress that our goal is not to review the literature on colonial or neo-colonial research practices, to provide a comprehensive primer on decolonizing approaches to field research, or to identify or admonish past harms in these respects—harms to which many of the authors of this piece would readily admit. Furthermore, we acknowledge that we ourselves are writing from a place of privilege as researchers educated and trained in disciplines with colonial pasts. Our goal is simply to help students understand the broader issues in cross-cultural studies for appropriate consideration of diverse communities and culturally appropriate methodologies for student research projects.

Equivalence of measures across cultures

Data collection methods largely stemming from WEIRD intellectual traditions are being exported to a range of cultural contexts. This is often done with insufficient consideration of the translatability (e.g., equivalence or applicability) or implementation of such concepts and methods in different contexts, as already well documented (e.g., Hruschka et al., 2018). [27] For example, in a developmental psychology study conducted by Broesch and colleagues (2011), [28] the research team exported a task to examine the development and variability of self-recognition in children across cultures. Typically, this milestone is measured by surreptitiously placing a mark on a child’s forehead and allowing them to discover their reflected image and the mark in a mirror. While self-recognition in WEIRD contexts typically manifests in children by 18 months of age, the authors found that only 2 out of 82 children (aged 1–6 years) ‘passed’ the test by removing the mark using the reflected image. The authors’ interpretation of these results was that the test produced false negatives and instead measured implicit compliance with the local authority figure who placed the mark on the child. This raises the possibility that the mirror test may lack construct validity in cross-cultural contexts—in other words, that it may not measure the theoretical construct it was designed to measure.

As we discussed previously, survey researchers want to make sure everyone receives the same questionnaire, but how can we be sure everyone understands the questionnaire in the same way? Cultural equivalence means that a measure produces comparable data when employed in different cultural populations (Van de Vijver & Poortinga, 1992). [29] If concepts differ in meaning across cultures, cultural bias may explain what is going on with your key variables better than your hypotheses do. Cultural bias may result from poor item translation, inappropriate content of items, and unstandardized procedures (Waltz et al., 2010). [30] Of particular importance is construct bias , or “when the construct measured is not identical across cultures or when behaviors that characterize the construct are not identical across cultures” (Meiring et al., 2005, p. 2). [31] Construct bias emerges when there is: a) disagreement about the appropriateness of content, b) inadequate sampling, c) underrepresentation of the construct, and d) incomplete overlap of the construct across cultures (Van de Vijver & Poortinga, 1992). [32]
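A full assessment of construct bias involves formal measurement invariance testing, but one crude first screen is to check whether a scale is internally consistent in each cultural group separately. The sketch below computes Cronbach’s alpha per group; the data, group structure, and interpretation thresholds are invented for illustration, and similar alphas alone do not establish cultural equivalence:

```python
import numpy as np

def cronbach_alpha(items):
    """items: respondents x items matrix. Classic formula:
    alpha = k/(k-1) * (1 - sum of item variances / variance of the sum score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
# Simulated Group A: four items share a common latent trait -> high alpha
trait = rng.normal(size=(300, 1))
group_a = trait + 0.5 * rng.normal(size=(300, 4))
# Simulated Group B: the same items behave as unrelated noise -> alpha near zero,
# a warning sign that the construct may not travel across groups
group_b = rng.normal(size=(300, 4))

alpha_a = cronbach_alpha(group_a)
alpha_b = cronbach_alpha(group_b)
```

A sharp gap in reliability between groups is a signal to return to the qualitative strategies described below (community consultation, translation checks) before trusting cross-group comparisons.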

Addressing cultural bias

To address these issues, we propose careful scrutiny of (a) study site selection, (b) community involvement, and (c) culturally appropriate research methods. We focus here on pragmatic and implementable steps, particularly for those initiating collaborative cross-cultural projects. For student researchers, it is important to be aware of these issues and to assess for them among the strengths and limitations of your own study, though the degree to which you can feasibly implement some of these measures will be limited by a lack of resources.

Study site selection

Researchers are increasingly interested in cross-cultural research applicable outside of WEIRD contexts, but this has sometimes led to an uncritical and haphazard inclusion of ‘non-WEIRD’ populations in cross-cultural research without further regard for why specific populations should be included (Barrett, 2020). [33] One particularly egregious example is the grouping of all non-Western populations into a single comparative sample set against the cultural West (i.e., the ‘West versus rest’ approach), which is often unwittingly adopted by researchers performing cross-cultural research (Henrich, 2010). [34] Other researcher errors include the exoticization of particular cultures or viewing non-Western cultures as a window into the past rather than as cultures that have co-evolved over time.

Thus, some of the cultural biases in survey research emerge when researchers fail to identify a clear  theoretical justification for inclusion of any subpopulation—WEIRD or not—based on knowledge of the relevant cultural and/or environmental context (see Tucker, 2017 [35] for a good example). For example, a researcher asking about satisfaction with daycare must acquire the relevant cultural and environmental knowledge about a daycare that caters exclusively to Orthodox Jewish families. Simply including this study site without doing appropriate background research and identifying a specific aspect of this cultural group that is of theoretical interest in your study (e.g., spirituality and parenthood) indicates a lack of rigor in research. It undercuts the validity and generalizability of your findings by introducing sources of cultural bias that are unexamined in your study.

Sampling decisions are also important as they involve unique ethical and social challenges. For example, foreign researchers (as sources of power, information and resources) represent both opportunities for and threats to community members. These relationships are often complicated by power differentials due to unequal access to wealth, education and historical legacies of colonization. As such, it is important that investigators are alert to the possible bias among individuals who initially interact with researchers, to the potential negative consequences for those excluded, and to the (often unspoken) power dynamics between the researcher and their study participants (as well as among and between study participants).

We suggest that a necessary first step is to carefully consult existing resources outlining best practices for ethical principles of research before engaging in cross-cultural research. Many of these resources have been developed over years of dialogue in various academic and professional societies (e.g. American Anthropological Association, International Association for Cross Cultural Psychology, International Union of Psychological Science). Furthermore, communities themselves are developing and launching research-based codes of ethics and providing carefully curated open-access materials such as those from the Indigenous Peoples’ Health Research Centre , often written in consultation with ethicists in low- to middle-income countries (see Schroeder et al., 2019 ). [36]

Community involvement

Too often researchers engage in ‘extractive’ research, whereby a researcher selects a study community and collects the necessary data to exclusively further their own scientific and/or professional goals without benefiting the community. This reflects a long history of colonialism in social science. Extractive methods lead to methodological flaws and alienate participants from the scientific process, poisoning the well of scientific knowledge on a macro level. Many researchers are associated with institutions tainted by colonial, racist, and sexist histories and sentiments, some of which persist into the present. Much cross-cultural research is carried out in former or contemporary colonies, and in the colonial language. Explicit and implicit power differentials create ethical challenges that should be acknowledged by researchers in the design of their study (see Schuller, 2010 [37] for an example examining the power and politics of the various roles played by researchers).

An understanding of cultural norms may ensure that data collection and questionnaire design are culturally and linguistically relevant. This can be achieved by implementing several complementary strategies. A first step may be to collaborate with members of the study community to check the relevance of the instruments being used. Incorporating perspectives from the study community from the outset can reduce the likelihood of making scientific errors in measurement and inference (First Nations Information Governance Centre, 2014). [38]

An additional approach is to use mixed methods in data collection, such that each method ‘checks’ the data collected using the other methods. A recent paper by Fisher and Poortinga (2018) [39] provides suggestions for a rigorous methodological approach to conducting cross-cultural comparative psychology, underscoring the importance of using multiple methods with an eye towards a convergence of evidence. A mixed-method approach can incorporate a variety of qualitative methods alongside a quantitative survey, including open-ended questions, focus groups, and interviews.

Research design and methods

It is critical that researchers translate the language, technological references, and stimuli, and examine the underlying cultural context of the original method for assumptions that rely upon WEIRD epistemologies (Hruschka, 2020). [40] This extends even to seemingly simple visual aids, to help ensure that scales measure what the researcher intends (see Purzycki and Lang, 2019 [41] for discussion of the use of a popular economic experiment in small-scale societies).

For more information on assessing cultural equivalence, consult this free training from RTI International, a well-regarded non-profit research firm, entitled “ The essential role of language in survey design ,” and this free training from the Center for Capacity Building in Survey Methods and Statistics entitled “ Questionnaire design: For surveys in 3MC (multinational, multiregional, and multicultural) contexts .” These trainings guide researchers through the details of evaluating and writing survey questions using culturally sensitive language. Moreover, if you are planning to conduct cross-cultural research, you should consult this guide for assessing measurement equivalency and bias across cultures as well.

  • Bias can come from both how questionnaire items are presented to participants as well as how participants are recruited and respond to surveys.
  • Cultural bias emerges from the differences in how people think and behave across cultures.
  • Cross-cultural research requires a theoretically-informed sampling approach, evaluating measurement equivalency across cultures, and generalizing findings with caution.

Review your questionnaire and assess it for potential sources of bias.

  • Include the results of pilot testing from the previous exercise.
  • Make any changes to your questionnaire (or sampling approach) you think would reduce the potential for bias in your study.

Create a first draft of your limitations section by identifying sources of bias in your survey.

  • Write a bulleted list or paragraph of the potential sources of bias in your study.
  • Remember that all studies, especially student-led studies, have limitations. To the extent you can address these limitations now and feasibly make changes, do so. But keep in mind that your goal should be more to correctly describe the bias in your study than to collect bias-free results. Ultimately, your study needs to get done!
  • Unless researchers change the order of questions as part of their methodology for ensuring accurate responses to questions ↵
  • Not that there are any personal vendettas I'm aware of in academia...everyone gets along great here... ↵
  • Blackstone, A. (2013). Harassment of older adults in the workplace. In P. Brownell & J. J. Kelly (eds.) Ageism and mistreatment of older workers . Springer ↵
  • Smith, T. W. (2009). Trends in willingness to vote for a Black and woman for president, 1972–2008.  GSS Social Change Report No. 55 . Chicago, IL: National Opinion Research Center ↵
  • Enriquez, L. E., Rosales, W. E., Chavarria, K., Morales Hernandez, M., & Valadez, M. (2021). COVID on campus: Assessing the impact of the pandemic on undocumented college students. AERA Open. https://doi.org/10.1177/23328584211033576 ↵
  • Mortimer, J. T. (2003).  Working and growing up in America . Cambridge, MA: Harvard University Press. ↵
  • Lindert, J., Lee, L. O., Weisskopf, M. G., McKee, M., Sehner, S., & Spiro III, A. (2020). Threats to Belonging—Stressful Life Events and Mental Health Symptoms in Aging Men—A Longitudinal Cohort Study.  Frontiers in psychiatry ,  11 , 1148. ↵
  • Kleschinsky, J. H., Bosworth, L. B., Nelson, S. E., Walsh, E. K., & Shaffer, H. J. (2009). Persistence pays off: follow-up methods for difficult-to-track longitudinal samples.  Journal of studies on alcohol and drugs ,  70 (5), 751-761. ↵
  • Silver, N. (2021, March 25). The death of polling is greatly exaggerated. FiveThirtyEight . Retrieved from: https://fivethirtyeight.com/features/the-death-of-polling-is-greatly-exaggerated/ ↵
  • Babbie, E. (2010). The practice of social research (12th ed.). Belmont, CA: Wadsworth. ↵
  • Peterson, R. A. (2000).  Constructing effective questionnaires . Thousand Oaks, CA: Sage. ↵
  • Babbie, E. (2010). The practice of social research (12th ed.). Belmont, CA: Wadsworth; Dillman, D. A. (2000). Mail and Internet surveys: The tailored design method (2nd ed.). New York, NY: Wiley; Neuman, W. L. (2003). Social research methods: Qualitative and quantitative approaches (5th ed.). Boston, MA: Pearson. ↵
  • Babbie, E. (2010). The practice of social research  (12th ed.). Belmont, CA: Wadsworth. ↵
  • Hopper, J. (2010). How long should a survey be? Retrieved from  http://www.verstaresearch.com/blog/how-long-should-a-survey-be ↵
  • Sudman, S., Bradburn, N. M., & Schwarz, N. (1996).  Thinking about answers: The application of cognitive processes to survey methodology . San Francisco, CA: Jossey-Bass. ↵
  • Chang, L., & Krosnick, J.A. (2003). Measuring the frequency of regular behaviors: Comparing the ‘typical week’ to the ‘past week’.  Sociological Methodology, 33 , 55-80. ↵
  • Schwarz, N., & Strack, F. (1990). Context effects in attitude surveys: Applying cognitive theory to social research. In W. Stroebe & M. Hewstone (Eds.),  European review of social psychology  (Vol. 2, pp. 31–50). Chichester, UK: Wiley. ↵
  • Strack, F., Martin, L. L., & Schwarz, N. (1988). Priming and communication: The social determinants of information use in judgments of life satisfaction.  European Journal of Social Psychology, 18 , 429–442. ↵
  • Schwarz, N. (1999). Self-reports: How the questions shape the answers.  American Psychologist, 54 , 93–105. ↵
  • Miller, J.M. & Krosnick, J.A. (1998). The impact of candidate name order on election outcomes.  Public Opinion Quarterly, 62 (3), 291-330. ↵
  • Podsakoff, P. M., MacKenzie, S. B., Lee, J. Y., & Podsakoff, N. P. (2003). Common method biases in behavioral research: a critical review of the literature and recommended remedies. Journal of Applied Psychology, 88 (5), 879. ↵
  • Lindell, M. K., & Whitney, D. J. (2001). Accounting for common method variance in cross-sectional research designs. Journal of Applied Psychology, 86 (1), 114. ↵
  • This is why my ratemyprofessor.com score is so low. Or that's what I tell myself. ↵
  • Whiteford, L. M., & Trotter II, R. T. (2008). Ethics for anthropological research and practice . Waveland Press. ↵
  • Gordon, E. T. (1991). Anthropology and liberation. In F V Harrison (ed.) Decolonizing anthropology: Moving further toward an anthropology for liberation (pp. 149-167). Arlington, VA: American Anthropological Association. ↵
  • Kline, M. A., Shamsudheen, R., & Broesch, T. (2018). Variation is the universal: Making cultural evolution work in developmental psychology.  Philosophical Transactions of the Royal Society B: Biological Sciences ,  373 (1743), 20170059. Stearman, A. M. (1984). The Yuquí connection: Another look at Sirionó deculturation.  American Anthropologist ,  86 (3), 630-650. ↵
  • Hruschka, D. J., Munira, S., Jesmin, K., Hackman, J., & Tiokhin, L. (2018). Learning from failures of protocol in cross-cultural research.  Proceedings of the National Academy of Sciences ,  115 (45), 11428-11434. ↵
  • Broesch, T., Callaghan, T., Henrich, J., Murphy, C., & Rochat, P. (2011). Cultural variations in children’s mirror self-recognition.  Journal of Cross-Cultural Psychology ,  42 (6), 1018-1029. ↵
  • Van de Vijver, F. J., & Poortinga, Y. H. (1992). Testing in culturally heterogeneous populations: When are cultural loadings undesirable?. European Journal of Psychological Assessment . ↵
  • Waltz, C. F., Strickland, O. L., & Lenz, E. R. (Eds.). (2010). Measurement in nursing and health research (4th ed.) . Springer. ↵
  • Meiring, D., Van de Vijver, A. J. R., Rothmann, S., & Barrick, M. R. (2005). Construct, item and method bias of cognitive and personality tests in South Africa.  SA Journal of Industrial Psychology ,  31 (1), 1-8. ↵
  • Van de Vijver, F. J., & Poortinga, Y. H. (1992). Testing in culturally heterogeneous populations: When are cultural loadings undesirable?.  European Journal of Psychological Assessment . ↵
  • Barrett, H. C. (2020). Deciding what to observe: Thoughts for a post-WEIRD generation.  Evolution and Human Behavior ,  41 (5), 445-453. ↵
  • Henrich, J., Heine, S. J., & Norenzayan, A. (2010). Beyond WEIRD: Towards a broad-based behavioral science.  Behavioral and Brain Sciences ,  33 (2-3), 111. ↵
  • Tucker, B. (2017). From risk and time preferences to cultural models of causality: on the challenges and possibilities of field experiments, with examples from rural Southwestern Madagascar.  Impulsivity , 61-114. ↵
  • Schroeder, D., Chatfield, K., Singh, M., Chennells, R., & Herissone-Kelly, P. (2019).  Equitable research partnerships: a global code of conduct to counter ethics dumping . Springer Nature. ↵
  • Schuller, M. (2010). From activist to applied anthropologist to anthropologist? On the politics of collaboration.  Practicing Anthropology ,  32 (1), 43-47. ↵
  • First Nations Information Governance Centre. (2014). Ownership, control, access and possession (OCAP): The path to First Nations information governance. ↵
  • Fischer, R., & Poortinga, Y. H. (2018). Addressing methodological challenges in culture-comparative research.  Journal of Cross-Cultural Psychology ,  49 (5), 691-712. ↵
  • Hruschka, D. J. (2020). “What we look with” is as important as “what we look at.”  Evolution and Human Behavior ,  41 (5), 458-459. ↵
  • Purzycki, B. G., & Lang, M. (2019). Identity fusion, outgroup relations, and sacrifice: a cross-cultural test.  Cognition ,  186 , 1-6. ↵

The use of questionnaires to gather data from multiple participants.

the group of people you successfully recruit from your sampling frame to participate in your study

A research instrument consisting of a set of questions (items) intended to capture responses from participants in a standardized manner

Refers to research that is designed specifically to answer the question of whether there is a causal relationship between two variables.

A measure of a participant's condition before they receive an intervention or treatment.

A measure of a participant's condition after an intervention or, if they are part of the control/comparison group, at the end of an experiment.

a participant answers questions about themselves

the entities that a researcher actually observes, measures, or collects in the course of trying to learn something about her unit of analysis (individuals, groups, or organizations)

entity that a researcher wants to say something about at the end of her study (individual, group, or organization)

whether you can practically and ethically complete the research project you propose

Someone who is especially knowledgeable about a topic being studied.

a person who completes a survey on behalf of another person

In measurement, conditions that are subtle and complex, which we must use existing knowledge and intuition to define.

The ability of a measurement tool to measure a phenomenon the same way, time after time. Note: Reliability does not imply validity.

study publicly available information or data that has been collected by another person

When a researcher collects data only once from participants using a questionnaire

Researcher collects data from participants at multiple points over an extended period of time using a questionnaire.

A type of longitudinal survey where the researchers gather data at multiple times, but each time they ask different people from the group they are studying because their concern is capturing the sentiment of the group, not the individual people they survey.

A questionnaire that is distributed to participants (in person, by mail, virtually) to complete independently.

A questionnaire that is read to respondents

when a researcher administers a questionnaire verbally to participants

any possible changes in interviewee responses based on how or when the researcher presents question-and-answer options

Triangulation of data refers to the use of multiple types, measures or sources of data in a research project to increase the confidence that we have in our findings.

Testing out your research materials in advance on people who are not included as participants in your study.

items on a questionnaire designed to identify some subset of survey respondents who are asked additional questions that are not relevant to the entire sample

a question that asks more than one thing at a time, making it difficult to respond accurately

When a participant answers in a way that they believe is socially the most acceptable answer.

the answers researchers provide to participants to choose from when completing a questionnaire

questions in which the researcher provides all of the response options

Questions for which the researcher does not include response options, allowing for respondents to answer the question in their own words

respondents to a survey who choose neutral response options, even if they have an opinion

respondents to a survey who choose a substantive answer to a question when really, they don’t understand the question or don’t have an opinion

An ordered outline that includes your research question, a description of the data you are going to use to answer it, and the exact analyses, step-by-step, that you plan to run to answer your research question.

A process through which the researcher explains the research process, procedures, risks and benefits to a potential participant, usually through a written document, which the participant then signs as evidence of their agreement to participate.

a type of survey question that lists a set of questions for which the response options are all the same in a grid layout

unintended influences on respondents’ answers that are related not to the content of the item but to the context in which the item appears.

when the order in which the items are presented affects people’s responses

Social desirability bias occurs when we create questions that lead respondents to answer in ways that don't reflect their genuine thoughts or feelings to avoid being perceived negatively.

When respondents have difficulty providing accurate answers to questions due to the passage of time.

Common method bias refers to the amount of spurious covariance shared between independent and dependent variables that are measured at the same point in time.

If the majority of targeted respondents fail to respond to a survey, a legitimate concern is whether non-respondents failed to respond for a systematic reason. This may raise questions about the validity of the study’s results, especially the representativeness of the sample.

Sampling bias is present when our sampling process results in a sample that does not represent our population in some way.

the concept that scores obtained from a measure are similar when employed in different cultural populations

spurious covariance between your independent and dependent variables that is in fact caused by systematic error introduced by culturally insensitive or incompetent research practices

"when the construct measured is not identical across cultures or when behaviors that characterize the construct are not identical across cultures" (Meiring et al., 2005, p. 2)

Graduate research methods in social work Copyright © 2021 by Matthew DeCarlo, Cory Cummings, Kate Agnelli is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.


  • Open access
  • Published: 22 August 2024

Using the behaviour change wheel to develop a tailored intervention to overcome general practitioners’ perceived barriers to referring insomnia patients to the digital therapeutic Sleepio

  • Ohoud Alkhaldi 1 , 2 ,
  • Brian McMillan 3 &
  • John Ainsworth 1 , 4  

BMC Health Services Research, volume 24, Article number: 967 (2024)


Digital therapeutic Sleepio has proven effective in improving sleep quality and decreasing symptoms of anxiety. The National Institute for Health and Care Excellence (NICE) guidance recommends Sleepio as an alternative treatment to usual sleep hygiene education and hypnotic medications. General practitioners (GPs) play a critical role in the adoption of digital therapeutics in patient care. Previous interventions have not adopted theoretical frameworks to systematically understand GPs’ behaviour toward referring patients to digital therapeutics.

This study aimed to report the systematic and comprehensive development of an intervention to encourage GPs to refer insomnia patients to Sleepio, using the Behaviour Change Wheel (BCW).

The eight steps outlined in the BCW were followed to develop the intervention. The Capability Opportunity Motivation-Behaviour Self-Evaluation Questionnaire (COM-B-Qv1) was adopted to understand GPs’ perceived facilitators of, and barriers to, referring insomnia patients to Sleepio. The Behaviour Change Technique Taxonomy Version 1 (BCTv1) was then used to identify possible strategies for facilitating changes in GPs’ behaviour in relation to Sleepio.

The BCW design process resulted in the identification of five intervention functions, three policy categories and six behaviour change techniques (BCTs) as potential active components of the intervention. The intervention includes providing GPs with an orientation on using Sleepio to improve their knowledge and confidence, sending visual reminders to GPs to recommend Sleepio to their patients, and providing ongoing technical support.

The BCW can be successfully applied through a systematic process to understand the drivers of GPs’ behaviour and to develop an intervention that can encourage them to refer insomnia patients to Sleepio.


Introduction

Insomnia is a health condition that causes difficulty sleeping or a total lack of sleep [ 1 ]. It is estimated to affect one in three adults in the UK, with higher rates associated with being female, being older and the presence of comorbidities [ 2 , 3 ]. Insomnia is associated with a significant economic burden due to absenteeism, reduced productivity and impaired cognition and mood [ 4 ].

Guidelines recommend cognitive behavioural therapy for insomnia (CBT-I) rather than pharmacological therapy for the treatment of insomnia, due to reported side effects of drug treatment such as the possibility of developing long-term tolerance and addiction [ 5 ]. Studies have shown that benzodiazepine use is a significant risk factor for fall-related accidents among older adults [ 6 , 7 ].

CBT-I is a psychological treatment that guides patients to change their sleep-related behaviour through a series of techniques delivered in weekly sessions over five weeks [ 8 ]. However, difficulties surrounding CBT-I limit its prescription, such as issues with accessibility and cost, and poor response rates [ 8 ].

Sleepio is a digital therapeutic that uses CBT-I and can be accessed through self-referral or general practitioner (GP) referrals. The programme has proven effective in improving sleep quality and decreasing symptoms of anxiety and depression [ 9 ]. In an ambitious attempt to integrate digital therapeutics into patient care, NHS Scotland made Sleepio free of charge to all residents of Scotland in October 2020 [ 10 ].

For GPs, recommending digital therapeutics differs from the standard practice of prescribing medications. Evidence on interventions to improve GPs’ adoption of mobile health (mHealth) apps and/or digital therapeutics is limited. One interrupted time series study aimed to evaluate the cost-effectiveness of Sleepio in primary care settings in England. As part of that study, selected GP practices received implementation support involving training and digital prompts for GPs, the distribution of patient-centred resources and awareness materials, and tailored support [ 11 ]. Some of these implementation strategies targeted patients, while others were tailored to healthcare professionals (HCPs). The study found lower uptake in areas that lacked involvement in the implementation strategies, suggesting that future work should assess the impact of such resources on Sleepio referrals. In another study, the intervention included 40 min of Sleepio training for clinical staff, a protocol for all clinical assessments and a website with logistical tools to assist staff [ 12 ]. There were no measures of the intervention’s impact on GPs’ referral behaviour. Therefore, this paper describes the development of an intervention to encourage GPs to refer insomnia patients to Sleepio.

To understand the current behaviour and select the best interventions that will most likely be beneficial, the behaviour change wheel (BCW) will be used [ 13 ]. This framework provides a comprehensive approach to identifying sources of behaviour and classifying them into the capability, opportunity, motivation, and behaviour (COM-B) model. This model helps to understand the behaviour of interest and select the behaviour change techniques (BCTs) most likely to be effective. The COM-B model is designed to capture factors that affect the target behaviour (physical capability, psychological capability, physical opportunity, social opportunity, reflective motivation and automatic motivation).

The Theoretical Domains Framework (TDF) is a comprehensive framework used in behavioural science and implementation research to determine the key areas that affect behaviour change [ 14 ]. The TDF is used in conjunction with the COM-B model when a more detailed understanding of the behaviour is required.

Several studies in the literature have used the BCW framework to design interventions targeting GPs. One study aimed to assess the effectiveness of a COM-B-based intervention in promoting physical activity among GPs and their patients [ 15 ]. Another study explored the role of GPs in facilitating behaviour change using the TDF and BCW [ 16 ]. While no studies in the literature have focused on the use of the BCW with GPs in the context of mHealth, findings from other disciplines suggest that the BCW can be applied more broadly. Overall, the use of the BCW with GPs has proven valuable in understanding and addressing barriers to behaviour change.

The intervention development followed the BCW developed by Michie et al. [ 13 ]. The process consists of eight steps, as follows:

Step 1: Define the problem

We reviewed the literature regarding barriers to and facilitators of prescribing mHealth apps in general from the point of view of HCPs. By doing this, we aimed to cover all factors that affect behaviour, including environmental, physical and social contexts.

Step 2: Select the target behaviour

In this step, all possible factors that affect behaviour and could be targeted in the intervention were investigated. To determine the target behaviour, the literature on GPs’ use of evidence-based digital therapeutics in the UK was reviewed.

Step 3: Specify the target behaviour

After selecting the target behaviour, specific details were identified, such as who would perform it, what must be done to achieve the desired change and when, where and how often it needed to be done.

Step 4: Identify what needs to change

This step involved behavioural analysis using the COM-B model (capability, opportunity and motivation to recommend a digital therapeutic) to identify which component of the model needed to change to achieve the desired results. The COM-B self-evaluation questionnaire (COM-B-Qv1) was used to understand what it would take for participants to change their behaviour.

Survey development and analysis

GPs in Scotland were invited to take part in the survey because Sleepio is only available for GP referrals in Scotland. The survey was prepared using Qualtrics ( www.qualtrics.com ) and distributed online, mainly through the primary care research network in Scotland. The NHS Research Scotland (NRS) Primary Care Network is a unit that supports researchers in recruiting participants using electronic databases [ 17 ].

The survey (Supplementary file 1 ) consisted of 27 items, including questions relating to demographics, GPs’ behaviour in terms of recommending Sleepio, and questions based on the COM-B self-evaluation questionnaire (COM-B-Qv1), which is recommended for collecting data during the BCW intervention development process [ 13 ].

The 18-item COM-B-Qv1 scale had a Cronbach’s alpha of 0.910, with 6-item subscale alphas of 0.793 for capabilities, 0.853 for opportunities and 0.812 for motivations. The full scale is available in Supplementary File 1, along with the survey questions.
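For reference, Cronbach’s alpha can be computed directly from the item-response matrix. The sketch below uses made-up scores for a hypothetical 6-item subscale, not the study’s actual responses:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix.

    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)
    """
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of respondents' totals
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 1-5 Likert scores: 5 respondents x 6 items (illustrative only)
scores = np.array([
    [4, 4, 5, 4, 3, 4],
    [2, 3, 2, 2, 3, 2],
    [5, 5, 4, 5, 5, 5],
    [3, 2, 3, 3, 2, 3],
    [4, 5, 4, 4, 4, 5],
])
print(round(cronbach_alpha(scores), 3))  # → 0.955
```

Because these items are highly correlated by construction, the alpha is high; an alpha of exactly 1 occurs only when every item carries identical information.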

The survey was pilot-tested among 12 GPs and 5 health informatics PhD students at the University of Manchester, and feedback was sought about survey items, the format and the time taken for completion before finalising the survey.

GPs were provided with information about the survey (e.g. the background and importance of the study, its purpose, the potential benefits of taking part, and how their personal information would be stored and processed) via an information sheet on the online survey platform. GPs provided consent by selecting a checkbox to confirm that they agreed with the information provided and were happy to participate in the study. Participants received no compensation for completing the questionnaire. The estimated time to complete the survey was 4–7 min, and the survey was available between February and April 2023 (75 days).

Questionnaire data were analysed using IBM SPSS Statistics V.25, including the descriptive data analysis of participant characteristics. The responses from the 5-point Likert scales were combined to create a 3-point scale by combining ‘agree’ and ‘strongly agree’ and ‘disagree’ and ‘strongly disagree’. The frequency and percentage of each COM-B statement response were calculated.
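As a sketch of this recoding step (with made-up responses, since the raw data are not reproduced here), pandas handles the 5-to-3-point collapse and the frequency/percentage tabulation in a few lines:

```python
import pandas as pd

# Hypothetical responses to one COM-B statement -- illustrative only,
# not the study's data.
responses = pd.Series([
    "Strongly agree", "Agree", "Neutral", "Disagree", "Agree",
    "Strongly disagree", "Agree", "Neutral", "Strongly agree", "Disagree",
])

# Collapse the 5-point scale to 3 points, as described above.
collapse = {
    "Strongly agree": "Agree", "Agree": "Agree",
    "Neutral": "Neutral",
    "Disagree": "Disagree", "Strongly disagree": "Disagree",
}
three_point = responses.map(collapse)

# Frequency and percentage of each collapsed response option.
summary = pd.DataFrame({
    "n": three_point.value_counts(),
    "percent": (three_point.value_counts(normalize=True) * 100).round(1),
})
print(summary)  # Agree: 5 (50.0%), Disagree: 3 (30.0%), Neutral: 2 (20.0%)
```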

Step 5 & 6: Identify intervention functions and select policy categories

Based on the findings of the questionnaire, we used the BCW to select the most appropriate intervention functions to design the intervention.

Step 7: Select behaviour change techniques

Michie et al. identified 93 possible BCTs, each linked to intervention functions [ 18 ]. In this step, we selected the most effective techniques to produce a successful change in GPs’ prescribing behaviour.

Step 8: Determine the mode of delivery

After selecting BCTs, it is important to consider the mode or modes of delivery most appropriate for the target behaviour. In this step, we considered the difficulty of engaging GPs in research for reasons such as workload and lack of time.

Several studies have revealed that HCPs’ lack of knowledge and awareness of available apps are major barriers to incorporating them into patient care [ 19 , 20 , 21 ]. To overcome these barriers, several studies have emphasised the need to design training for GPs and other allied health professionals to improve their knowledge of the importance of prescribing mHealth apps to patients with long-term conditions [ 20 , 21 , 22 , 23 ]. The cost of using mobile health apps is also a concern for some GPs [ 24 ].

By reviewing the UK literature, we found that Sleepio is the first digital therapeutic to receive NICE guidance as an effective digital treatment for insomnia to be offered before sleeping drugs or sleep hygiene advice. Many studies have concluded that Sleepio is more effective than usual treatment in reducing symptoms of insomnia in adults [ 25 , 26 ]. However, GPs’ behaviour toward Sleepio remains unknown. We decided that the intervention should target GPs [ 27 ]. The study is expected to help GPs incorporate digital therapeutics into patient care and improve their confidence in recommending evidence-based apps.

After selecting the target behaviour, further details were determined by answering the questions in Table  1 . GPs can refer insomnia patients to Sleepio if Sleepio is deemed the right treatment option for them.

Step 4: Identify what needs to change

To identify what needed to change, we surveyed GPs in Scotland about their attitudes towards referring insomnia patients to Sleepio. Seventy participants responded to the questionnaire. Five questionnaires were incomplete, leaving sixty-five participants with a full set of data. Table  2 presents the participants’ demographic data.

Participants were asked to rate the extent to which they agreed with each statement. GPs’ ratings on questionnaire statements in each COM-B domain about what would make them recommend Sleepio to their patients are illustrated in Figs.  1 , 2 and 3 .

figure 1

GPs’ Responses to Capability Statements

figure 2

GPs’ Responses to Opportunity Statements

figure 3

GPs’ Responses to Motivation Statements

From the survey, it was determined that the intervention needed to target most components of the COM-B model, with a strong focus on psychological capability, physical opportunity and automatic and reflective motivation.

Psychological capability

About 57% (37/65) of participants reported that knowing the clinical evidence behind the digital therapeutic Sleepio would encourage them to offer it to their patients. Around 56% (36/65) reported knowing how to determine whether patients would benefit from Sleepio by assessing its clinical suitability and patients’ ability to engage with it. This highlights knowledge and memory, attention and decision processes as important TDF domains to address in the intervention.

Physical opportunity

The cost of digital therapeutics was significantly associated with a reduced likelihood of referring insomnia patients to Sleepio. Participants reported that they would refer patients to Sleepio if it was made freely available to them. Linking that with TDF domains, it was found that environmental context and resources were important in encouraging GPs to recommend Sleepio.

Automatic motivation

GPs reported that making changes to their prescribing habits would facilitate more frequent referrals to Sleepio. GPs need to discuss and recommend Sleepio for any patients who complain about their sleep patterns before prescribing medications. Reinforcement was found to be a crucial TDF domain for inclusion in the intervention.

Reflective motivation

GPs who believe that Sleepio can assist insomnia patients in regaining normal sleeping patterns are more likely to refer them to Sleepio. They need to believe that recommending Sleepio is the best practice. Therefore, targeting beliefs about the outcomes of the intervention would work as a facilitator for changing the target behaviour.

The COM-B behavioural analysis identified five intervention functions: education, training, environmental restructuring, enablement and persuasion. Policy categories that matched our intervention functions included communication/marketing (for instance, using verbal, electronic communication or flyers to improve knowledge of referring patients to Sleepio and health consequences of using Sleepio), guidelines (examples of which include informing GPs of steps for offering Sleepio) and environmental/social planning (e.g., sending a visual reminder to GPs to recommend Sleepio through emails) (Table  3 ).

In total, six behaviour change techniques were selected. The main BCTs selected for encouraging GPs to refer insomnia patients to Sleepio were information about health consequences, instruction on how to perform behaviour, prompts/cues, adding objects to the environment, self-monitoring of behaviour and credible sources.
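The resulting design can be summarised as a mapping from intervention functions to the BCTs named above. The pairing below is an illustrative sketch only: the function-to-BCT assignments are assumed for the example, not quoted from the study’s Table 3.

```python
# Hypothetical sketch: five intervention functions mapped to the six selected
# BCTs. The specific pairings are assumptions for illustration.
INTERVENTION_DESIGN = {
    "education": ["information about health consequences"],
    "training": ["instruction on how to perform the behaviour"],
    "environmental restructuring": ["prompts/cues",
                                    "adding objects to the environment"],
    "enablement": ["self-monitoring of behaviour"],
    "persuasion": ["credible sources"],
}

# Flatten the mapping to recover the full list of selected BCTs.
selected_bcts = [bct for bcts in INTERVENTION_DESIGN.values() for bct in bcts]
print(len(INTERVENTION_DESIGN), len(selected_bcts))  # → 5 6
```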

GPs reported concerns that changing their work habits might cause an extra burden on their daily work schedules. Therefore, to ensure GPs’ engagement, the intervention is designed to be delivered online on an individual level and can be accessed via computers at a convenient time.

This study provides a structured and detailed example of how to design an intervention to target GPs in primary care using the BCW. The BCW framework was used to systematically understand the target behaviour before the intervention was designed in terms of changes to capability, opportunity and motivation (the COM-B system).

GPs reported that they would recommend Sleepio if they had greater capability, especially in the psychological capability domain. This included having better knowledge of the convincing evidence of Sleepio’s benefits and knowing how to determine whether someone would benefit from it. This is in line with previous recommendations regarding the need to design training for GPs and other allied health professionals on prescribing mHealth apps to patients with long-term conditions [ 19 , 20 , 21 ].

However, the findings of this study showed that providing GPs with information about Sleepio may not be enough to produce a change in the target behaviour. The survey results indicated that in relation to opportunity, GPs would recommend Sleepio more often if it was freely available to patients. This indicates a potential lack of awareness among GPs regarding Sleepio’s existing availability for GP referral to all adults in Scotland. It was found that this factor was the overarching barrier to referring patients to digital therapeutic Sleepio. In line with the findings of the current study, a previous study reported that the cost of apps was significantly associated with the likelihood of prescribing digital health technologies, suggesting that as cost increases, the rate of digital health technology prescriptions falls [ 24 ]. Moreover, a review of an intervention study found that while providing education and skills training is likely to improve nutritionists’ self-efficacy, having the app easily and freely integrated into dietetic care is essential to influence the prescribing of apps [ 28 ].

In relation to motivation to recommend Sleepio, GPs reported needing to develop a pattern of doing it routinely and have a stronger sense that it is best practice. GPs’ responses concerning motivation reflected that while they had a strong motivation to incorporate digital therapeutics into patient care or develop habits of recommending Sleepio to their patients, they may not have done so primarily due to the perceived difficulty of accessing Sleepio (opportunity) or a lack of knowledge (capability). Therefore, addressing barriers related to opportunity and capability is likely to produce changes in motivation [ 29 ].

In a future intervention, a number of BCTs will be included to maximise successful changes in the target behaviour, such as the inclusion of evidence and scientific rationale for using a digital therapeutic (Sleepio) to treat insomnia, providing GPs with clear steps for offering Sleepio and making sure that GPs are aware that Sleepio is free to all adults in Scotland. Additionally, to address GPs’ concerns about the increased workload and time demands when apps are integrated into daily work activities [ 30 , 31 , 32 ], interventions should be delivered online. This will allow GPs to access the training materials at their convenience. We are in the process of designing and piloting an intervention to improve GP referrals to Sleepio as an alternative to usual treatments for insomnia.

Strengths and limitations

This study has a number of strengths. To the authors’ knowledge, this is the first study to investigate the influence of both behavioural and environmental determinants on GPs’ referral attitudes towards the digital therapeutic Sleepio.

The BCW provided a systematic approach to achieving a better understanding of GPs’ perceived barriers to incorporating this digital therapeutic in routine care, and it was found effective in designing an intervention to target GPs’ needs. Furthermore, the COM-B self-evaluation questionnaire (COM-B-Qv1) provided information on self-reported behavioural determinants of Sleepio referrals, which enabled participants to consider a wide spectrum of factors relating to these (i.e. the capability, opportunity and motivation subscales).

With regard to study limitations, we recruited GPs in Scotland only, which reduces the study’s generalisability. Generalisability was also affected by the limited sample size, although the study was advertised through a research network in Scotland. To enhance generalisability, future studies should combine multiple recruitment approaches, such as using social media and targeting GPs at conferences. A possible source of bias is that GPs who were interested in mobile health and cognitive behavioural therapy may have been more motivated to complete the survey, leading to self-selection bias.

Conclusions

This study identified a number of intervention components that can be applied to encourage GPs to recommend Sleepio for CBT-I treatment as an alternative to medications for insomnia. This study highlighted the importance of interventions targeting multiple levels of behaviour to produce change. Six BCTs were identified as core methods that affect psychological capability, physical opportunity, automatic motivation and reflective motivation of GPs’ behaviour with regard to referring patients to Sleepio. Future studies should evaluate the feasibility of an intervention based on the findings reported here.

Data availability

The authors confirm that the data supporting the findings of this study are available within the article and its supplementary materials.

Abbreviations

BCT: Behaviour change technique
BCTv1: Behaviour Change Technique Taxonomy Version 1
BCW: Behaviour Change Wheel
CBT-I: Cognitive behavioural therapy for insomnia
COM-B: Capability, Opportunity, Motivation and Behaviour model
COM-B-Qv1: COM-B Self-Evaluation Questionnaire
GP: General practitioner
HCP: Healthcare professional
NICE: National Institute for Health and Care Excellence
NRS: NHS Research Scotland
TDF: Theoretical Domains Framework

NICE, Insomnia. What is it? 2024. https://cks.nice.org.uk/topics/insomnia/background-information/definition/ . Accessed 27 Feb 2024.

Saddichha S. Diagnosis and treatment of chronic insomnia. 2010. https://doi.org/10.4103/0972-2327.64628

NHS Inform. Insomnia. 2023. https://www.nhsinform.scot/illnesses-and-conditions/mental-health/insomnia/#introduction . Accessed 18 Dec 2023.

Daley M, Morin CM, LeBlanc M, Grégoire J-P, Savard J. The economic burden of insomnia: direct and indirect costs for individuals with insomnia syndrome, insomnia symptoms, and good sleepers. Sleep. 2009;32:55–64.


Curran HV, Collins RH, Fletcher SL, Kee S, Woods B, Iliffe S. Older adults and withdrawal from benzodiazepine hypnotics in general practice: effects on cognitive function, sleep, mood and quality of life. Psychol Med. 2003;33:1223–37.


Panneman MJM, Goettsch WG, Kramarz P, Herings RMC. The costs of benzodiazepine-associated hospital-treated fall injuries in the EU: a Pharmo study. Drugs Aging. 2003;20:833–9.


Neutel CI, Perry S, Maxwell C. Medication use and risk of falls. Pharmacoepidemiol Drug Saf. 2002;11:97–104.

Madari S, Golebiowski R, Mansukhani MP, Kolla BP. Pharmacological management of Insomnia. Neurotherapeutics. 2021;18:44–52.


Espie CA, Emsley R, Kyle SD, Gordon C, Drake CL, Siriwardena AN, et al. Effect of Digital Cognitive Behavioral Therapy for Insomnia on Health, Psychological Well-being, and sleep-related quality of life: a Randomized Clinical Trial. JAMA Psychiatry. 2019;76:21–30.

Downey A. Digital therapeutics part of NHS Scotland services in ‘world-first’ deal. Digital health. 2021. https://www.digitalhealth.net/2021/10/digital-therapeutics-part-of-nhs-scotland-services-in-world-first-deal/ . Accessed 27 Feb 2024.

Sampson C, Bell E, Cole A, Miller CB, Marriott T, Williams M, et al. Digital cognitive behavioural therapy for insomnia and primary care costs in England: an interrupted time series analysis. BJGP Open. 2022. https://doi.org/10.3399/BJGPO.2021.0146 .

Stott R, Pimm J, Emsley R, Miller CB, Espie CA. Does adjunctive digital CBT for Insomnia improve clinical outcomes in an improving access to psychological therapies service? Behav Res Ther. 2021;144:103922.

Michie S, van Stralen MM, West R. The behaviour change wheel: a new method for characterising and designing behaviour change interventions. Implement Sci. 2011. https://doi.org/10.1186/1748-5908-6-42 .

Atkins L, Francis J, Islam R, O’Connor D, Patey A, Ivers N, et al. A guide to using the theoretical domains Framework of behaviour change to investigate implementation problems. Implement Sci. 2017;12:77.

Reid H, Smith R, Williamson W, Baldock J, Caterson J, Kluzek S, et al. Use of the behaviour change wheel to improve everyday person-centred conversations on physical activity across healthcare. BMC Public Health. 2022;22:1784.

Mather M, Pettigrew LM, Navaratnam S. Inf Syst Rev. 2022;11:180.


NHS. Primary Care. The NRS Primary Care Network. 2023. https://www.nhsresearchscotland.org.uk/research-areas/primary-care . Accessed 18 Dec 2023.

Michie S, Johnston M, Francis J, Hardeman W, Eccles M. From theory to intervention: Mapping theoretically derived behavioural determinants to Behaviour Change techniques. Appl Psychol. 2008;57:660–80.

Byambasuren O, Beller E, Glasziou P. Current knowledge and Adoption of Mobile Health apps among Australian General practitioners: Survey Study. JMIR Mhealth Uhealth. 2019;7:e13199.

Byambasuren O, Beller E, Hoffmann T, Glasziou P. mHealth App prescription in Australian General Practice: Pre-post Study. JMIR Mhealth Uhealth. 2020;8:e16497.

Kayyali R, Peletidi A, Ismail M, Hashim Z, Bandeira P, Bonnah J. Awareness and use of mHealth apps: a study from England. Pharmacy. 2017;5.

Zhang Y, Li X, Luo S, Liu C, Xie Y, Guo J, et al. Use, perspectives, and attitudes regarding Diabetes Management Mobile apps among diabetes patients and diabetologists in China: National web-based survey. JMIR Mhealth Uhealth. 2019;7:e12658.

Slevin P, Kessie T, Cullen J, Butler MW, Donnelly SC, Caulfield B. A qualitative study of chronic obstructive pulmonary disease patient perceptions of the barriers and facilitators to adopting digital health technology. Digit Health. 2019;5:2055207619871729–2055207619871729.

Leigh S, Ashall-Payne L, Andrews T. Barriers and facilitators to the Adoption of Mobile Health among Health Care professionals from the United Kingdom: Discrete Choice Experiment. JMIR Mhealth Uhealth. 2020. https://doi.org/10.2196/17704

Espie CA, Kyle SD, Williams C, Ong JC, Douglas NJ, Hames P, et al. A randomized, placebo-controlled trial of online cognitive behavioral therapy for chronic insomnia disorder delivered via an automated media-rich web application. Sleep. 2012;35:769–81.

Henry AL, Miller CB, Emsley R, Sheaves B, Freeman D, Luik AI, et al. Does treating insomnia with digital cognitive behavioural therapy (Sleepio) mediate improvements in anxiety for those with insomnia and comorbid anxiety? An analysis using individual participant data from two large randomised controlled trials. J Affect Disord. 2023;339:58–63.

NICE. NICE recommends offering app-based treatment for people with insomnia instead of sleeping pills. 2022. https://www.nice.org.uk/news/article/nice-recommends-offering-app-based-treatment-for-people-with-insomnia-instead-of-sleeping-pills

Chen J, Allman-Farinelli M. Impact of training and Integration of Apps into Dietetic Practice on dietitians’ self-efficacy with using Mobile Health apps and patient satisfaction. JMIR Mhealth Uhealth. 2019;7:e12349.

Michie S, Atkins LWR. The Behaviour Change Wheel: a Guide to Designing interventions. London: Silverback Publishing; 2014.


Makhni S, Zlatopolsky R, Fasihuddin F, Aponte M, Rogers J, Atreja A. Usability and Learnability of RxUniverse, an Enterprise-Wide App Prescribing Platform Used in an Academic Tertiary Care Hospital. AMIA Annu Symp Proc. 2017;2017:1225–32.

Lopez Segui F, Pratdepadua Bufill C, Abdon Gimenez N, Martinez Roldan J, Garcia Cuyas F. The prescription of mobile apps by primary care teams: a Pilot Project in Catalonia. JMIR Mhealth Uhealth. 2018;6:e10701.

Sarradon-Eck A, Bouchez T, Auroy L, Schuers M, Darmon D. Attitudes of General Practitioners toward Prescription of Mobile Health apps: qualitative study. JMIR Mhealth Uhealth. 2021;9:e21795.


Acknowledgements

This research is part of a PhD study sponsored by the Ministry of Education in Saudi Arabia. Professor Ainsworth is funded by the National Institute for Health and Care Research (NIHR) Manchester Biomedical Research Centre. Brian McMillan is funded by an NIHR Advanced Fellowship (reference: NIHR300887). The views expressed are those of the authors and not necessarily those of the NHS, the NIHR or the Department of Health and Social Care.

Funding

Not applicable.

Author information

Authors and Affiliations

Division of Informatics, Imaging and Data Sciences, School of Health Sciences, Faculty of Biology, Medicine and Health, The University of Manchester, Oxford Road, Manchester, M13 9PM, UK

Ohoud Alkhaldi & John Ainsworth

Health Information Management and Technology Department, College of Public Health, Imam Abdulrahman Bin Faisal University, Dammam, Saudi Arabia

Ohoud Alkhaldi

Centre for Primary Care and Health Services Research, Division of Population Health, Health Services Research and Primary Care, School of Health Sciences, Faculty of Biology, Medicine and Health, The University of Manchester, Manchester, M13 9PM, UK

Brian McMillan

NIHR Manchester Biomedical Research Centre, Manchester University Hospitals NHS Foundation Trust, Manchester Academic Health Science Centre, Manchester, UK

John Ainsworth


Contributions

The authors confirm contribution to the paper as follows: Study conception and design: OA, BM, JA; Data collection: OA; Analysis and interpretation of results: OA; Draft manuscript preparation: OA, BM, JA; All authors reviewed the results and approved the final version of the manuscript.

Corresponding author

Correspondence to Ohoud Alkhaldi .

Ethics declarations

Ethics approval and consent to participate

Ethics approval was not required because this study involves healthcare staff by virtue of their professional role and presents no material ethical issues. All subjects gave their informed consent for inclusion before they participated in the study.

Consent for publication

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary Material 1

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article

Alkhaldi, O., McMillan, B. & Ainsworth, J. Using the behaviour change wheel to develop a tailored intervention to overcome general practitioners’ perceived barriers to referring insomnia patients to digital therapeutic Sleepio. BMC Health Serv Res 24, 967 (2024). https://doi.org/10.1186/s12913-024-11384-3

Download citation

Received: 14 May 2024

Accepted: 01 August 2024

Published: 22 August 2024

DOI: https://doi.org/10.1186/s12913-024-11384-3


Keywords

  • Behaviour change
  • Digital therapeutic

BMC Health Services Research

ISSN: 1472-6963
