The Guide to Interview Analysis
- What is Interview Analysis?
- Advantages of Interviews in Research
- Disadvantages of Interviews in Research
- Ethical Considerations in Interviews
Introduction
How to prepare for an interview, how to create an interview guide, common errors when preparing for an interview.
- Recruitment & Sampling for Research Interviews
- Interview Design
- How to Formulate Interview Questions
- Rapport in Interviews
- Social Desirability Bias
- Interviewer Effect
- Types of Research Interviews
- Face-to-Face Interviews
- Focus Group Interviews
- Email Interviews
- Telephone Interviews
- Stimulated Recall Interviews
- Interviews vs. Surveys
- Interviews vs Questionnaires
- Interviews and Interrogations
- How to Transcribe Interviews?
- Verbatim Transcription
- Clean Interview Transcriptions
- Manual Interview Transcription
- Automated Interview Transcription
- How to Annotate Research Interviews?
- Formatting and Anonymizing Interviews
- Analyzing Interviews
- Coding Interviews
- Reporting & Presenting Interview Findings
In qualitative research, interviews are invaluable for gathering rich, detailed insights into participants' experiences, perceptions, and emotions. However, the success of these interviews relies heavily on thorough preparation, which ensures that the interview process is both effective and ethical. Without proper planning, researchers risk collecting shallow or irrelevant data, which can undermine the integrity of their study. This article explores the steps necessary to prepare qualitative research interviews, common errors to avoid, and why meticulous preparation is critical for obtaining valuable data.
The importance of preparing for an interview in qualitative research cannot be overstated. Effective interview preparation facilitates smooth interviews, yielding high-quality data while respecting the participants' rights and comfort. A well-prepared interviewer develops thoughtful, open-ended interview questions directly linked to the research objectives, allowing for richer, more detailed responses. Preparation also allows the interviewer to anticipate potential challenges, such as logistical issues or sensitive topics, and address them proactively. In essence, thorough interview preparation is essential to ensure the ethical conduct of the research and the collection of meaningful, insightful data.
Preparation involves developing questions and becoming familiar with the participant's background and context. This helps build rapport and encourages participants to share openly during the interview. When participants trust the interviewer, they are more likely to provide honest, detailed responses, which enhances the quality of the data. It also helps the researcher to follow up on points of interest more effectively, as familiarity with the participant’s context enables deeper and more informed probing.
Well-prepared interviews minimize the risk of unproductive tangents. You’re less likely to ask leading questions or get sidetracked when you go into an interview with a clear strategy. It also prepares you for potential challenges, such as participants providing incomplete answers or hesitating to engage. Anticipating these issues allows you to have strategies in place to manage them, ensuring the interview remains productive.
In qualitative research, preparing for an interview is not just about drafting questions—it's about creating a conducive environment for meaningful conversation, ensuring the collection of rich, relevant data, and ultimately contributing to the overall rigour of the research.
The success of an interview is based on the quality of the preparation and research conducted before the interview. Preparation will lead to the best questions which will lead to the best data collection possible. Here are some important reminders when preparing for an interview.
Understanding the research question
The foundation of any qualitative interview lies in a clear and well-defined research question. This question shapes the interview questions you explore with your participants and determines the data you will collect. For interview preparation, research is essential to develop a thorough understanding of the topic. Researchers must review existing literature and justify the need for their research to ensure the interview questions address unexplored areas and create meaningful discussions.
Questions should encourage participants to talk freely and in detail, so the interviewer can gather rich information. For example, an interviewer might ask participants to describe a specific experience rather than asking questions that can be answered with a simple "yes" or "no". By developing a clear interview guide, the interviewer can create questions that link directly to the research question while allowing for open-ended answers.
Developing an effective interview guide
An interview guide is a structured framework used in qualitative research to direct the conversation during interviews. It is an essential tool for maintaining focus while allowing for flexibility during the interview. The guide typically consists of a list of open-ended questions or key topics for exploring participants' experiences, opinions, and feelings related to the research topic.
The purpose of an interview guide is twofold. First, it ensures that all relevant topics are covered across different interviews, enhancing consistency. Second, it allows interviewers to probe further into participants’ responses, encouraging deeper insights that align with the research objectives. Although it provides structure, the guide is not rigid, allowing for deviations based on the natural flow of the conversation, ensuring richer data collection.
In qualitative research, interview guides are typically used in semi-structured or unstructured interviews. They are especially useful for creating a balance between guiding the discussion and giving the interviewee enough freedom to share detailed, meaningful information.
Pilot testing
Before conducting the actual interview, a pilot interview is an important step. It allows the interviewer to practice conducting the interview and test the flow of the interview guide. Through a review of the pilot interview, interviewers can identify unclear or irrelevant questions and make adjustments accordingly. This process also helps interviewers estimate the time required for each interview and ensures that the guide covers all relevant topics without overwhelming the participant.
Pilot testing also gives the researcher a chance to practice asking questions naturally, adjusting to the conversational flow that qualitative interviews often require. Pilot tests can be very helpful, significantly developing a researcher's knowledge and understanding of what to expect for the actual interview.
Preparing for ethical considerations
Ethical interview preparation is essential for ensuring that participants provide informed consent and are aware of their rights throughout the study. Ethical considerations are of paramount importance to protect participants’ privacy and emotional well-being. Participants must feel secure that their responses will be kept confidential, and the interviewer must anticipate any potentially sensitive topics that might arise.
Additionally, researchers should examine the emotional or psychological risks associated with certain topics and be prepared to offer support or referrals if needed. This ensures that the interview process remains respectful and professional while collecting useful research data.
Building rapport with participants
The interviewer's ability to build rapport with participants is a vital skill that greatly influences each interview. Establishing trust at the beginning of the interview helps participants feel at ease and encourages them to talk openly. By focusing on the person rather than just the data, interviewers can facilitate more natural conversations.
Strong communication skills are necessary to maintain the flow of conversation and keep the focus on the topic. During the interviewing process, the interviewer must practice active listening, demonstrate empathy, and avoid rushing participants through their responses.
Before the interview, researchers should familiarize themselves with the participants' backgrounds, where relevant, to understand their context. During the interview, the researcher should encourage open dialogue while maintaining a non-judgmental stance. This rapport enables participants to provide richer, more detailed responses, ultimately enhancing the quality of the data collected.
Anticipating logistical challenges
Careful attention to logistics is also part of effective interview preparation. Whether the interview is in-person or remote, the interviewer must ensure the setting is conducive to open and comfortable communication. For example, interviewers should choose a quiet location and test any necessary equipment beforehand.
Being well-prepared and anticipating potential issues, such as technical difficulties or external distractions, helps to foster an enjoyable interview and allows the interviewer to focus on obtaining valuable data.
Scheduling flexibility is also important, as participants' availability may vary. Researchers should be prepared to accommodate different time zones or personal schedules to facilitate participation. This attention to logistics helps create a smooth and uninterrupted interview experience.
Analyze your interviews with ATLAS.ti
Turn your transcriptions into key insights with our powerful tools. Download a free trial today.
Designing an interview guide in qualitative research is essential for structuring in-depth conversations that capture participants' nuanced experiences and reflections. Seidman (2006) emphasizes a flexible, participant-centered approach to guide development, which balances broad thematic direction with open-ended questions, facilitating a natural conversational flow. Below, we outline key considerations for constructing an interview guide that promotes meaningful engagement in qualitative studies:
Define the research question and objectives : Clarifying the research’s focus and the specific experiences under investigation provides the foundation for an effective interview guide (Seidman, 2006). By establishing a clear scope, researchers can ensure that the guide remains aligned with the core objectives of the study.
Create a framework that outlines the main concepts to study : structuring interview guides by organizing questions into overarching themes rather than prescribing fixed questions. The guide should accommodate a series of themes that broadly frame the participant’s narrative. In Seidman’s three-interview series model, these themes progress from a life history focus to current lived experience, and finally to reflection on meaning. Each theme allows the researcher to understand the participant’s experiences deeply and within the context of their personal narrative.
Design open-ended questions about each concept : The questions have to be in every-day language, not include jargon, and reflect on operational definitions of concepts to think about how questions could be crafted about that concept. The effectiveness of an interview guide lies in its capacity to elicit detailed and authentic responses. Seidman (2006) underscores the importance of open-ended questions, which invite participants to reconstruct their experiences rather than simply answer direct prompts. Questions should encourage narrative depth, for example: "Can you describe how you first became interested in your field?" "What does a typical day look like for you in your role?" These non-directive questions foster participant agency, enabling them to emphasize details personally significant to their experience.
Verify the order of questions : Questions need to go from broad to more specific, or more sensitive, challenging and direct. Make sure you have opening and closing questions.
Add prompts and probes to questions : While open-ended questions drive the interview, it is recommended to use specific prompts to deepen understanding when necessary. Prompts such as “Can you give an example?” or “What were your thoughts at that moment?” allow the interviewer to explore topics that emerge naturally from participants' responses, while still maintaining a non-intrusive approach.
Minimize directing the conversation : To capture the participant’s authentic experience, the guide should avoid leading or suggestive questions that may direct responses toward particular themes or values. Instead of prompting participants with, "Do you find your work rewarding?" researchers might ask, "How do you feel about your experience in this role?". This non-directive approach ensures that participants provide insights shaped by their unique perspectives rather than conforming to anticipated outcomes.
Allow for flexibility and adapt to participant responses : It is important to maintain flexibility in qualitative interviewing, allowing the guide to adapt according to the participant’s narrative. If a participant organically addresses certain themes, the researcher should adjust questions to avoid redundancy and follow the natural trajectory of the conversation. This flexibility is particularly beneficial in phenomenological approaches, where understanding participants’ lived experiences takes precedence over rigid adherence to a predefined question list.
Embrace silence as a tool for reflection : The guide should remind researchers to utilize silence and pauses strategically. According to Seidman (2006), allowing participants moments of reflection can often lead to deeper, more thoughtful insights. A well-designed guide includes prompts or notes encouraging the interviewer to embrace brief silences, providing space for participants to contemplate and expand upon their answers.
Pilot and revise the interview guide : It is important to check the wording of questions and revise for leading questions that may elicit social desirability bias and other errors.
By following these steps, you can create a well-structured interview guide that balances consistency with the flexibility to explore participant responses in depth. By focusing on open-ended questions and thematic progression, researchers can explore the depth and complexity of participants' lived experiences, fostering a qualitative study rich in insight and authenticity. This method not only respects participants' agency in the interview process but also aligns with qualitative research's broader goal of understanding the human experience from a contextually grounded perspective.
While thorough preparation is key, there are several common errors that researchers may encounter during the interview preparation process.
Overloading the interview guide
One of the most frequent mistakes is creating an interview guide that is too long or packed with too many questions. This can overwhelm participants and limit the depth of their responses. A well-prepared interview guide focuses on key themes and leaves room for follow-up questions, allowing participants to explore their thoughts more fully.
Failing to account for ethical concerns
Ethical considerations are often underestimated during preparation. Researchers may overlook the need for informed consent or underestimate the emotional impact of certain topics. Ensuring that participants understand their rights and the purpose of the study, along with obtaining approval from IRBs or ethics committees, is essential for conducting ethical interviews.
Neglecting rapport-building
Some researchers may jump straight into the interview questions without establishing rapport with the participant. This can create an overly formal or uncomfortable environment where participants may hesitate to share personal insights. Taking time to build trust at the start of the interview leads to more meaningful and open conversations.
Ignoring logistical issues
Poor logistical planning can disrupt the flow of an interview. For instance, technical difficulties during remote interviews, background noise, or interruptions can affect the participant’s comfort and willingness to share. Overlooking these details can undermine the effectiveness of the interview.
Effective interview preparation in qualitative research requires a balance between understanding the research topic, designing thoughtful questions, and planning the logistical and ethical aspects of the process. Researchers can conduct interviews that yield rich, insightful data by avoiding common errors such as overloading the interview guide, leading participants, or neglecting rapport. Properly prepared interviews enhance the quality of the research and ensure a respectful and ethical experience for participants.
- Bryman, A. (2016). Social research methods (5th ed.). Oxford University Press. https://global.oup.com/academic/product/social-research-methods-9780199689453
- Gerson, K., & Damaske, S. (2020). The science and art of interviewing. Oxford University Press. https://doi.org/10.1093/oso/9780199324286.001.0001
- Marecek, J., & Magnusson, E. (2015). Doing interview-based qualitative research: A learner’s guide. Cambridge University Press. https://doi.org/10.1017/CBO9781107449893.005
ATLAS.ti is there for you at every step of your interviews
From analyzing your transcript to the key insights, ATLAS.ti is there for you. See how with a free trial.
Chapter 11. Interviewing
Introduction.
Interviewing people is at the heart of qualitative research. It is not merely a way to collect data but an intrinsically rewarding activity—an interaction between two people that holds the potential for greater understanding and interpersonal development. Unlike many of our daily interactions with others that are fairly shallow and mundane, sitting down with a person for an hour or two and really listening to what they have to say is a profound and deep enterprise, one that can provide not only “data” for you, the interviewer, but also self-understanding and a feeling of being heard for the interviewee. I always approach interviewing with a deep appreciation for the opportunity it gives me to understand how other people experience the world. That said, there is not one kind of interview but many, and some of these are shallower than others. This chapter will provide you with an overview of interview techniques but with a special focus on the in-depth semistructured interview guide approach, which is the approach most widely used in social science research.
An interview can be variously defined as “a conversation with a purpose” ( Lune and Berg 2018 ) and an attempt to understand the world from the point of view of the person being interviewed: “to unfold the meaning of peoples’ experiences, to uncover their lived world prior to scientific explanations” ( Kvale 2007 ). It is a form of active listening in which the interviewer steers the conversation to subjects and topics of interest to their research but also manages to leave enough space for those interviewed to say surprising things. Achieving that balance is a tricky thing, which is why most practitioners believe interviewing is both an art and a science. In my experience as a teacher, there are some students who are “natural” interviewers (often they are introverts), but anyone can learn to conduct interviews, and everyone, even those of us who have been doing this for years, can improve their interviewing skills. This might be a good time to highlight the fact that the interview is a product between interviewer and interviewee and that this product is only as good as the rapport established between the two participants. Active listening is the key to establishing this necessary rapport.
Patton ( 2002 ) makes the argument that we use interviews because there are certain things that are not observable. In particular, “we cannot observe feelings, thoughts, and intentions. We cannot observe behaviors that took place at some previous point in time. We cannot observe situations that preclude the presence of an observer. We cannot observe how people have organized the world and the meanings they attach to what goes on in the world. We have to ask people questions about those things” ( 341 ).
Types of Interviews
There are several distinct types of interviews. Imagine a continuum (figure 11.1). On one side are unstructured conversations—the kind you have with your friends. No one is in control of those conversations, and what you talk about is often random—whatever pops into your head. There is no secret, underlying purpose to your talking—if anything, the purpose is to talk to and engage with each other, and the words you use and the things you talk about are a little beside the point. An unstructured interview is a little like this informal conversation, except that one of the parties to the conversation (you, the researcher) does have an underlying purpose, and that is to understand the other person. You are not friends speaking for no purpose, but it might feel just as unstructured to the “interviewee” in this scenario. That is one side of the continuum. On the other side are fully structured and standardized survey-type questions asked face-to-face. Here it is very clear who is asking the questions and who is answering them. This doesn’t feel like a conversation at all! A lot of people new to interviewing have this ( erroneously !) in mind when they think about interviews as data collection. Somewhere in the middle of these two extreme cases is the “ semistructured” interview , in which the researcher uses an “interview guide” to gently move the conversation to certain topics and issues. This is the primary form of interviewing for qualitative social scientists and will be what I refer to as interviewing for the rest of this chapter, unless otherwise specified.
Informal (unstructured conversations). This is the most “open-ended” approach to interviewing. It is particularly useful in conjunction with observational methods (see chapters 13 and 14). There are no predetermined questions. Each interview will be different. Imagine you are researching the Oregon Country Fair, an annual event in Veneta, Oregon, that includes live music, artisan craft booths, face painting, and a lot of people walking through forest paths. It’s unlikely that you will be able to get a person to sit down with you and talk intensely about a set of questions for an hour and a half. But you might be able to sidle up to several people and engage with them about their experiences at the fair. You might have a general interest in what attracts people to these events, so you could start a conversation by asking strangers why they are here or why they come back every year. That’s it. Then you have a conversation that may lead you anywhere. Maybe one person tells a long story about how their parents brought them here when they were a kid. A second person talks about how this is better than Burning Man. A third person shares their favorite traveling band. And yet another enthuses about the public library in the woods. During your conversations, you also talk about a lot of other things—the weather, the utilikilts for sale, the fact that a favorite food booth has disappeared. It’s all good. You may not be able to record these conversations. Instead, you might jot down notes on the spot and then, when you have the time, write down as much as you can remember about the conversations in long fieldnotes. Later, you will have to sit down with these fieldnotes and try to make sense of all the information (see chapters 18 and 19).
Interview guide ( semistructured interview ). This is the primary type employed by social science qualitative researchers. The researcher creates an “interview guide” in advance, which she uses in every interview. In theory, every person interviewed is asked the same questions. In practice, every person interviewed is asked mostly the same topics but not always the same questions, as the whole point of a “guide” is that it guides the direction of the conversation but does not command it. The guide is typically between five and ten questions or question areas, sometimes with suggested follow-ups or prompts . For example, one question might be “What was it like growing up in Eastern Oregon?” with prompts such as “Did you live in a rural area? What kind of high school did you attend?” to help the conversation develop. These interviews generally take place in a quiet place (not a busy walkway during a festival) and are recorded. The recordings are transcribed, and those transcriptions then become the “data” that is analyzed (see chapters 18 and 19). The conventional length of one of these types of interviews is between one hour and two hours, optimally ninety minutes. Less than one hour doesn’t allow for much development of questions and thoughts, and two hours (or more) is a lot of time to ask someone to sit still and answer questions. If you have a lot of ground to cover, and the person is willing, I highly recommend two separate interview sessions, with the second session being slightly shorter than the first (e.g., ninety minutes the first day, sixty minutes the second). There are lots of good reasons for this, but the most compelling one is that this allows you to listen to the first day’s recording and catch anything interesting you might have missed in the moment and so develop follow-up questions that can probe further. This also allows the person being interviewed to have some time to think about the issues raised in the interview and go a little deeper with their answers.
Standardized questionnaire with open responses ( structured interview ). This is the type of interview a lot of people have in mind when they hear “interview”: a researcher comes to your door with a clipboard and proceeds to ask you a series of questions. These questions are all the same whoever answers the door; they are “standardized.” Both the wording and the exact order are important, as people’s responses may vary depending on how and when a question is asked. These are qualitative only in that the questions allow for “open-ended responses”: people can say whatever they want rather than select from a predetermined menu of responses. For example, a survey I collaborated on included this open-ended response question: “How does class affect one’s career success in sociology?” Some of the answers were simply one word long (e.g., “debt”), and others were long statements with stories and personal anecdotes. It is possible to be surprised by the responses. Although it’s a stretch to call this kind of questioning a conversation, it does allow the person answering the question some degree of freedom in how they answer.
Survey questionnaire with closed responses (not an interview!). Standardized survey questions with specific answer options (e.g., closed responses) are not really interviews at all, and they do not generate qualitative data. For example, if we included five options for the question “How does class affect one’s career success in sociology?”—(1) debt, (2) social networks, (3) alienation, (4) family doesn’t understand, (5) type of grad program—we leave no room for surprises at all. Instead, we would most likely look at patterns around these responses, thinking quantitatively rather than qualitatively (e.g., using regression analysis techniques, we might find that working-class sociologists were twice as likely to bring up alienation). It can sometimes be confusing for new students because the very same survey can include both closed-ended and open-ended questions. The key is to think about how these will be analyzed and to what level surprises are possible. If your plan is to turn all responses into a number and make predictions about correlations and relationships, you are no longer conducting qualitative research. This is true even if you are conducting this survey face-to-face with a real live human. Closed-response questions are not conversations of any kind, purposeful or not.
In summary, the semistructured interview guide approach is the predominant form of interviewing for social science qualitative researchers because it allows a high degree of freedom of responses from those interviewed (thus allowing for novel discoveries) while still maintaining some connection to a research question area or topic of interest. The rest of the chapter assumes the employment of this form.
Creating an Interview Guide
Your interview guide is the instrument used to bridge your research question(s) and what the people you are interviewing want to tell you. Unlike a standardized questionnaire, the questions actually asked do not need to be exactly what you have written down in your guide. The guide is meant to create space for those you are interviewing to talk about the phenomenon of interest, but sometimes you are not even sure what that phenomenon is until you start asking questions. A priority in creating an interview guide is to ensure it offers space. One of the worst mistakes is to create questions that are so specific that the person answering them will not stray. Relatedly, questions that sound “academic” will shut down a lot of respondents. A good interview guide invites respondents to talk about what is important to them, not feel like they are performing or being evaluated by you.
Good interview questions should not sound like your “research question” at all. For example, let’s say your research question is “How do patriarchal assumptions influence men’s understanding of climate change and responses to climate change?” It would be worse than unhelpful to ask a respondent, “How do your assumptions about the role of men affect your understanding of climate change?” You need to unpack this into manageable nuggets that pull your respondent into the area of interest without leading him anywhere. You could start by asking him what he thinks about climate change in general. Or, even better, whether he has any concerns about heatwaves or increased tornadoes or polar icecaps melting. Once he starts talking about that, you can ask follow-up questions that bring in issues around gendered roles, perhaps asking if he is married (to a woman) and whether his wife shares his thoughts and, if not, how they negotiate that difference. The fact is, you won’t really know the right questions to ask until he starts talking.
There are several distinct types of questions that can be used in your interview guide, either as main questions or as follow-up probes. If you remember that the point is to leave space for the respondent, you will craft a much more effective interview guide! You will also want to think about the place of time in both the questions themselves (past, present, future orientations) and the sequencing of the questions.
Researcher Note
Suggestion : As you read the next three sections (types of questions, temporality, question sequence), have in mind a particular research question, and try to draft questions and sequence them in a way that opens space for a discussion that helps you answer your research question.
Type of Questions
Experience and behavior questions ask about what a respondent does regularly (their behavior) or has done (their experience). These are relatively easy questions for people to answer because they appear more “factual” and less subjective. This makes them good opening questions. For the study on climate change above, you might ask, “Have you ever experienced an unusual weather event? What happened?” Or “You said you work outside? What is a typical summer workday like for you? How do you protect yourself from the heat?”
Opinion and values questions , in contrast, ask questions that get inside the minds of those you are interviewing. “Do you think climate change is real? Who or what is responsible for it?” are two such questions. Note that you don’t have to literally ask, “What is your opinion of X?” but you can find a way to ask the specific question relevant to the conversation you are having. These questions are a bit trickier to ask because the answers you get may depend in part on how your respondent perceives you and whether they want to please you or not. We’ve talked a fair amount about being reflective. Here is another place where this comes into play. You need to be aware of the effect your presence might have on the answers you are receiving and adjust accordingly. If you are a woman who is perceived as liberal asking a man who identifies as conservative about climate change, there is a lot of subtext that can be going on in the interview. There is no one right way to resolve this, but you must at least be aware of it.
Feeling questions are questions that ask respondents to draw on their emotional responses. It’s pretty common for academic researchers to forget that we have bodies and emotions, but people’s understandings of the world often operate at this affective level, sometimes unconsciously or barely consciously. It is a good idea to include questions that leave space for respondents to remember, imagine, or relive emotional responses to particular phenomena. “What was it like when you heard your cousin’s house burned down in that wildfire?” doesn’t explicitly use any emotion words, but it allows your respondent to remember what was probably a pretty emotional day. And if they respond emotionally neutral, that is pretty interesting data too. Note that asking someone “How do you feel about X” is not always going to evoke an emotional response, as they might simply turn around and respond with “I think that…” It is better to craft a question that actually pushes the respondent into the affective category. This might be a specific follow-up to an experience and behavior question —for example, “You just told me about your daily routine during the summer heat. Do you worry it is going to get worse?” or “Have you ever been afraid it will be too hot to get your work accomplished?”
Knowledge questions ask respondents what they actually know about something factual. We have to be careful when we ask these types of questions so that respondents do not feel like we are evaluating them (which would shut them down), but, for example, it is helpful to know when you are having a conversation about climate change that your respondent does in fact know that unusual weather events have increased and that these have been attributed to climate change! Asking these questions can set the stage for deeper questions and can ensure that the conversation makes the same kind of sense to both participants. For example, a conversation about political polarization can be put back on track once you realize that the respondent doesn’t really have a clear understanding that there are two parties in the US. Instead of asking a series of questions about Republicans and Democrats, you might shift your questions to talk more generally about political disagreements (e.g., “people against abortion”). And sometimes what you do want to know is the level of knowledge about a particular program or event (e.g., “Are you aware you can discharge your student loans through the Public Service Loan Forgiveness program?”).
Sensory questions call on all senses of the respondent to capture deeper responses. These are particularly helpful in sparking memory. “Think back to your childhood in Eastern Oregon. Describe the smells, the sounds…” Or you could use these questions to help a person access the full experience of a setting they customarily inhabit: “When you walk through the doors to your office building, what do you see? Hear? Smell?” As with feeling questions , these questions often supplement experience and behavior questions . They are another way of allowing your respondent to report fully and deeply rather than remain on the surface.
Creative questions employ illustrative examples, suggested scenarios, or simulations to get respondents to think more deeply about an issue, topic, or experience. There are many options here. In The Trouble with Passion , Erin Cech ( 2021 ) provides a scenario in which “Joe” is trying to decide whether to stay at his decent but boring computer job or follow his passion by opening a restaurant. She asks respondents, “What should Joe do?” Their answers illuminate the attraction of “passion” in job selection. In my own work, I have used a news story about an upwardly mobile young man who no longer has time to see his mother and sisters to probe respondents’ feelings about the costs of social mobility. Jessi Streib and Betsy Leondar-Wright have used single-page cartoon “scenes” to elicit evaluations of potential racial discrimination, sexual harassment, and classism. Barbara Sutton ( 2010 ) has employed lists of words (“strong,” “mother,” “victim”) on notecards she fans out and asks her female respondents to select and discuss.
Background/Demographic Questions
You most definitely will want to know more about the person you are interviewing in terms of conventional demographic information, such as age, race, gender identity, occupation, and educational attainment. These are not questions that normally open up inquiry. [1] For this reason, my practice has been to include a separate “demographic questionnaire” sheet that I ask each respondent to fill out at the conclusion of the interview. Only include those aspects that are relevant to your study. For example, if you are not exploring religion or religious affiliation, do not include questions about a person’s religion on the demographic sheet. See the example provided at the end of this chapter.
Temporality
Any type of question can have a past, present, or future orientation. For example, if you are asking a behavior question about workplace routine, you might ask the respondent to talk about past work, present work, and ideal (future) work. Similarly, if you want to understand how people cope with natural disasters, you might ask your respondent how they felt then during the wildfire and now in retrospect and whether and to what extent they have concerns for future wildfire disasters. It’s a relatively simple suggestion—don’t forget to ask about past, present, and future—but it can have a big impact on the quality of the responses you receive.
Question Sequence
Having a list of good questions or good question areas is not enough to make a good interview guide. You will want to pay attention to the order in which you ask your questions. Even though any one respondent can derail this order (perhaps by jumping to answer a question you haven’t yet asked), a good advance plan is always helpful. When thinking about sequence, remember that your goal is to get your respondent to open up to you and to say things that might surprise you. To establish rapport, it is best to start with nonthreatening questions. Asking about the present is often the safest place to begin, followed by the past (they have to know you a little bit to get there), and lastly, the future (talking about hopes and fears requires the most rapport). To allow for surprises, it is best to move from very general questions to more particular questions only later in the interview. This ensures that respondents have the freedom to bring up the topics that are relevant to them rather than feel like they are constrained to answer you narrowly. For example, refrain from asking about particular emotions until these have come up previously—don’t lead with them. Often, your more particular questions will emerge only during the course of the interview, tailored to what is emerging in conversation.
Once you have a set of questions, read through them aloud and imagine you are being asked the same questions. Does the set of questions have a natural flow? Would you be willing to answer the very first question to a total stranger? Does your sequence establish facts and experiences before moving on to opinions and values? Did you include prefatory statements, where necessary; transitions; and other announcements? These can be as simple as “Hey, we talked a lot about your experiences as a barista while in college.… Now I am turning to something completely different: how you managed friendships in college.” That is an abrupt transition, but it has been softened by your acknowledgment of that.
Probes and Flexibility
Once you have the interview guide, you will also want to leave room for probes and follow-up questions. As in the sample probe included here, you can write out the obvious probes and follow-up questions in advance. You might not need them, as your respondent might anticipate them and include full responses to the original question. Or you might need to tailor them to how your respondent answered the question. Some common probes and follow-up questions include asking for more details (When did that happen? Who else was there?), asking for elaboration (Could you say more about that?), asking for clarification (Does that mean what I think it means or something else? I understand what you mean, but someone else reading the transcript might not), and asking for contrast or comparison (How did this experience compare with last year’s event?). “Probing is a skill that comes from knowing what to look for in the interview, listening carefully to what is being said and what is not said, and being sensitive to the feedback needs of the person being interviewed” ( Patton 2002:374 ). It takes work! And energy. I and many other interviewers I know report feeling emotionally and even physically drained after conducting an interview. You are tasked with active listening and rearranging your interview guide as needed on the fly. If you only ask the questions written down in your interview guide with no deviations, you are doing it wrong. [2]
The Final Question
Every interview guide should include a very open-ended final question that allows for the respondent to say whatever it is they have been dying to tell you but you’ve forgotten to ask. About half the time they are tired too and will tell you they have nothing else to say. But incredibly, some of the most honest and complete responses take place here, at the end of a long interview. You have to realize that the person being interviewed is often discovering things about themselves as they talk to you and that this process of discovery can lead to new insights for them. Making space at the end is therefore crucial. Be sure you convey that you actually do want them to tell you more, that the offer of “anything else?” is not read as an empty convention where the polite response is no. Here is where you can pull from that active listening and tailor the final question to the particular person. For example, “I’ve asked you a lot of questions about what it was like to live through that wildfire. I’m wondering if there is anything I’ve forgotten to ask, especially because I haven’t had that experience myself” is a much more inviting final question than “Great. Anything you want to add?” It’s also helpful to convey to the person that you have the time to listen to their full answer, even if the allotted time is at the end. After all, there are no more questions to ask, so the respondent knows exactly how much time is left. Do them the courtesy of listening to them!
Conducting the Interview
Once you have your interview guide, you are on your way to conducting your first interview. I always practice my interview guide with a friend or family member. I do this even when the questions don’t make perfect sense for them, as it still helps me realize which questions make no sense, are poorly worded (too academic), or don’t follow sequentially. I also practice the routine I will use for interviewing, which goes something like this:
- Introduce myself and reintroduce the study
- Provide consent form and ask them to sign and retain/return copy
- Ask if they have any questions about the study before we begin
- Ask if I can begin recording
- Ask questions (from interview guide)
- Turn off the recording device
- Ask if they are willing to fill out my demographic questionnaire
- Collect questionnaire and, without looking at the answers, place in same folder as signed consent form
- Thank them and depart
A note on remote interviewing: Interviews have traditionally been conducted face-to-face in a private or quiet public setting. You don’t want a lot of background noise, as this will make transcriptions difficult. During the recent global pandemic, many interviewers, myself included, learned the benefits of interviewing remotely. Although face-to-face is still preferable for many reasons, Zoom interviewing is not a bad alternative, and it does allow more interviews across great distances. Zoom also includes automatic transcription, which significantly cuts down on the time it normally takes to convert our conversations into “data” to be analyzed. These automatic transcriptions are not perfect, however, and you will still need to listen to the recording and clarify and clean up the transcription. Nor do automatic transcriptions include notations of body language or change of tone, which you may want to include. When interviewing remotely, you will want to collect the consent form before you meet: ask them to read, sign, and return it as an email attachment. I think it is better to ask for the demographic questionnaire after the interview, but because some respondents may never return it then, it is probably best to ask for this at the same time as the consent form, in advance of the interview.
What should you bring to the interview? I would recommend bringing two copies of the consent form (one for you and one for the respondent), a demographic questionnaire, a manila folder in which to place the signed consent form and filled-out demographic questionnaire, a printed copy of your interview guide (I print with three-inch right margins so I can jot down notes on the page next to relevant questions), a pen, a recording device, and water.
After the interview, you will want to secure the signed consent form in a locked filing cabinet (if in print) or a password-protected folder on your computer. Using Excel or a similar program that allows tables/spreadsheets, create an identifying number for your interview that links to the consent form without using the name of your respondent. For example, let’s say that I conduct interviews with US politicians, and the first person I meet with is George W. Bush. I will assign the transcription the number “INT#001” and add it to the signed consent form. [3] The signed consent form goes into a locked filing cabinet, and I never use the name “George W. Bush” again. I take the information from the demographic sheet, open my Excel spreadsheet, and add the relevant information in separate columns for the row INT#001: White, male, Republican. When I interview Bill Clinton as my second interview, I include a second row: INT#002: White, male, Democrat. And so on. The only link to the actual name of the respondent and this information is the fact that the consent form (unavailable to anyone but me) has stamped on it the interview number.
Many students get very nervous before their first interview. Actually, many of us are always nervous before the interview! But do not worry—this is normal, and it does pass. Chances are, you will be pleasantly surprised at how comfortable it begins to feel. These “purposeful conversations” are often a delight for both participants. This is not to say that sometimes things go wrong. I often have my students practice several “bad scenarios” (e.g., a respondent that you cannot get to open up; a respondent who is too talkative and dominates the conversation, steering it away from the topics you are interested in; emotions that completely take over; or shocking disclosures you are ill-prepared to handle), but most of the time, things go quite well. Be prepared for the unexpected, but know that the reason interviews are so popular as a technique of data collection is that they are usually richly rewarding for both participants.
One thing that I stress to my methods students and remind myself about is that interviews are still conversations between people. If there’s something you might feel uncomfortable asking someone about in a “normal” conversation, you will likely also feel a bit of discomfort asking it in an interview. Maybe more importantly, your respondent may feel uncomfortable. Social research—especially about inequality—can be uncomfortable. And it’s easy to slip into an abstract, intellectualized, or removed perspective as an interviewer. This is one reason trying out interview questions is important. Another is that sometimes the question sounds good in your head but doesn’t work as well out loud in practice. I learned this the hard way when a respondent asked me how I would answer the question I had just posed, and I realized that not only did I not really know how I would answer it, but I also wasn’t quite as sure I knew what I was asking as I had thought.
—Elizabeth M. Lee, Associate Professor of Sociology at Saint Joseph’s University, author of Class and Campus Life , and co-author of Geographies of Campus Inequality
How Many Interviews?
Your research design has included a targeted number of interviews and a recruitment plan (see chapter 5). Follow your plan, but remember that “ saturation ” is your goal. You interview as many people as you can until you reach a point at which you are no longer surprised by what they tell you. This means not that no one after your first twenty interviews will have surprising, interesting stories to tell you but rather that the picture you are forming about the phenomenon of interest to you from a research perspective has come into focus, and none of the interviews are substantially refocusing that picture. That is when you should stop collecting interviews. Note that to know when you have reached this, you will need to read your transcripts as you go. More about this in chapters 18 and 19.
Your Final Product: The Ideal Interview Transcript
A good interview transcript will demonstrate a subtly controlled conversation by the skillful interviewer. In general, you want to see replies that are about one paragraph long, not short sentences and not running on for several pages. Although it is sometimes necessary to follow respondents down tangents, it is also often necessary to pull them back to the questions that form the basis of your research study. This is not really a free conversation, although it may feel like that to the person you are interviewing.
Final Tips from an Interview Master
Annette Lareau is arguably one of the masters of the trade. In Listening to People , she provides several guidelines for good interviews and then offers a detailed example of an interview gone wrong and how it could be addressed (please see the “Further Readings” at the end of this chapter). Here is an abbreviated version of her set of guidelines: (1) interview respondents who are experts on the subjects of most interest to you (as a corollary, don’t ask people about things they don’t know); (2) listen carefully and talk as little as possible; (3) keep in mind what you want to know and why you want to know it; (4) be a proactive interviewer (subtly guide the conversation); (5) assure respondents that there aren’t any right or wrong answers; (6) use the respondent’s own words to probe further (this both allows you to accurately identify what you heard and pushes the respondent to explain further); (7) reuse effective probes (don’t reinvent the wheel as you go—if repeating the words back works, do it again and again); (8) focus on learning the subjective meanings that events or experiences have for a respondent; (9) don’t be afraid to ask a question that draws on your own knowledge (unlike trial lawyers who are trained never to ask a question for which they don’t already know the answer, sometimes it’s worth it to ask risky questions based on your hypotheses or just plain hunches); (10) keep thinking while you are listening (so difficult…and important); (11) return to a theme raised by a respondent if you want further information; (12) be mindful of power inequalities (and never ever coerce a respondent to continue the interview if they want out); (13) take control with overly talkative respondents; (14) expect overly succinct responses, and develop strategies for probing further; (15) balance digging deep and moving on; (16) develop a plan to deflect questions (e.g., let them know you are happy to answer any questions at the end of the interview, but you don’t want to take time away from them now); and at the end, (17) check to see whether you have asked all your questions. You don’t always have to ask everyone the same set of questions, but if there is a big area you have forgotten to cover, now is the time to recover ( Lareau 2021:93–103 ).
Sample: Demographic Questionnaire
ASA Taskforce on First-Generation and Working-Class Persons in Sociology – Class Effects on Career Success
Supplementary Demographic Questionnaire
Thank you for your participation in this interview project. We would like to collect a few pieces of key demographic information from you to supplement our analyses. Your answers to these questions will be kept confidential and stored by ID number. All of your responses here are entirely voluntary!
What best captures your race/ethnicity? (please check any/all that apply)
- White (Non Hispanic/Latina/o/x)
- Black or African American
- Hispanic, Latino/a/x of Spanish
- Asian or Asian American
- American Indian or Alaska Native
- Middle Eastern or North African
- Native Hawaiian or Pacific Islander
- Other : (Please write in: ________________)
What is your current position?
- Grad Student
- Full Professor
Please check any and all of the following that apply to you:
- I identify as a working-class academic
- I was the first in my family to graduate from college
- I grew up poor
What best reflects your gender?
- Transgender female/Transgender woman
- Transgender male/Transgender man
- Gender queer/ Gender nonconforming
Anything else you would like us to know about you?
Example: Interview Guide
In this example, follow-up prompts are italicized. Note the sequence of questions. That second question often elicits an entire life history , answering several later questions in advance.
Introduction Script/Question
Thank you for participating in our survey of ASA members who identify as first-generation or working-class. As you may have heard, ASA has sponsored a taskforce on first-generation and working-class persons in sociology and we are interested in hearing from those who so identify. Your participation in this interview will help advance our knowledge in this area.
- The first thing we would like to as you is why you have volunteered to be part of this study? What does it mean to you be first-gen or working class? Why were you willing to be interviewed?
- How did you decide to become a sociologist?
- Can you tell me a little bit about where you grew up? ( prompts: what did your parent(s) do for a living? What kind of high school did you attend?)
- Has this identity been salient to your experience? (how? How much?)
- How welcoming was your grad program? Your first academic employer?
- Why did you decide to pursue sociology at the graduate level?
- Did you experience culture shock in college? In graduate school?
- Has your FGWC status shaped how you’ve thought about where you went to school? debt? etc?
- Were you mentored? How did this work (not work)? How might it?
- What did you consider when deciding where to go to grad school? Where to apply for your first position?
- What, to you, is a mark of career success? Have you achieved that success? What has helped or hindered your pursuit of success?
- Do you think sociology, as a field, cares about prestige?
- Let’s talk a little bit about intersectionality. How does being first-gen/working class work alongside other identities that are important to you?
- What do your friends and family think about your career? Have you had any difficulty relating to family members or past friends since becoming highly educated?
- Do you have any debt from college/grad school? Are you concerned about this? Could you explain more about how you paid for college/grad school? (here, include assistance from family, fellowships, scholarships, etc.)
- (You’ve mentioned issues or obstacles you had because of your background.) What could have helped? Or, who or what did? Can you think of fortuitous moments in your career?
- Do you have any regrets about the path you took?
- Is there anything else you would like to add? Anything that the Taskforce should take note of, that we did not ask you about here?
Further Readings
Britten, Nicky. 1995. “Qualitative Interviews in Medical Research.” BMJ: British Medical Journal 31(6999):251–253. A good basic overview of interviewing particularly useful for students of public health and medical research generally.
Corbin, Juliet, and Janice M. Morse. 2003. “The Unstructured Interactive Interview: Issues of Reciprocity and Risks When Dealing with Sensitive Topics.” Qualitative Inquiry 9(3):335–354. Weighs the potential benefits and harms of conducting interviews on topics that may cause emotional distress. Argues that the researcher’s skills and code of ethics should ensure that the interviewing process provides more of a benefit to both participant and researcher than a harm to the former.
Gerson, Kathleen, and Sarah Damaske. 2020. The Science and Art of Interviewing . New York: Oxford University Press. A useful guidebook/textbook for both undergraduates and graduate students, written by sociologists.
Kvale, Steiner. 2007. Doing Interviews . London: SAGE. An easy-to-follow guide to conducting and analyzing interviews by psychologists.
Lamont, Michèle, and Ann Swidler. 2014. “Methodological Pluralism and the Possibilities and Limits of Interviewing.” Qualitative Sociology 37(2):153–171. Written as a response to various debates surrounding the relative value of interview-based studies and ethnographic studies defending the particular strengths of interviewing. This is a must-read article for anyone seriously engaging in qualitative research!
Pugh, Allison J. 2013. “What Good Are Interviews for Thinking about Culture? Demystifying Interpretive Analysis.” American Journal of Cultural Sociology 1(1):42–68. Another defense of interviewing written against those who champion ethnographic methods as superior, particularly in the area of studying culture. A classic.
Rapley, Timothy John. 2001. “The ‘Artfulness’ of Open-Ended Interviewing: Some considerations in analyzing interviews.” Qualitative Research 1(3):303–323. Argues for the importance of “local context” of data production (the relationship built between interviewer and interviewee, for example) in properly analyzing interview data.
Weiss, Robert S. 1995. Learning from Strangers: The Art and Method of Qualitative Interview Studies . New York: Simon and Schuster. A classic and well-regarded textbook on interviewing. Because Weiss has extensive experience conducting surveys, he contrasts the qualitative interview with the survey questionnaire well; particularly useful for those trained in the latter.
- I say “normally” because how people understand their various identities can itself be an expansive topic of inquiry. Here, I am merely talking about collecting otherwise unexamined demographic data, similar to how we ask people to check boxes on surveys. ↵
- Again, this applies to “semistructured in-depth interviewing.” When conducting standardized questionnaires, you will want to ask each question exactly as written, without deviations! ↵
- I always include “INT” in the number because I sometimes have other kinds of data with their own numbering: FG#001 would mean the first focus group, for example. I also always include three-digit spaces, as this allows for up to 999 interviews (or, more realistically, allows for me to interview up to one hundred persons without having to reset my numbering system). ↵
A method of data collection in which the researcher asks the participant questions; the answers to these questions are often recorded and transcribed verbatim. There are many different kinds of interviews - see also semistructured interview , structured interview , and unstructured interview .
A document listing key questions and question areas for use during an interview. It is used most often for semi-structured interviews. A good interview guide may have no more than ten primary questions for two hours of interviewing, but these ten questions will be supplemented by probes and relevant follow-ups throughout the interview. Most IRBs require the inclusion of the interview guide in applications for review. See also interview and semi-structured interview .
A data-collection method that relies on casual, conversational, and informal interviewing. Despite its apparent conversational nature, the researcher usually has a set of particular questions or question areas in mind but allows the interview to unfold spontaneously. This is a common data-collection technique among ethnographers. Compare to the semi-structured or in-depth interview .
A form of interview that follows a standard guide of questions asked, although the order of the questions may change to match the particular needs of each individual interview subject, and probing “follow-up” questions are often added during the course of the interview. The semi-structured interview is the primary form of interviewing used by qualitative researchers in the social sciences. It is sometimes referred to as an “in-depth” interview. See also interview and interview guide .
The cluster of data-collection tools and techniques that involve observing interactions between people, the behaviors, and practices of individuals (sometimes in contrast to what they say about how they act and behave), and cultures in context. Observational methods are the key tools employed by ethnographers and Grounded Theory .
Follow-up questions used in a semi-structured interview to elicit further elaboration. Suggested prompts can be included in the interview guide to be used/deployed depending on how the initial question was answered or if the topic of the prompt does not emerge spontaneously.
A form of interview that follows a strict set of questions, asked in a particular order, for all interview subjects. The questions are also the kind that elicits short answers, and the data is more “informative” than probing. This is often used in mixed-methods studies, accompanying a survey instrument. Because there is no room for nuance or the exploration of meaning in structured interviews, qualitative researchers tend to employ semi-structured interviews instead. See also interview.
The point at which you can conclude data collection because every person you are interviewing, the interaction you are observing, or content you are analyzing merely confirms what you have already noted. Achieving saturation is often used as the justification for the final sample size.
An interview variant in which a person’s life story is elicited in a narrative form. Turning points and key themes are established by the researcher and used as data points for further analysis.
Introduction to Qualitative Research Methods Copyright © 2023 by Allison Hurst is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License , except where otherwise noted.
Thank you for visiting nature.com. You are using a browser version with limited support for CSS. To obtain the best experience, we recommend you use a more up to date browser (or turn off compatibility mode in Internet Explorer). In the meantime, to ensure continued support, we are displaying the site without styles and JavaScript.
- View all journals
- Explore content
- About the journal
- Publish with us
- Sign up for alerts
- Published: 15 September 2022
Interviews in the social sciences
- Eleanor Knott ORCID: orcid.org/0000-0002-9131-3939 1 ,
- Aliya Hamid Rao ORCID: orcid.org/0000-0003-0674-4206 1 ,
- Kate Summers ORCID: orcid.org/0000-0001-9964-0259 1 &
- Chana Teeger ORCID: orcid.org/0000-0002-5046-8280 1
Nature Reviews Methods Primers volume 2 , Article number: 73 ( 2022 ) Cite this article
748k Accesses
104 Citations
190 Altmetric
Metrics details
- Interdisciplinary studies
In-depth interviews are a versatile form of qualitative data collection used by researchers across the social sciences. They allow individuals to explain, in their own words, how they understand and interpret the world around them. Interviews represent a deceptively familiar social encounter in which people interact by asking and answering questions. They are, however, a very particular type of conversation, guided by the researcher and used for specific ends. This dynamic introduces a range of methodological, analytical and ethical challenges, for novice researchers in particular. In this Primer, we focus on the stages and challenges of designing and conducting an interview project and analysing data from it, as well as strategies to overcome such challenges.
Similar content being viewed by others
The fundamental importance of method to theory
How ‘going online’ mediates the challenges of policy elite interviews
Participatory action research
Introduction.
In-depth interviews are a qualitative research method that follow a deceptively familiar logic of human interaction: they are conversations where people talk with each other, interact and pose and answer questions 1 . An interview is a specific type of interaction in which — usually and predominantly — a researcher asks questions about someone’s life experience, opinions, dreams, fears and hopes and the interview participant answers the questions 1 .
Interviews will often be used as a standalone method or combined with other qualitative methods, such as focus groups or ethnography, or quantitative methods, such as surveys or experiments. Although interviewing is a frequently used method, it should not be viewed as an easy default for qualitative researchers 2 . Interviews are also not suited to answering all qualitative research questions, but instead have specific strengths that should guide whether or not they are deployed in a research project. Whereas ethnography might be better suited to trying to observe what people do, interviews provide a space for extended conversations that allow the researcher insights into how people think and what they believe. Quantitative surveys also give these kinds of insights, but they use pre-determined questions and scales, privileging breadth over depth and often overlooking harder-to-reach participants.
In-depth interviews can take many different shapes and forms, often with more than one participant or researcher. For example, interviews might be highly structured (using an almost survey-like interview guide), entirely unstructured (taking a narrative and free-flowing approach) or semi-structured (using a topic guide ). Researchers might combine these approaches within a single project depending on the purpose of the interview and the characteristics of the participant. Whatever form the interview takes, researchers should be mindful of the dynamics between interviewer and participant and factor these in at all stages of the project.
In this Primer, we focus on the most common type of interview: one researcher taking a semi-structured approach to interviewing one participant using a topic guide. Focusing on how to plan research using interviews, we discuss the necessary stages of data collection. We also discuss the stages and thought-process behind analysing interview material to ensure that the richness and interpretability of interview material is maintained and communicated to readers. The Primer also tracks innovations in interview methods and discusses the developments we expect over the next 5–10 years.
We wrote this Primer as researchers from sociology, social policy and political science. We note our disciplinary background because we acknowledge that there are disciplinary differences in how interviews are approached and understood as a method.
Experimentation
Here we address research design considerations and data collection issues focusing on topic guide construction and other pragmatics of the interview. We also explore issues of ethics and reflexivity that are crucial throughout the research project.
Research design
Participant selection.
Participants can be selected and recruited in various ways for in-depth interview studies. The researcher must first decide what defines the people or social groups being studied. Often, this means moving from an abstract theoretical research question to a more precise empirical one. For example, the researcher might be interested in how people talk about race in contexts of diversity. Empirical settings in which this issue could be studied could include schools, workplaces or adoption agencies. The best research designs should clearly explain why the particular setting was chosen. Often there are both intrinsic and extrinsic reasons for choosing to study a particular group of people at a specific time and place 3 . Intrinsic motivations relate to the fact that the research is focused on an important specific social phenomenon that has been understudied. Extrinsic motivations speak to the broader theoretical research questions and explain why the case at hand is a good one through which to address them empirically.
Next, the researcher needs to decide which types of people they would like to interview. This decision amounts to delineating the inclusion and exclusion criteria for the study. The criteria might be based on demographic variables, like race or gender, but they may also be context-specific, for example, years of experience in an organization. These should be decided based on the research goals. Researchers should be clear about what characteristics would make an individual a candidate for inclusion in the study (and what would exclude them).
The next step is to identify and recruit the study’s sample . Usually, many more people fit the inclusion criteria than can be interviewed. In cases where lists of potential participants are available, the researcher might want to employ stratified sampling , dividing the list by characteristics of interest before sampling.
When there are no lists, researchers will often employ purposive sampling . Many researchers consider purposive sampling the most useful mode for interview-based research since the number of interviews to be conducted is too small to aim to be statistically representative 4 . Instead, the aim is not breadth, via representativeness, but depth via rich insights about a set of participants. In addition to purposive sampling, researchers often use snowball sampling . Both purposive and snowball sampling can be combined with quota sampling . All three types of sampling aim to ensure a variety of perspectives within the confines of a research project. A goal for in-depth interview studies can be to sample for range, being mindful of recruiting a diversity of participants fitting the inclusion criteria.
Study design
The total number of interviews depends on many factors, including the population studied, whether comparisons are to be made and the duration of interviews. Studies that rely on quota sampling where explicit comparisons are made between groups will require a larger number of interviews than studies focused on one group only. Studies where participants are interviewed over several hours, days or even repeatedly across years will tend to have fewer participants than those that entail a one-off engagement.
Researchers often stop interviewing when new interviews confirm findings from earlier interviews with no new or surprising insights (saturation) 4 , 5 , 6 . As a criterion for research design, saturation assumes that data collection and analysis are happening in tandem and that researchers will stop collecting new data once there is no new information emerging from the interviews. This is not always possible. Researchers rarely have time for systematic data analysis during data collection and they often need to specify their sample in funding proposals prior to data collection. As a result, researchers often draw on existing reports of saturation to estimate a sample size prior to data collection. These suggest between 12 and 20 interviews per category of participant (although researchers have reported saturation with samples that are both smaller and larger than this) 7 , 8 , 9 . The idea of saturation has been critiqued by many qualitative researchers because it assumes that meaning inheres in the data, waiting to be discovered — and confirmed — once saturation has been reached 7 . In-depth interview data are often multivalent and can give rise to different interpretations. The important consideration is, therefore, not merely how many participants are interviewed, but whether one’s research design allows for collecting rich and textured data that provide insight into participants’ understandings, accounts, perceptions and interpretations.
Sometimes, researchers will conduct interviews with more than one participant at a time. Researchers should consider the benefits and shortcomings of such an approach. Joint interviews may, for example, give researchers insight into how caregivers agree or debate childrearing decisions. At the same time, they may be less adaptive to exploring aspects of caregiving that participants may not wish to disclose to each other. In other cases, there may be more than one person interviewing each participant, such as when an interpreter is used, and so it is important to consider during the research design phase how this might shape the dynamics of the interview.
Data collection
Semi-structured interviews are typically organized around a topic guide comprised of an ordered set of broad topics (usually 3–5). Each topic includes a set of questions that form the basis of the discussion between the researcher and participant (Fig. 1 ). These topics are organized around key concepts that the researcher has identified (for example, through a close study of prior research, or perhaps through piloting a small, exploratory study) 5 .
a | Elaborated topics the researcher wants to cover in the interview and example questions. b | An example topic arc. Using such an arc, one can think flexibly about the order of topics. Considering the main question for each topic will help to determine the best order for the topics. After conducting some interviews, the researcher can move topics around if a different order seems to make sense.
Topic guide
One common way to structure a topic guide is to start with relatively easy, open-ended questions (Table 1 ). Opening questions should be related to the research topic but broad and easy to answer, so that they help to ease the participant into conversation.
After these broad, opening questions, the topic guide may move into topics that speak more directly to the overarching research question. The interview questions will be accompanied by probes designed to elicit concrete details and examples from the participant (see Table 1 ).
Abstract questions are often easier for participants to answer once they have been asked more concrete questions. In our experience, for example, questions about feelings can be difficult for some participants to answer, but when following probes concerning factual experiences these questions can become less challenging. After the main themes of the topic guide have been covered, the topic guide can move onto closing questions. At this stage, participants often repeat something they have said before, although they may sometimes introduce a new topic.
Interviews are especially well suited to gaining a deeper insight into people’s experiences. Getting these insights largely depends on the participants’ willingness to talk to the researcher. We recommend designing open-ended questions that are more likely to elicit an elaborated response and extended reflection from participants rather than questions that can be answered with yes or no.
Questions should avoid foreclosing the possibility that the participant might disagree with the premise of the question. Take for example the question: “Do you support the new family-friendly policies?” This question minimizes the possibility of the participant disagreeing with the premise of this question, which assumes that the policies are ‘family-friendly’ and asks for a yes or no answer. Instead, asking more broadly how a participant feels about the specific policy being described as ‘family-friendly’ (for example, a work-from-home policy) allows them to express agreement, disagreement or impartiality and, crucially, to explain their reasoning 10 .
For an uninterrupted interview that will last between 90 and 120 minutes, the topic guide should be one to two single-spaced pages with questions and probes. Ideally, the researcher will memorize the topic guide before embarking on the first interview. It is fine to carry a printed-out copy of the topic guide but memorizing the topic guide ahead of the interviews can often make the interviewer feel well prepared in guiding the participant through the interview process.
Although the topic guide helps the researcher stay on track with the broad areas they want to cover, there is no need for the researcher to feel tied down by the topic guide. For instance, if a participant brings up a theme that the researcher intended to discuss later or a point the researcher had not anticipated, the researcher may well decide to follow the lead of the participant. The researcher’s role extends beyond simply stating the questions; it entails listening and responding, making split-second decisions about what line of inquiry to pursue and allowing the interview to proceed in unexpected directions.
Optimizing the interview
The ideal place for an interview will depend on the study and what is feasible for participants. Generally, a place where the participant and researcher can both feel relaxed, where the interview can be uninterrupted and where noise or other distractions are limited is ideal. But this may not always be possible and so the researcher needs to be prepared to adapt their plans within what is feasible (and desirable for participants).
Another key tool for the interview is a recording device (assuming that permission for recording has been given). Recording can be important to capture what the participant says verbatim. Additionally, it can allow the researcher to focus on determining what probes and follow-up questions they want to pursue rather than focusing on taking notes. Sometimes, however, a participant may not allow the researcher to record, or the recording may fail. If the interview is not recorded we suggest that the researcher takes brief notes during the interview, if feasible, and then thoroughly make notes immediately after the interview and try to remember the participant’s facial expressions, gestures and tone of voice. Not having a recording of an interview need not limit the researcher from getting analytical value from it.
As soon as possible after each interview, we recommend that the researcher write a one-page interview memo comprising three key sections. The first section should identify two to three important moments from the interview. What constitutes important is up to the researcher’s discretion 9 . The researcher should note down what happened in these moments, including the participant’s facial expressions, gestures, tone of voice and maybe even the sensory details of their surroundings. This exercise is about capturing ethnographic detail from the interview. The second part of the interview memo is the analytical section with notes on how the interview fits in with previous interviews, for example, where the participant’s responses concur or diverge from other responses. The third part consists of a methodological section where the researcher notes their perception of their relationship with the participant. The interview memo allows the researcher to think critically about their positionality and practice reflexivity — key concepts for an ethical and transparent research practice in qualitative methodology 11 , 12 .
Ethics and reflexivity
All elements of an in-depth interview can raise ethical challenges and concerns. Good ethical practice in interview studies often means going beyond the ethical procedures mandated by institutions 13 . While discussions and requirements of ethics can differ across disciplines, here we focus on the most pertinent considerations for interviews across the research process for an interdisciplinary audience.
Ethical considerations prior to interview
Before conducting interviews, researchers should consider harm minimization, informed consent, anonymity and confidentiality, and reflexivity and positionality. It is important for the researcher to develop their own ethical sensitivities and sensibilities by gaining training in interview and qualitative methods, reading methodological and field-specific texts on interviews and ethics and discussing their research plans with colleagues.
Researchers should map the potential harm to consider how this can be minimized. Primarily, researchers should consider harm from the participants’ perspective (Box 1 ). But, it is also important to consider and plan for potential harm to the researcher, research assistants, gatekeepers, future researchers and members of the wider community 14 . Even the most banal of research topics can potentially pose some form of harm to the participant, researcher and others — and the level of harm is often highly context-dependent. For example, a research project on religion in society might have very different ethical considerations in a democratic versus authoritarian research context because of how openly or not such topics can be discussed and debated 15 .
The researcher should consider how they will obtain and record informed consent (for example, written or oral), based on what makes the most sense for their research project and context 16 . Some institutions might specify how informed consent should be gained. Regardless of how consent is obtained, the participant must be made aware of the form of consent, the intentions and procedures of the interview and potential forms of harm and benefit to the participant or community before the interview commences. Moreover, the participant must agree to be interviewed before the interview commences. If, in addition to interviews, the study contains an ethnographic component, it is worth reading around this topic (see, for example, Murphy and Dingwall 17 ). Informed consent must also be gained for how the interview will be recorded before the interview commences. These practices are important to ensure the participant is contributing on a voluntary basis. It is also important to remind participants that they can withdraw their consent at any time during the interview and for a specified period after the interview (to be decided with the participant). The researcher should indicate that participants can ask for anything shared to be off the record and/or not disseminated.
In terms of anonymity and confidentiality, it is standard practice when conducting interviews to agree not to use (or even collect) participants’ names and personal details that are not pertinent to the study. Anonymizing can often be the safer option for minimizing harm to participants as it is hard to foresee all the consequences of de-anonymizing, even if participants agree. Regardless of what a researcher decides, decisions around anonymity must be agreed with participants during the process of gaining informed consent and respected following the interview.
Although not all ethical challenges can be foreseen or planned for 18 , researchers should think carefully — before the interview — about power dynamics, participant vulnerability, emotional state and interactional dynamics between interviewer and participant, even when discussing low-risk topics. Researchers may then wish to plan for potential ethical issues, for example by preparing a list of relevant organizations to which participants can be signposted. A researcher interviewing a participant about debt, for instance, might prepare in advance a list of debt advice charities, organizations and helplines that could provide further support and advice. It is important to remember that the role of an interviewer is as a researcher rather than as a social worker or counsellor because researchers may not have relevant and requisite training in these other domains.
Box 1 Mapping potential forms of harm
Social: researchers should avoid causing any relational detriment to anyone in the course of interviews, for example, by sharing information with other participants or causing interview participants to be shunned or mistreated by their community as a result of participating.
Economic: researchers should avoid causing financial detriment to anyone, for example, by expecting them to pay for transport to be interviewed or to potentially lose their job as a result of participating.
Physical: researchers should minimize the risk of anyone being exposed to violence as a result of the research both from other individuals or from authorities, including police.
Psychological: researchers should minimize the risk of causing anyone trauma (or re-traumatization) or psychological anguish as a result of the research; this includes not only the participant but importantly the researcher themselves and anyone that might read or analyse the transcripts, should they contain triggering information.
Political: researchers should minimize the risk of anyone being exposed to political detriment as a result of the research, such as retribution.
Professional/reputational: researchers should minimize the potential for reputational damage to anyone connected to the research (this includes ensuring good research practices so that any researchers involved are not harmed reputationally by being involved with the research project).
The task here is not to map exhaustively the potential forms of harm that might pertain to a particular research project (that is the researcher’s job and they should have the expertise most suited to mapping such potential harms relative to the specific project) but to demonstrate the breadth of potential forms of harm.
Ethical considerations post-interview
Researchers should consider how interview data are stored, analysed and disseminated. If participants have been offered anonymity and confidentiality, data should be stored in a way that does not compromise this. For example, researchers should consider removing names and any other unnecessary personal details from interview transcripts, password-protecting and encrypting files and using pseudonyms to label and store all interview data. It is also important to address where interview data are taken (for example, across borders in particular where interview data might be of interest to local authorities) and how this might affect the storage of interview data.
Examining how the researcher will represent participants is a paramount ethical consideration both in the planning stages of the interview study and after it has been conducted. Dissemination strategies also need to consider questions of anonymity and representation. In small communities, even if participants are given pseudonyms, it might be obvious who is being described. Anonymizing not only the names of those participating but also the research context is therefore a standard practice 19 . With particularly sensitive data or insights about the participant, it is worth considering describing participants in a more abstract way rather than as specific individuals. These practices are important both for protecting participants’ anonymity but can also affect the ability of the researcher and others to return ethically to the research context and similar contexts 20 .
Reflexivity and positionality
Reflexivity and positionality mean considering the researcher’s role and assumptions in knowledge production 13 . A key part of reflexivity is considering the power relations between the researcher and participant within the interview setting, as well as how researchers might be perceived by participants. Further, researchers need to consider how their own identities shape the kind of knowledge and assumptions they bring to the interview, including how they approach and ask questions and their analysis of interviews (Box 2 ). Reflexivity is a necessary part of developing ethical sensibility as a researcher by adapting and reflecting on how one engages with participants. Participants should not feel judged, for example, when they share information that researchers might disagree with or find objectionable. How researchers deal with uncomfortable moments or information shared by participants is at their discretion, but they should consider how they will react both ahead of time and in the moment.
Researchers can develop their reflexivity by considering how they themselves would feel being asked these interview questions or represented in this way, and then adapting their practice accordingly. There might be situations where these questions are not appropriate in that they unduly centre the researchers’ experiences and worldview. Nevertheless, these prompts can provide a useful starting point for those beginning their reflexive journey and developing an ethical sensibility.
Reflexivity and ethical sensitivities require active reflection throughout the research process. For example, researchers should take care in interview memos and their notes to consider their assumptions, potential preconceptions, worldviews and own identities prior to and after interviews (Box 2 ). Checking in with assumptions can be a way of making sure that researchers are paying close attention to their own theoretical and analytical biases and revising them in accordance with what they learn through the interviews. Researchers should return to these notes (especially when analysing interview material), to try to unpack their own effects on the research process as well as how participants positioned and engaged with them.
Box 2 Aspects to reflect on reflexively
For reflexive engagement, and understanding the power relations being co-constructed and (re)produced in interviews, it is necessary to reflect, at a minimum, on the following.
Ethnicity, race and nationality, such as how does privilege stemming from race or nationality operate between the researcher, the participant and research context (for example, a researcher from a majority community may be interviewing a member of a minority community)
Gender and sexuality, see above on ethnicity, race and nationality
Social class, and in particular the issue of middle-class bias among researchers when formulating research and interview questions
Economic security/precarity, see above on social class and thinking about the researcher’s relative privilege and the source of biases that stem from this
Educational experiences and privileges, see above
Disciplinary biases, such as how the researcher’s discipline/subfield usually approaches these questions, possibly normalizing certain assumptions that might be contested by participants and in the research context
Political and social values
Lived experiences and other dimensions of ourselves that affect and construct our identity as researchers
In this section, we discuss the next stage of an interview study, namely, analysing the interview data. Data analysis may begin while more data are being collected. Doing so allows early findings to inform the focus of further data collection, as part of an iterative process across the research project. Here, the researcher is ultimately working towards achieving coherence between the data collected and the findings produced to answer successfully the research question(s) they have set.
The two most common methods used to analyse interview material across the social sciences are thematic analysis 21 and discourse analysis 22 . Thematic analysis is a particularly useful and accessible method for those starting out in analysis of qualitative data and interview material as a method of coding data to develop and interpret themes in the data 21 . Discourse analysis is more specialized and focuses on the role of discourse in society by paying close attention to the explicit, implicit and taken-for-granted dimensions of language and power 22 , 23 . Although thematic and discourse analysis are often discussed as separate techniques, in practice researchers might flexibly combine these approaches depending on the object of analysis. For example, those intending to use discourse analysis might first conduct thematic analysis as a way to organize and systematize the data. The object and intention of analysis might differ (for example, developing themes or interrogating language), but the questions facing the researcher (such as whether to take an inductive or deductive approach to analysis) are similar.
Preparing data
Data preparation is an important step in the data analysis process. The researcher should first determine what comprises the corpus of material and in what form it will it be analysed. The former refers to whether, for example, alongside the interviews themselves, analytic memos or observational notes that may have been taken during data collection will also be directly analysed. The latter refers to decisions about how the verbal/audio interview data will be transformed into a written form, making it suitable for processes of data analysis. Typically, interview audio recordings are transcribed to produce a written transcript. It is important to note that the process of transcription is one of transformation. The verbal interview data are transformed into a written transcript through a series of decisions that the researcher must make. The researcher should consider the effect of mishearing what has been said or how choosing to punctuate a sentence in a particular way will affect the final analysis.
Box 3 shows an example transcript excerpt from an interview with a teacher conducted by Teeger as part of her study of history education in post-apartheid South Africa 24 (Box 3 ). Seeing both the questions and the responses means that the reader can contextualize what the participant (Ms Mokoena) has said. Throughout the transcript the researcher has used square brackets, for example to indicate a pause in speech, when Ms Mokoena says “it’s [pause] it’s a difficult topic”. The transcription choice made here means that we see that Ms Mokoena has taken time to pause, perhaps to search for the right words, or perhaps because she has a slight apprehension. Square brackets are also included as an overt act of communication to the reader. When Ms Mokoena says “ja”, the English translation (“yes”) of the word in Afrikaans is placed in square brackets to ensure that the reader can follow the meaning of the speech.
Decisions about what to include when transcribing will be hugely important for the direction and possibilities of analysis. Researchers should decide what they want to capture in the transcript, based on their analytic focus. From a (post)positivist perspective 25 , the researcher may be interested in the manifest content of the interview (such as what is said, not how it is said). In that case, they may choose to transcribe intelligent verbatim . From a constructivist perspective 25 , researchers may choose to record more aspects of speech (including, for example, pauses, repetitions, false starts, talking over one another) so that these features can be analysed. Those working from this perspective argue that to recognize the interactional nature of the interview setting adequately and to avoid misinterpretations, features of interaction (pauses, overlaps between speakers and so on) should be preserved in transcription and therefore in the analysis 10 . Readers interested in learning more should consult Potter and Hepburn’s summary of how to present interaction through transcription of interview data 26 .
The process of analysing semi-structured interviews might be thought of as a generative rather than an extractive enterprise. Findings do not already exist within the interview data to be discovered. Rather, researchers create something new when analysing the data by applying their analytic lens or approach to the transcripts. At a high level, there are options as to what researchers might want to glean from their interview data. They might be interested in themes, whereby they identify patterns of meaning across the dataset 21 . Alternatively, they may focus on discourse(s), looking to identify how language is used to construct meanings and therefore how language reinforces or produces aspects of the social world 27 . Alternatively, they might look at the data to understand narrative or biographical elements 28 .
A further overarching decision to make is the extent to which researchers bring predetermined framings or understandings to bear on their data, or instead begin from the data themselves to generate an analysis. One way of articulating this is the extent to which researchers take a deductive approach or an inductive approach to analysis. One example of a truly inductive approach is grounded theory, whereby the aim of the analysis is to build new theory, beginning with one’s data 6 , 29 . In practice, researchers using thematic and discourse analysis often combine deductive and inductive logics and describe their process instead as iterative (referred to also as an abductive approach ) 30 , 31 . For example, researchers may decide that they will apply a given theoretical framing, or begin with an initial analytic framework, but then refine or develop these once they begin the process of analysis.
Box 3 Excerpt of interview transcript (from Teeger 24 )
Interviewer : Maybe you could just start by talking about what it’s like to teach apartheid history.
Ms Mokoena : It’s a bit challenging. You’ve got to accommodate all the kids in the class. You’ve got to be sensitive to all the racial differences. You want to emphasize the wrongs that were done in the past but you also want to, you know, not to make kids feel like it’s their fault. So you want to use the wrongs of the past to try and unite the kids …
Interviewer : So what kind of things do you do?
Ms Mokoena : Well I normally highlight the fact that people that were struggling were not just the blacks, it was all the races. And I give examples of the people … from all walks of life, all races, and highlight how they suffered as well as a result of apartheid, particularly the whites… . What I noticed, particularly my first year of teaching apartheid, I noticed that the black kids made the others feel responsible for what happened… . I had a lot of fights…. A lot of kids started hating each other because, you know, the others are white and the others were black. And they started saying, “My mother is a domestic worker because she was never allowed an opportunity to get good education.” …
Interviewer : I didn’t see any of that now when I was observing.
Ms Mokoena : … Like I was saying I think that because of the re-emphasis of the fact that, look, everybody did suffer one way or the other, they sort of got to see that it was everybody’s struggle … . They should now get to understand that that’s why we’re called a Rainbow Nation. Not everybody agreed with apartheid and not everybody suffered. Even all the blacks, not all blacks got to feel what the others felt . So ja [yes], it’s [pause] it’s a difficult topic, ja . But I think if you get the kids to understand why we’re teaching apartheid in the first place and you show the involvement of all races in all the different sides , then I think you have managed to teach it properly. So I think because of my inexperience then — that was my first year of teaching history — so I think I — maybe I over-emphasized the suffering of the blacks versus the whites [emphasis added].
Reprinted with permission from ref. 24 , Sage Publications.
From data to codes
Coding data is a key building block shared across many approaches to data analysis. Coding is a way of organizing and describing data, but is also ultimately a way of transforming data to produce analytic insights. The basic practice of coding involves highlighting a segment of text (this may be a sentence, a clause or a longer excerpt) and assigning a label to it. The aim of the label is to communicate some sort of summary of what is in the highlighted piece of text. Coding is an iterative process, whereby researchers read and reread their transcripts, applying and refining their codes, until they have a coding frame (a set of codes) that is applied coherently across the dataset and that captures and communicates the key features of what is contained in the data as it relates to the researchers’ analytic focus.
What one codes for is entirely contingent on the focus of the research project and the choices the researcher makes about the approach to analysis. At first, one might apply descriptive codes, summarizing what is contained in the interviews. It is rarely desirable to stop at this point, however, because coding is a tool to move from describing the data to interpreting the data. Suppose the researcher is pursuing some version of thematic analysis. In that case, it might be that the objects of coding are aspects of reported action, emotions, opinions, norms, relationships, routines, agreement/disagreement and change over time. A discourse analysis might instead code for different types of speech acts, tropes, linguistic or rhetorical devices. Multiple types of code might be generated within the same research project. What is important is that researchers are aware of the choices they are making in terms of what they are coding for. Moreover, through the process of refinement, the aim is to produce a set of discrete codes — in which codes are conceptually distinct, as opposed to overlapping. By using the same codes across the dataset, the researcher can capture commonalities across the interviews. This process of refinement involves relabelling codes and reorganizing how and where they are applied in the dataset.
From coding to analysis and writing
Data analysis is also an iterative process in which researchers move closer to and further away from the data. As they move away from the data, they synthesize their findings, thus honing and articulating their analytic insights. As they move closer to the data, they ground these insights in what is contained in the interviews. The link should not be broken between the data themselves and higher-order conceptual insights or claims being made. Researchers must be able to show evidence for their claims in the data. Figure 2 summarizes this iterative process and suggests the sorts of activities involved at each stage more concretely.
As well as going through steps 1 to 6 in order, the researcher will also go backwards and forwards between stages. Some stages will themselves be a forwards and backwards processing of coding and refining when working across different interview transcripts.
At the stage of synthesizing, there are some common quandaries. When dealing with a dataset consisting of multiple interviews, there will be salient and minority statements across different participants, or consensus or dissent on topics of interest to the researcher. A strength of qualitative interviews is that we can build in these nuances and variations across our data as opposed to aggregating them away. When exploring and reporting data, researchers should be asking how different findings are patterned and which interviews contain which codes, themes or tropes. Researchers should think about how these variations fit within the longer flow of individual interviews and what these variations tell them about the nature of their substantive research interests.
A further consideration is how to approach analysis within and across interview data. Researchers may look at one individual code, to examine the forms it takes across different participants and what they might be able to summarize about this code in the round. Alternatively, they might look at how a code or set of codes pattern across the account of one participant, to understand the code(s) in a more contextualized way. Further analysis might be done according to different sampling characteristics, where researchers group together interviews based on certain demographic characteristics and explore these together.
When it comes to writing up and presenting interview data, key considerations tend to rest on what is often termed transparency. When presenting the findings of an interview-based study, the reader should be able to understand and trace what the stated findings are based upon. This process typically involves describing the analytic process, how key decisions were made and presenting direct excerpts from the data. It is important to account for how the interview was set up and to consider the active part that the researcher has played in generating the data 32 . Quotes from interviews should not be thought of as merely embellishing or adding interest to a final research output. Rather, quotes serve the important function of connecting the reader directly to the underlying data. Quotes, therefore, should be chosen because they provide the reader with the most apt insight into what is being discussed. It is good practice to report not just on what participants said, but also on the questions that were asked to elicit the responses.
Researchers have increasingly used specialist qualitative data analysis software to organize and analyse their interview data, such as NVivo or ATLAS.ti. It is important to remember that such software is a tool for, rather than an approach or technique of, analysis. That said, software also creates a wide range of possibilities in terms of what can be done with the data. As researchers, we should reflect on how the range of possibilities of a given software package might be shaping our analytical choices and whether these are choices that we do indeed want to make.
Applications
This section reviews how and why in-depth interviews have been used by researchers studying gender, education and inequality, nationalism and ethnicity and the welfare state. Although interviews can be employed as a method of data collection in just about any social science topic, the applications below speak directly to the authors’ expertise and cutting-edge areas of research.
When it comes to the broad study of gender, in-depth interviews have been invaluable in shaping our understanding of how gender functions in everyday life. In a study of the US hedge fund industry (an industry dominated by white men), Tobias Neely was interested in understanding the factors that enable white men to prosper in the industry 33 . The study comprised interviews with 45 hedge fund workers and oversampled women of all races and men of colour to capture a range of experiences and beliefs. Tobias Neely found that practices of hiring, grooming and seeding are key to maintaining white men’s dominance in the industry. In terms of hiring, the interviews clarified that white men in charge typically preferred to hire people like themselves, usually from their extended networks. When women were hired, they were usually hired to less lucrative positions. In terms of grooming, Tobias Neely identifies how older and more senior men in the industry who have power and status will select one or several younger men as their protégés, to include in their own elite networks. Finally, in terms of her concept of seeding, Tobias Neely describes how older men who are hedge fund managers provide the seed money (often in the hundreds of millions of dollars) for a hedge fund to men, often their own sons (but not their daughters). These interviews provided an in-depth look into gendered and racialized mechanisms that allow white men to flourish in this industry.
Research by Rao draws on dozens of interviews with men and women who had lost their jobs, some of the participants’ spouses and follow-up interviews with about half the sample approximately 6 months after the initial interview 34 . Rao used interviews to understand the gendered experience and understanding of unemployment. Through these interviews, she found that the very process of losing their jobs meant different things for men and women. Women often saw job loss as being a personal indictment of their professional capabilities. The women interviewed often referenced how years of devaluation in the workplace coloured their interpretation of their job loss. Men, by contrast, were also saddened by their job loss, but they saw it as part and parcel of a weak economy rather than a personal failing. How these varied interpretations occurred was tied to men’s and women’s very different experiences in the workplace. Further, through her analysis of these interviews, Rao also showed how these gendered interpretations had implications for the kinds of jobs men and women sought to pursue after job loss. Whereas men remained tied to participating in full-time paid work, job loss appeared to be a catalyst pushing some of the women to re-evaluate their ties to the labour force.
In a study of workers in the tech industry, Hart used interviews to explain how individuals respond to unwanted and ambiguously sexual interactions 35 . Here, the researcher used interviews to allow participants to describe how these interactions made them feel and act and the logics of how they interpreted, classified and made sense of them 35 . Through her analysis of these interviews, Hart showed that participants engaged in a process she termed “trajectory guarding”, whereby they sought to monitor unwanted and ambiguously sexual interactions to avoid them from escalating. Yet, as Hart’s analysis proficiently demonstrates, these very strategies — which protect these workers sexually — also undermined their workplace advancement.
Drawing on interviews, these studies have helped us to understand better how gendered mechanisms, gendered interpretations and gendered interactions foster gender inequality when it comes to paid work. Methodologically, these studies illuminate the power of interviews to reveal important aspects of social life.
Nationalism and ethnicity
Traditionally, nationalism has been studied from a top-down perspective, through the lens of the state or using historical methods; in other words, in-depth interviews have not been a common way of collecting data to study nationalism. The methodological turn towards everyday nationalism has encouraged more scholars to go to the field and use interviews (and ethnography) to understand nationalism from the bottom up: how people talk about, give meaning, understand, navigate and contest their relation to nation, national identification and nationalism 36 , 37 , 38 , 39 . This turn has also addressed the gap left by those studying national and ethnic identification via quantitative methods, such as surveys.
Surveys can enumerate how individuals ascribe to categorical forms of identification 40 . However, interviews can question the usefulness of such categories and ask whether these categories are reflected, or resisted, by participants in terms of the meanings they give to identification 41 , 42 . Categories often pitch identification as a mutually exclusive choice; but identification might be more complex than such categories allow. For example, some might hybridize these categories or see themselves as moving between and across categories 43 . Hearing how people talk about themselves and their relation to nations, states and ethnicities, therefore, contributes substantially to the study of nationalism and national and ethnic forms of identification.
One particular approach to studying these topics, whether via everyday nationalism or alternatives, is that of using interviews to capture both articulations and narratives of identification, relations to nationalism and the boundaries people construct. For example, interviews can be used to gather self–other narratives by studying how individuals construct I–we–them boundaries 44 , including how participants talk about themselves, who participants include in their various ‘we’ groupings and which and how participants create ‘them’ groupings of others, inserting boundaries between ‘I/we’ and ‘them’. Overall, interviews hold great potential for listening to participants and understanding the nuances of identification and the construction of boundaries from their point of view.
Education and inequality
Scholars of social stratification have long noted that the school system often reproduces existing social inequalities. Carter explains that all schools have both material and sociocultural resources 45 . When children from different backgrounds attend schools with different material resources, their educational and occupational outcomes are likely to vary. Such material resources are relatively easy to measure. They are operationalized as teacher-to-student ratios, access to computers and textbooks and the physical infrastructure of classrooms and playgrounds.
Drawing on Bourdieusian theory 46 , Carter conceptualizes the sociocultural context as the norms, values and dispositions privileged within a social space 45 . Scholars have drawn on interviews with students and teachers (as well as ethnographic observations) to show how schools confer advantages on students from middle-class families, for example, by rewarding their help-seeking behaviours 47 . Focusing on race, researchers have revealed how schools can remain socioculturally white even as they enrol a racially diverse student population. In such contexts, for example, teachers often misrecognize the aesthetic choices made by students of colour, wrongly inferring that these students’ tastes in clothing and music reflect negative orientations to schooling 48 , 49 , 50 . These assessments can result in disparate forms of discipline and may ultimately shape educators’ assessments of students’ academic potential 51 .
Further, teachers and administrators tend to view the appropriate relationship between home and school in ways that resonate with white middle-class parents 52 . These parents are then able to advocate effectively for their children in ways that non-white parents are not 53 . In-depth interviews are particularly good at tapping into these understandings, revealing the mechanisms that confer privilege on certain groups of students and thereby reproduce inequality.
In addition, interviews can shed light on the unequal experiences that young people have within educational institutions, as the views of dominant groups are affirmed while those from disadvantaged backgrounds are delegitimized. For example, Teeger’s interviews with South African high schoolers showed how — because racially charged incidents are often framed as jokes in the broader school culture — Black students often feel compelled to ignore and keep silent about the racism they experience 54 . Interviews revealed that Black students who objected to these supposed jokes were coded by other students as serious or angry. In trying to avoid such labels, these students found themselves unable to challenge the racism they experienced. Interviews give us insight into these dynamics and help us see how young people understand and interpret the messages transmitted in schools — including those that speak to issues of inequality in their local school contexts as well as in society more broadly 24 , 55 .
The welfare state
In-depth interviews have also proved to be an important method for studying various aspects of the welfare state. By welfare state, we mean the social institutions relating to the economic and social wellbeing of a state’s citizens. Notably, using interviews has been useful to look at how policy design features are experienced and play out on the ground. Interviews have often been paired with large-scale surveys to produce mixed-methods study designs, therefore achieving both breadth and depth of insights.
In-depth interviews provide the opportunity to look behind policy assumptions or how policies are designed from the top down, to examine how these play out in the lives of those affected by the policies and whose experiences might otherwise be obscured or ignored. For example, the Welfare Conditionality project used interviews to critique the assumptions that conditionality (such as, the withdrawal of social security benefits if recipients did not perform or meet certain criteria) improved employment outcomes and instead showed that conditionality was harmful to mental health, living standards and had many other negative consequences 56 . Meanwhile, combining datasets from two small-scale interview studies with recipients allowed Summers and Young to critique assumptions around the simplicity that underpinned the design of Universal Credit in 2020, for example, showing that the apparently simple monthly payment design instead burdened recipients with additional money management decisions and responsibilities 57 .
Similarly, the Welfare at a (Social) Distance project used a mixed-methods approach in a large-scale study that combined national surveys with case studies and in-depth interviews to investigate the experience of claiming social security benefits during the COVID-19 pandemic. The interviews allowed researchers to understand in detail any issues experienced by recipients of benefits, such as delays in the process of claiming, managing on a very tight budget and navigating stigma and claiming 58 .
These applications demonstrate the multi-faceted topics and questions for which interviews can be a relevant method for data collection. These applications highlight not only the relevance of interviews, but also emphasize the key added value of interviews, which might be missed by other methods (surveys, in particular). Interviews can expose and question what is taken for granted and directly engage with communities and participants that might otherwise be ignored, obscured or marginalized.
Reproducibility and data deposition
There is a robust, ongoing debate about reproducibility in qualitative research, including interview studies. In some research paradigms, reproducibility can be a way of interrogating the rigour and robustness of research claims, by seeing whether these hold up when the research process is repeated. Some scholars have suggested that although reproducibility may be challenging, researchers can facilitate it by naming the place where the research was conducted, naming participants, sharing interview and fieldwork transcripts (anonymized and de-identified in cases where researchers are not naming people or places) and employing fact-checkers for accuracy 11 , 59 , 60 .
In addition to the ethical concerns of whether de-anonymization is ever feasible or desirable, it is also important to address whether the replicability of interview studies is meaningful. For example, the flexibility of interviews allows for the unexpected and the unforeseen to be incorporated into the scope of the research 61 . However, this flexibility means that we cannot expect reproducibility in the conventional sense, given that different researchers will elicit different types of data from participants. Sharing interview transcripts with other researchers, for instance, downplays the contextual nature of an interview.
Drawing on Bauer and Gaskell, we propose several measures to enhance rigour in qualitative research: transparency, grounding interpretations and aiming for theoretical transferability and significance 62 .
Researchers should be transparent when describing their methodological choices. Transparency means documenting who was interviewed, where and when (without requiring de-anonymization, for example, by documenting their characteristics), as well as the questions they were asked. It means carefully considering who was left out of the interviews and what that could mean for the researcher’s findings. It also means carefully considering who the researcher is and how their identity shaped the research process (integrating and articulating reflexivity into whatever is written up).
Second, researchers should ground their interpretations in the data. Grounding means presenting the evidence upon which the interpretation relies. Quotes and extracts should be extensive enough to allow the reader to evaluate whether the researcher’s interpretations are grounded in the data. At each step, researchers should carefully compare their own explanations and interpretations with alternative explanations. Doing so systematically and frequently allows researchers to become more confident in their claims. Here, researchers should justify the link between data and analysis by using quotes to justify and demonstrate the analytical point, while making sure the analytical point offers an interpretation of quotes (Box 4 ).
An important step in considering alternative explanations is to seek out disconfirming evidence 4 , 63 . This involves looking for instances where participants deviate from what the majority are saying and thus bring into question the theory (or explanation) that the researcher is developing. Careful analysis of such examples can often demonstrate the salience and meaning of what appears to be the norm (see Table 2 for examples) 54 . Considering alternative explanations and paying attention to disconfirming evidence allows the researcher to refine their own theories in respect of the data.
Finally, researchers should aim for theoretical transferability and significance in their discussions of findings. One way to think about this is to imagine someone who is not interested in the empirical study. Articulating theoretical transferability and significance usually takes the form of broadening out from the specific findings to consider explicitly how the research has refined or altered prior theoretical approaches. This process also means considering under what other conditions, aside from those of the study, the researcher thinks their theoretical revision would be supported by and why. Importantly, it also includes thinking about the limitations of one’s own approach and where the theoretical implications of the study might not hold.
Box 4 An example of grounding interpretations in data (from Rao 34 )
In an article explaining how unemployed men frame their job loss as a pervasive experience, Rao writes the following: “Unemployed men in this study understood unemployment to be an expected aspect of paid work in the contemporary United States. Robert, a white unemployed communications professional, compared the economic landscape after the Great Recession with the tragic events of September 11, 2001:
Part of your post-9/11 world was knowing people that died as a result of terrorism. The same thing is true with the [Great] Recession, right? … After the Recession you know somebody who was unemployed … People that really should be working.
The pervasiveness of unemployment rendered it normal, as Robert indicates.”
Here, the link between the quote presented and the analytical point Rao is making is clear: the analytical point is grounded in a quote and an interpretation of the quote is offered 34 .
Limitations and optimizations
When deciding which research method to use, the key question is whether the method provides a good fit for the research questions posed. In other words, researchers should consider whether interviews will allow them to successfully access the social phenomena necessary to answer their question(s) and whether the interviews will do so more effectively than other methods. Table 3 summarizes the major strengths and limitations of interviews. However, the accompanying text below is organized around some key issues, where relative strengths and weaknesses are presented alongside each other, the aim being that readers should think about how these can be balanced and optimized in relation to their own research.
Breadth versus depth of insight
Achieving an overall breadth of insight, in a statistically representative sense, is not something that is possible or indeed desirable when conducting in-depth interviews. Instead, the strength of conducting interviews lies in their ability to generate various sorts of depth of insight. The experiences or views of participants that can be accessed by conducting interviews help us to understand participants’ subjective realities. The challenge, therefore, is for researchers to be clear about why depth of insight is the focus and what we should aim to glean from these types of insight.
Naturalistic or artificial interviews
Interviews make use of a form of interaction with which people are familiar 64 . By replicating a naturalistic form of interaction as a tool to gather social science data, researchers can capitalize on people’s familiarity and expectations of what happens in a conversation. This familiarity can also be a challenge, as people come to the interview with preconceived ideas about what this conversation might be for or about. People may draw on experiences of other similar conversations when taking part in a research interview (for example, job interviews, therapy sessions, confessional conversations, chats with friends). Researchers should be aware of such potential overlaps and think through their implications both in how the aims and purposes of the research interview are communicated to participants and in how interview data are interpreted.
Further, some argue that a limitation of interviews is that they are an artificial form of data collection. By taking people out of their daily lives and asking them to stand back and pass comment, we are creating a distance that makes it difficult to use such data to say something meaningful about people’s actions, experiences and views. Other approaches, such as ethnography, might be more suitable for tapping into what people actually do, as opposed to what they say they do 65 .
Dynamism and replicability
Interviews following a semi-structured format offer flexibility both to the researcher and the participant. As the conversation develops, the interlocutors can explore the topics raised in much more detail, if desired, or pass over ones that are not relevant. This flexibility allows for the unexpected and the unforeseen to be incorporated into the scope of the research.
However, this flexibility has a related challenge of replicability. Interviews cannot be reproduced because they are contingent upon the interaction between the researcher and the participant in that given moment of interaction. In some research paradigms, replicability can be a way of interrogating the robustness of research claims, by seeing whether they hold when they are repeated. This is not a useful framework to bring to in-depth interviews and instead quality criteria (such as transparency) tend to be employed as criteria of rigour.
Accessing the private and personal
Interviews have been recognized for their strength in accessing private, personal issues, which participants may feel more comfortable talking about in a one-to-one conversation. Furthermore, interviews are likely to take a more personable form with their extended questions and answers, perhaps making a participant feel more at ease when discussing sensitive topics in such a context. There is a similar, but separate, argument made about accessing what are sometimes referred to as vulnerable groups, who may be difficult to make contact with using other research methods.
There is an associated challenge of anonymity. There can be types of in-depth interview that make it particularly challenging to protect the identities of participants, such as interviewing within a small community, or multiple members of the same household. The challenge to ensure anonymity in such contexts is even more important and difficult when the topic of research is of a sensitive nature or participants are vulnerable.
Increasingly, researchers are collaborating in large-scale interview-based studies and integrating interviews into broader mixed-methods designs. At the same time, interviews can be seen as an old-fashioned (and perhaps outdated) mode of data collection. We review these debates and discussions and point to innovations in interview-based studies. These include the shift from face-to-face interviews to the use of online platforms, as well as integrating and adapting interviews towards more inclusive methodologies.
Collaborating and mixing
Qualitative researchers have long worked alone 66 . Increasingly, however, researchers are collaborating with others for reasons such as efficiency, institutional incentives (for example, funding for collaborative research) and a desire to pool expertise (for example, studying similar phenomena in different contexts 67 or via different methods). Collaboration can occur across disciplines and methods, cases and contexts and between industry/business, practitioners and researchers. In many settings and contexts, collaboration has become an imperative 68 .
Cheek notes how collaboration provides both advantages and disadvantages 68 . For example, collaboration can be advantageous, saving time and building on the divergent knowledge, skills and resources of different researchers. Scholars with different theoretical or case-based knowledge (or contacts) can work together to build research that is comparative and/or more than the sum of its parts. But such endeavours also carry with them practical and political challenges in terms of how resources might actually be pooled, shared or accounted for. When undertaking such projects, as Morse notes, it is worth thinking about the nature of the collaboration and being explicit about such a choice, its advantages and its disadvantages 66 .
A further tension, but also a motivation for collaboration, stems from integrating interviews as a method in a mixed-methods project, whether with other qualitative researchers (to combine with, for example, focus groups, document analysis or ethnography) or with quantitative researchers (to combine with, for example, surveys, social media analysis or big data analysis). Cheek and Morse both note the pitfalls of collaboration with quantitative researchers: that quality of research may be sacrificed, qualitative interpretations watered down or not taken seriously, or tensions experienced over the pace and different assumptions that come with different methods and approaches of research 66 , 68 .
At the same time, there can be real benefits of such mixed-methods collaboration, such as reaching different and more diverse audiences or testing assumptions and theories between research components in the same project (for example, testing insights from prior quantitative research via interviews, or vice versa), as long as the skillsets of collaborators are seen as equally beneficial to the project. Cheek provides a set of questions that, as a starting point, can be useful for guiding collaboration, whether mixed methods or otherwise. First, Cheek advises asking all collaborators about their assumptions and understandings concerning collaboration. Second, Cheek recommends discussing what each perspective highlights and focuses on (and conversely ignores or sidelines) 68 .
A different way to engage with the idea of collaboration and mixed methods research is by fostering greater collaboration between researchers in the Global South and Global North, thus reversing trends of researchers from the Global North extracting knowledge from the Global South 69 . Such forms of collaboration also align with interview innovations, discussed below, that seek to transform traditional interview approaches into more participatory and inclusive (as part of participatory methodologies).
Digital innovations and challenges
The ongoing COVID-19 pandemic has centred the question of technology within interview-based fieldwork. Although conducting synchronous oral interviews online — for example, via Zoom, Skype or other such platforms — has been a method used by a small constituency of researchers for many years, it became (and remains) a necessity for many researchers wanting to continue or start interview-based projects while COVID-19 prevents face-to-face data collection.
In the past, online interviews were often framed as an inferior form of data collection for not providing the kinds of (often necessary) insights and forms of immersion face-to-face interviews allow 70 , 71 . Online interviews do tend to be more decontextualized than interviews conducted face-to-face 72 . For example, it is harder to recognize, engage with and respond to non-verbal cues 71 . At the same time, they broaden participation to those who might not have been able to access or travel to sites where interviews would have been conducted otherwise, for example people with disabilities. Online interviews also offer more flexibility in terms of scheduling and time requirements. For example, they provide more flexibility around precarious employment or caring responsibilities without having to travel and be away from home. In addition, online interviews might also reduce discomfort between researchers and participants, compared with face-to-face interviews, enabling more discussion of sensitive material 71 . They can also provide participants with more control, enabling them to turn on and off the microphone and video as they choose, for example, to provide more time to reflect and disconnect if they so wish 72 .
That said, online interviews can also introduce new biases based on access to technology 72 . For example, in the Global South, there are often urban/rural and gender gaps between who has access to mobile phones and who does not, meaning that some population groups might be overlooked unless researchers sample mindfully 71 . There are also important ethical considerations when deciding between online and face-to-face interviews. Online interviews might seem to imply lower ethical risks than face-to-face interviews (for example, they lower the chances of identification of participants or researchers), but they also offer more barriers to building trust between researchers and participants 72 . Interacting only online with participants might not provide the information needed to assess risk, for example, participants’ access to a private space to speak 71 . Just because online interviews might be more likely to be conducted in private spaces does not mean that private spaces are safe, for example, for victims of domestic violence. Finally, online interviews prompt further questions about decolonizing research and engaging with participants if research is conducted from afar 72 , such as how to include participants meaningfully and challenge dominant assumptions while doing so remotely.
A further digital innovation, modulating how researchers conduct interviews and the kinds of data collected and analysed, stems from the use and integration of (new) technology, such as WhatsApp text or voice notes to conduct synchronous or asynchronous oral or written interviews 73 . Such methods can provide more privacy, comfort and control to participants and make recruitment easier, allowing participants to share what they want when they want to, using technology that already forms a part of their daily lives, especially for young people 74 , 75 . Such technology is also emerging in other qualitative methods, such as focus groups, with similar arguments around greater inclusivity versus traditional offline modes. Here, the digital challenge might be higher for researchers than for participants if they are less used to such technology 75 . And while there might be concerns about the richness, depth and quality of written messages as a form of interview data, Gibson reports that the reams of transcripts that resulted from a study using written messaging were dense with meaning to be analysed 75 .
Like with online and face-to-face interviews, it is important also to consider the ethical questions and challenges of using such technology, from gaining consent to ensuring participant safety and attending to their distress, without cues, like crying, that might be more obvious in a face-to-face setting 75 , 76 . Attention to the platform used for such interviews is also important and researchers should be attuned to the local and national context. For example, in China, many platforms are neither legal nor available 76 . There, more popular platforms — like WeChat — can be highly monitored by the government, posing potential risks to participants depending on the topic of the interview. Ultimately, researchers should consider trade-offs between online and offline interview modalities, being attentive to the social context and power dynamics involved.
The next 5–10 years
Continuing to integrate (ethically) this technology will be among the major persisting developments in interview-based research, whether to offer more flexibility to researchers or participants, or to diversify who can participate and on what terms.
Pushing the idea of inclusion even further is the potential for integrating interview-based studies within participatory methods, which are also innovating via integrating technology. There is no hard and fast line between researchers using in-depth interviews and participatory methods; many who employ participatory methods will use interviews at the beginning, middle or end phases of a research project to capture insights, perspectives and reflections from participants 77 , 78 . Participatory methods emphasize the need to resist existing power and knowledge structures. They broaden who has the right and ability to contribute to academic knowledge by including and incorporating participants not only as subjects of data collection, but as crucial voices in research design and data analysis 77 . Participatory methods also seek to facilitate local change and to produce research materials, whether for academic or non-academic audiences, including films and documentaries, in collaboration with participants.
In responding to the challenges of COVID-19, capturing the fraught situation wrought by the pandemic and the momentum to integrate technology, participatory researchers have sought to continue data collection from afar. For example, Marzi has adapted an existing project to co-produce participatory videos, via participants’ smartphones in Medellin, Colombia, alongside regular check-in conversations/meetings/interviews with participants 79 . Integrating participatory methods into interview studies offers a route by which researchers can respond to the challenge of diversifying knowledge, challenging assumptions and power hierarchies and creating more inclusive and collaborative partnerships between participants and researchers in the Global North and South.
Brinkmann, S. & Kvale, S. Doing Interviews Vol. 2 (Sage, 2018). This book offers a good general introduction to the practice and design of interview-based studies.
Silverman, D. A Very Short, Fairly Interesting And Reasonably Cheap Book About Qualitative Research (Sage, 2017).
Yin, R. K. Case Study Research And Applications: Design And Methods (Sage, 2018).
Small, M. L. How many cases do I need?’ On science and the logic of case selection in field-based research. Ethnography 10 , 5–38 (2009). This article convincingly demonstrates how the logic of qualitative research differs from quantitative research and its goal of representativeness.
Google Scholar
Gerson, K. & Damaske, S. The Science and Art of Interviewing (Oxford Univ. Press, 2020).
Glaser, B. G. & Strauss, A. L. The Discovery Of Grounded Theory: Strategies For Qualitative Research (Aldine, 1967).
Braun, V. & Clarke, V. To saturate or not to saturate? Questioning data saturation as a useful concept for thematic analysis and sample-size rationales. Qual. Res. Sport Exerc. Health 13 , 201–216 (2021).
Guest, G., Bunce, A. & Johnson, L. How many interviews are enough? An experiment with data saturation and variability. Field Methods 18 , 59–82 (2006).
Vasileiou, K., Barnett, J., Thorpe, S. & Young, T. Characterising and justifying sample size sufficiency in interview-based studies: systematic analysis of qualitative health research over a 15-year period. BMC Med. Res. Methodol. 18 , 148 (2018).
Silverman, D. How was it for you? The Interview Society and the irresistible rise of the (poorly analyzed) interview. Qual. Res. 17 , 144–158 (2017).
Jerolmack, C. & Murphy, A. The ethical dilemmas and social scientific tradeoffs of masking in ethnography. Sociol. Methods Res. 48 , 801–827 (2019).
MathSciNet Google Scholar
Reyes, V. Ethnographic toolkit: strategic positionality and researchers’ visible and invisible tools in field research. Ethnography 21 , 220–240 (2020).
Guillemin, M. & Gillam, L. Ethics, reflexivity and “ethically important moments” in research. Qual. Inq. 10 , 261–280 (2004).
Summers, K. For the greater good? Ethical reflections on interviewing the ‘rich’ and ‘poor’ in qualitative research. Int. J. Soc. Res. Methodol. 23 , 593–602 (2020). This article argues that, in qualitative interview research, a clearer distinction needs to be drawn between ethical commitments to individual research participants and the group(s) to which they belong, a distinction that is often elided in existing ethics guidelines.
Yusupova, G. Exploring sensitive topics in an authoritarian context: an insider perspective. Soc. Sci. Q. 100 , 1459–1478 (2019).
Hemming, J. in Surviving Field Research: Working In Violent And Difficult Situations 21–37 (Routledge, 2009).
Murphy, E. & Dingwall, R. Informed consent, anticipatory regulation and ethnographic practice. Soc. Sci. Med. 65 , 2223–2234 (2007).
Kostovicova, D. & Knott, E. Harm, change and unpredictability: the ethics of interviews in conflict research. Qual. Res. 22 , 56–73 (2022). This article highlights how interviews need to be considered as ethically unpredictable moments where engaging with change among participants can itself be ethical.
Andersson, R. Illegality, Inc.: Clandestine Migration And The Business Of Bordering Europe (Univ. California Press, 2014).
Ellis, R. What do we mean by a “hard-to-reach” population? Legitimacy versus precarity as barriers to access. Sociol. Methods Res. https://doi.org/10.1177/0049124121995536 (2021).
Article Google Scholar
Braun, V. & Clarke, V. Thematic Analysis: A Practical Guide (Sage, 2022).
Alejandro, A. & Knott, E. How to pay attention to the words we use: the reflexive review as a method for linguistic reflexivity. Int. Stud. Rev. https://doi.org/10.1093/isr/viac025 (2022).
Alejandro, A., Laurence, M. & Maertens, L. in International Organisations and Research Methods: An Introduction (eds Badache, F., Kimber, L. R. & Maertens, L.) (Michigan Univ. Press, in the press).
Teeger, C. “Both sides of the story” history education in post-apartheid South Africa. Am. Sociol. Rev. 80 , 1175–1200 (2015).
Crotty, M. The Foundations Of Social Research: Meaning And Perspective In The Research Process (Routledge, 2020).
Potter, J. & Hepburn, A. Qualitative interviews in psychology: problems and possibilities. Qual. Res. Psychol. 2 , 281–307 (2005).
Taylor, S. What is Discourse Analysis? (Bloomsbury Publishing, 2013).
Riessman, C. K. Narrative Analysis (Sage, 1993).
Corbin, J. M. & Strauss, A. Grounded theory research: Procedures, canons and evaluative criteria. Qual. Sociol. 13 , 3–21 (1990).
Timmermans, S. & Tavory, I. Theory construction in qualitative research: from grounded theory to abductive analysis. Sociol. Theory 30 , 167–186 (2012).
Fereday, J. & Muir-Cochrane, E. Demonstrating rigor using thematic analysis: a hybrid approach of inductive and deductive coding and theme development. Int. J. Qual. Meth. 5 , 80–92 (2006).
Potter, J. & Hepburn, A. Eight challenges for interview researchers. Handb. Interview Res. 2 , 541–570 (2012).
Tobias Neely, M. Fit to be king: how patrimonialism on Wall Street leads to inequality. Socioecon. Rev. 16 , 365–385 (2018).
Rao, A. H. Gendered interpretations of job loss and subsequent professional pathways. Gend. Soc. 35 , 884–909 (2021). This article used interview data from unemployed men and women to illuminate how job loss becomes a pivotal moment shaping men’s and women’s orientation to paid work, especially in terms of curtailing women’s participation in paid work.
Hart, C. G. Trajectory guarding: managing unwanted, ambiguously sexual interactions at work. Am. Sociol. Rev. 86 , 256–278 (2021).
Goode, J. P. & Stroup, D. R. Everyday nationalism: constructivism for the masses. Soc. Sci. Q. 96 , 717–739 (2015).
Antonsich, M. The ‘everyday’ of banal nationalism — ordinary people’s views on Italy and Italian. Polit. Geogr. 54 , 32–42 (2016).
Fox, J. E. & Miller-Idriss, C. Everyday nationhood. Ethnicities 8 , 536–563 (2008).
Yusupova, G. Cultural nationalism and everyday resistance in an illiberal nationalising state: ethnic minority nationalism in Russia. Nations National. 24 , 624–647 (2018).
Kiely, R., Bechhofer, F. & McCrone, D. Birth, blood and belonging: identity claims in post-devolution Scotland. Sociol. Rev. 53 , 150–171 (2005).
Brubaker, R. & Cooper, F. Beyond ‘identity’. Theory Soc. 29 , 1–47 (2000).
Brubaker, R. Ethnicity Without Groups (Harvard Univ. Press, 2004).
Knott, E. Kin Majorities: Identity And Citizenship In Crimea And Moldova From The Bottom-Up (McGill Univ. Press, 2022).
Bucher, B. & Jasper, U. Revisiting ‘identity’ in international relations: from identity as substance to identifications in action. Eur. J. Int. Relat. 23 , 391–415 (2016).
Carter, P. L. Stubborn Roots: Race, Culture And Inequality In US And South African Schools (Oxford Univ. Press, 2012).
Bourdieu, P. in Cultural Theory: An Anthology Vol. 1, 81–93 (eds Szeman, I. & Kaposy, T.) (Wiley-Blackwell, 2011).
Calarco, J. M. Negotiating Opportunities: How The Middle Class Secures Advantages In School (Oxford Univ. Press, 2018).
Carter, P. L. Keepin’ It Real: School Success Beyond Black And White (Oxford Univ. Press, 2005).
Carter, P. L. ‘Black’ cultural capital, status positioning and schooling conflicts for low-income African American youth. Soc. Probl. 50 , 136–155 (2003).
Warikoo, N. K. The Diversity Bargain Balancing Acts: Youth Culture in the Global City (Univ. California Press, 2011).
Morris, E. W. “Tuck in that shirt!” Race, class, gender and discipline in an urban school. Sociol. Perspect. 48 , 25–48 (2005).
Lareau, A. Social class differences in family–school relationships: the importance of cultural capital. Sociol. Educ. 60 , 73–85 (1987).
Warikoo, N. Addressing emotional health while protecting status: Asian American and white parents in suburban America. Am. J. Sociol. 126 , 545–576 (2020).
Teeger, C. Ruptures in the rainbow nation: how desegregated South African schools deal with interpersonal and structural racism. Sociol. Educ. 88 , 226–243 (2015). This article leverages ‘ deviant ’ cases in an interview study with South African high schoolers to understand why the majority of participants were reluctant to code racially charged incidents at school as racist.
Ispa-Landa, S. & Conwell, J. “Once you go to a white school, you kind of adapt” black adolescents and the racial classification of schools. Sociol. Educ. 88 , 1–19 (2015).
Dwyer, P. J. Punitive and ineffective: benefit sanctions within social security. J. Soc. Secur. Law 25 , 142–157 (2018).
Summers, K. & Young, D. Universal simplicity? The alleged simplicity of Universal Credit from administrative and claimant perspectives. J. Poverty Soc. Justice 28 , 169–186 (2020).
Summers, K. et al. Claimants’ Experiences Of The Social Security System During The First Wave Of COVID-19 . https://www.distantwelfare.co.uk/winter-report (2021).
Desmond, M. Evicted: Poverty And Profit In The American City (Crown Books, 2016).
Reyes, V. Three models of transparency in ethnographic research: naming places, naming people and sharing data. Ethnography 19 , 204–226 (2018).
Robson, C. & McCartan, K. Real World Research (Wiley, 2016).
Bauer, M. W. & Gaskell, G. Qualitative Researching With Text, Image And Sound: A Practical Handbook (SAGE, 2000).
Lareau, A. Listening To People: A Practical Guide To Interviewing, Participant Observation, Data Analysis And Writing It All Up (Univ. Chicago Press, 2021).
Lincoln, Y. S. & Guba, E. G. Naturalistic Inquiry (Sage, 1985).
Jerolmack, C. & Khan, S. Talk is cheap. Sociol. Methods Res. 43 , 178–209 (2014).
Morse, J. M. Styles of collaboration in qualitative inquiry. Qual. Health Res. 18 , 3–4 (2008).
ADS Google Scholar
Lamont, M. et al. Getting Respect: Responding To Stigma And Discrimination In The United States, Brazil And Israel (Princeton Univ. Press, 2016).
Cheek, J. Researching collaboratively: implications for qualitative research and researchers. Qual. Health Res. 18 , 1599–1603 (2008).
Botha, L. Mixing methods as a process towards indigenous methodologies. Int. J. Soc. Res. Methodol. 14 , 313–325 (2011).
Howlett, M. Looking at the ‘field’ through a zoom lens: methodological reflections on conducting online research during a global pandemic. Qual. Res. https://doi.org/10.1177/1468794120985691 (2021).
Reñosa, M. D. C. et al. Selfie consents, remote rapport and Zoom debriefings: collecting qualitative data amid a pandemic in four resource-constrained settings. BMJ Glob. Health 6 , e004193 (2021).
Mwambari, D., Purdeková, A. & Bisoka, A. N. Covid-19 and research in conflict-affected contexts: distanced methods and the digitalisation of suffering. Qual. Res. https://doi.org/10.1177/1468794121999014 (2021).
Colom, A. Using WhatsApp for focus group discussions: ecological validity, inclusion and deliberation. Qual. Res. https://doi.org/10.1177/1468794120986074 (2021).
Kaufmann, K. & Peil, C. The mobile instant messaging interview (MIMI): using WhatsApp to enhance self-reporting and explore media usage in situ. Mob. Media Commun. 8 , 229–246 (2020).
Gibson, K. Bridging the digital divide: reflections on using WhatsApp instant messenger interviews in youth research. Qual. Res. Psychol. 19 , 611–631 (2020).
Lawrence, L. Conducting cross-cultural qualitative interviews with mainland Chinese participants during COVID: lessons from the field. Qual. Res. https://doi.org/10.1177/1468794120974157 (2020).
Ponzoni, E. Windows of understanding: broadening access to knowledge production through participatory action research. Qual. Res. 16 , 557–574 (2016).
Kong, T. S. Gay and grey: participatory action research in Hong Kong. Qual. Res. 18 , 257–272 (2018).
Marzi, S. Participatory video from a distance: co-producing knowledge during the COVID-19 pandemic using smartphones. Qual. Res. https://doi.org/10.1177/14687941211038171 (2021).
Kvale, S. & Brinkmann, S. InterViews: Learning The Craft Of Qualitative Research Interviewing (Sage, 2008).
Rao, A. H. The ideal job-seeker norm: unemployment and marital privileges in the professional middle-class. J. Marriage Fam. 83 , 1038–1057 (2021).
Rivera, L. A. Ivies, extracurriculars and exclusion: elite employers’ use of educational credentials. Res. Soc. Stratif. Mobil. 29 , 71–90 (2011).
Download references
Acknowledgements
The authors are grateful to the MY421 team and students for prompting how best to frame and communicate issues pertinent to in-depth interview studies.
Author information
Authors and affiliations.
Department of Methodology, London School of Economics, London, UK
Eleanor Knott, Aliya Hamid Rao, Kate Summers & Chana Teeger
You can also search for this author in PubMed Google Scholar
Contributions
The authors contributed equally to all aspects of the article.
Corresponding author
Correspondence to Eleanor Knott .
Ethics declarations
Competing interests.
The authors declare no competing interests.
Peer review
Peer review information.
Nature Reviews Methods Primers thanks Jonathan Potter and the other, anonymous, reviewer(s) for their contribution to the peer review of this work.
Additional information
Publisher’s note.
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
A pre-written interview outline for a semi-structured interview that provides both a topic structure and the ability to adapt flexibly to the content and context of the interview and the interaction between the interviewer and participant. Others may refer to the topic guide as an interview protocol.
Here we refer to the participants that take part in the study as the sample. Other researchers may refer to the participants as a participant group or dataset.
This involves dividing a population into smaller groups based on particular characteristics, for example, age or gender, and then sampling randomly within each group.
A sampling method where the guiding logic when deciding who to recruit is to achieve the most relevant participants for the research topic, in terms of being rich in information or insights.
Researchers ask participants to introduce the researcher to others who meet the study’s inclusion criteria.
Similar to stratified sampling, but participants are not necessarily randomly selected. Instead, the researcher determines how many people from each category of participants should be recruited. Recruitment can happen via snowball or purposive sampling.
A method for developing, analysing and interpreting patterns across data by coding in order to develop themes.
An approach that interrogates the explicit, implicit and taken-for-granted dimensions of language as well as the contexts in which it is articulated to unpack its purposes and effects.
A form of transcription that simplifies what has been said by removing certain verbal and non-verbal details that add no further meaning, such as ‘ums and ahs’ and false starts.
The analytic framework, theoretical approach and often hypotheses, are developed prior to examining the data and then applied to the dataset.
The analytic framework and theoretical approach is developed from analysing the data.
An approach that combines deductive and inductive components to work recursively by going back and forth between data and existing theoretical frameworks (also described as an iterative approach). This approach is increasingly recognized not only as a more realistic but also more desirable third alternative to the more traditional inductive versus deductive binary choice.
A theoretical apparatus that emphasizes the role of cultural processes and capital in (intergenerational) social reproduction.
Rights and permissions
Springer Nature or its licensor holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
Reprints and permissions
About this article
Cite this article.
Knott, E., Rao, A.H., Summers, K. et al. Interviews in the social sciences. Nat Rev Methods Primers 2 , 73 (2022). https://doi.org/10.1038/s43586-022-00150-6
Download citation
Accepted : 14 July 2022
Published : 15 September 2022
DOI : https://doi.org/10.1038/s43586-022-00150-6
Share this article
Anyone you share the following link with will be able to read this content:
Sorry, a shareable link is not currently available for this article.
Provided by the Springer Nature SharedIt content-sharing initiative
This article is cited by
What the textbooks don’t teach about the reality of running a digitally enabled health study: a phenomenological interview study.
- Meghan Bradway
- Elia Garbarron
- Eirik Årsand
BMC Digital Health (2024)
Development of a digital intervention for psychedelic preparation (DIPP)
- Rosalind G. McAlpine
- Matthew D. Sacchet
- Sunjeev K. Kamboj
Scientific Reports (2024)
‘Life Minus Illness = Recovery’: A Phenomenological Study About Experiences and Meanings of Recovery Among Individuals with Serious Mental Illness from Southern India
- Srishti Hegde
- Shalini Quadros
- Vinita A. Acharya
Community Mental Health Journal (2024)
Can dry rivers provide a good quality of life? Integrating beneficial and detrimental nature’s contributions to people over time
- Néstor Nicolás-Ruiz
- María Luisa Suárez
- Cristina Quintas-Soriano
Ambio (2024)
Is e-business breaking down barriers for Bangladesh’s young female entrepreneurs during the COVID-19 pandemic? A qualitative study
- Md. Fouad Hossain Sarker
- Sayed Farrukh Ahmed
- Md. Salman Sohel
SN Social Sciences (2024)
Quick links
- Explore articles by subject
- Guide to authors
- Editorial policies
Sign up for the Nature Briefing newsletter — what matters in science, free to your inbox daily.
Qualitative research: open-ended and closed-ended questions
12 april 2019 • 2160 words, 9 min. read Latest update : 30 march 2021
By Pierre-Nicolas Schwab
Our guide to market research can be downloaded free of charge
From a very young age, we have been taught what open-ended , and closed-ended questions are. How are these terms applied to qualitative research methods , and in particular to interviews?
Kathryn J. Roulston reveals her definitions of an open-ended and closed-ended question in qualitative interviews in the SAGE Encyclopedia on Qualitative Research Methods . If you want to better understand how qualitative methods fit within a market research approach, we suggest you take a look at our step-by-step guide to market research which can be downloaded in our white papers section (free of charge and direct; we won’t ask you any contact details first).
credits : Shutterstock
Introduction
- Closed-ended question
- Open-ended question
Examples of closed and open-ended questions for satisfaction research
Examples of closed and open-ended questions for innovation research, some practical advice.
Let us begin by pointing out that open and closed-ended questions do not at first glance serve the same purpose in market research. Instead, open-ended questions are used in qualitative research (see the video above for more information) and closed-ended questions are used in quantitative research. But this is not an absolute rule.
In this article, you will, therefore, discover the definitions of closed and open-ended questions. We will also explain how to use them. Finally, you will find examples of how to reformulate closed-ended questions into open-ended questions in the case of :
- satisfaction research
- innovation research
Essential elements to remember
Open-ended questions:
- for qualitative research (interviews and focus groups)
- very useful in understanding in detail the respondent and his or her position concerning a defined topic/situation
- particularly helpful in revealing new aspects , sub-themes, issues, and so forth that are unknown or unidentified
Closed-ended questions:
- for quantitative research (questionnaires and surveys)
- suitable for use with a wide range of respondents
- allow a standardised analysis of the data
- are intended to confirm the hypotheses (previously stated in the qualitative part)
A closed-ended question
A closed-ended question offers, as its name suggests, a limited number of answers. For example, the interviewee may choose a response from a panel of given proposals or a simple “yes” or “no”. They are intended to provide a precise, clearly identifiable and easily classified answer.
This type of question is used in particular during interviews whose purpose is to be encoded according to pre-established criteria. There is no room for free expression, as is the case for open-ended questions. Often, this type of question is integrated into 1-to-1 interview guides and focus groups and allows the interviewer to collect the same information from a wide range of respondents in the same format. Indeed, closed-ended questions are designed and oriented to follow a pattern and framework predefined by the interviewer.
Two forms of closed-ended questions were identified by the researchers: specific closed-ended questions , where respondents are offered choice answers, and implicit closed-ended questions , which include assumptions about the answers that can be provided by respondents.
A specific closed-ended question would be formulated as follows, for example: “how many times a week do you eat pasta: never, once or twice a week, 3 to 4 times, 5 times a week or more?” The adapted version in the form of an implicit closed-ended question would be formulated as follows: “how many times a week do you eat pasta? ». The interviewer then assumes that the answers will be given in figures.
The Net Promoter Score (or NPS) is an example of closed question (see example above)
While some researchers consider the use of closed-ended questions to be restrictive, others see in these questions – combined with open-ended questions – the possibility of generating different data for analysis. How these closed-ended questions can be used, formulated, sequenced, and introduced in interviews depends heavily upon the studies and research conducted upstream.
[call-to-action-read id=”35845″]
In what context are closed-ended questions used?
- Quantitative research (tests, confirmation of the qualitative research and so on).
- Research with a large panel of respondents (> 100 people)
- Recurrent research whose results need to be compared
- When you need confirmation, and the possible answers are limited in effect
An open-ended question
An open-ended question is a question that allows the respondent to express himself or herself freely on a given subject. This type of question is, as opposed to closed-ended questions, non-directive and allows respondents to use their own terms and direct their response at their convenience.
Open-ended questions, and therefore without presumptions, can be used to see which aspect stands out from the answers and thus could be interpreted as a fact, behaviour, reaction, etc. typical to a defined panel of respondents.
For example, we can very easily imagine open-ended questions such as “describe your morning routine”. Respondents are then free to describe their routine in their own words, which is an important point to consider. Indeed, the vocabulary used is also conducive to analysis and will be an element to be taken into account when adapting an interview guide, for example, and/or when developing a quantitative questionnaire.
As we detail in our market research whitepaper , one of the recommendations to follow when using open-ended questions is to start by asking more general questions and end with more detailed questions. For example, after describing a typical day, the interviewer may ask for clarification on one of the aspects mentioned by the respondent. Also, open-ended questions can also be directed so that the interviewee evokes his or her feelings about a situation he or she may have mentioned earlier.
In what context are open-ended questions used?
- Mainly in qualitative research (interviews and focus groups)
- To recruit research participants
- During research to test a design, a proof-of-concept, a prototype, and so on, it is essential to be able to identify the most appropriate solution.
- Analysis of consumers and purchasing behaviour
- Satisfaction research , reputation, customer experience and loyalty research, and so forth.
- To specify the hypotheses that will enable the quantitative questionnaire to be drawn up and to propose a series of relevant answers (to closed-ended questions ).
It is essential for the interviewer to give respondents a framework when using open-ended questions. Without this context, interviewees could be lost in the full range of possible responses, and this could interfere with the smooth running of the interview. Another critical point concerning this type of question is the analytical aspect that follows. Indeed, since respondents are free to formulate their answers, the data collected will be less easy to classify according to fixed criteria.
The use of open-ended questions in quantitative questionnaires
Rules are made to be broken; it is well known. Most quantitative questionnaires, therefore, contain free fields in which the respondent is invited to express his or her opinions in a more “free” way. But how to interpret these answers?
When the quantity of answers collected is small (about ten) it will be easy to proceed manually, possibly by coding (for more information on the coding technique, go here ). You will thus quickly identify the main trends and recurring themes.
On the other hand, if you collect hundreds or even thousands of answers, the analysis of these free answers will be much more tedious. How can you do it? In this case, we advise you to use a semantic analysis tool. This is most often an online solution, specific to a language, which is based on an NLP (Natural Language Processing) algorithm. This algorithm will, very quickly, analyse your corpus and bring out the recurring themes . It is not a question here of calculating word frequencies, but instead of working on semantics to analyse the repetition of a subject.
Of course, the use of open-ended questions in interviews does not exclude the use of closed-ended questions. Alternating these two types of questions in interviews, whether 1-to-1 interviews, group conversations or focus groups, is conducive not only to maintaining a specific dynamic during the interview but also to be able to frame specific responses while leaving certain fields of expression free. In general, it is interesting for the different parties that the interview ends with an open-ended question where the interviewer asks the interviewee if he or she has anything to add or if he or she has any questions.
In this type of research, you confront the respondent with a new, innovative product or service. It is therefore important not to collect superficial opinions but to understand in depth the respondent’s attitude towards the subject of the market research.
As you will have understood, open-ended questions are particularly suitable for qualitative research (1-to-1 interviews and focus groups). How should they be formulated?
The Five W’s; (who did what, where, when, and why ) questioning method should be used rigorously and sparingly :
- Who? Who? What? Where? When? How? How much? “are particularly useful for qualitative research and allow you to let your interlocutor develop and elaborate a constructed and informative answer.
- Use the CIT (Critical Incident Technique) method with formulations that encourage your interviewer to go into the details of an experience: “Can you describe/tell me…? “, ” What did you feel? “, ” According to you… “
- Avoid asking “Why?”: this question may push the interviewer into a corner, and the interviewer may seek logical reasoning for his or her previous answer. Be gentle with your respondents by asking them to tell you more, to give you specific examples, for example.
In contrast, closed-ended questions are mainly used and adapted to quantitative questionnaires since they facilitate the analysis of the results by framing the participants’ answers.
Image: Shutterstock
- Market research methods
Pour offrir les meilleures expériences, nous utilisons des technologies telles que les cookies pour stocker et/ou accéder aux informations des appareils. Le fait de consentir à ces technologies nous permettra de traiter des données telles que le comportement de navigation ou les ID uniques sur ce site. Le fait de ne pas consentir ou de retirer son consentement peut avoir un effet négatif sur certaines caractéristiques et fonctions.
An official website of the United States government
Official websites use .gov A .gov website belongs to an official government organization in the United States.
Secure .gov websites use HTTPS A lock ( Lock Locked padlock icon ) or https:// means you've safely connected to the .gov website. Share sensitive information only on official, secure websites.
- Publications
- Account settings
- Advanced Search
- Journal List
Open-ended interview questions and saturation
Susan c weller, ben vickers, h russell bernard, alyssa m blackburn, stephen borgatti, clarence c gravlee, jeffrey c johnson.
- Author information
- Article notes
- Copyright and License information
Competing Interests: The authors have declared that no competing interests exist.
‡ These authors also contributed equally to this work.
* E-mail: [email protected]
Contributed equally.
Received 2018 Feb 16; Accepted 2018 May 22; Collection date 2018.
This is an open access article distributed under the terms of the Creative Commons Attribution License , which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Sample size determination for open-ended questions or qualitative interviews relies primarily on custom and finding the point where little new information is obtained (thematic saturation). Here, we propose and test a refined definition of saturation as obtaining the most salient items in a set of qualitative interviews (where items can be material things or concepts, depending on the topic of study) rather than attempting to obtain all the items . Salient items have higher prevalence and are more culturally important. To do this, we explore saturation, salience, sample size, and domain size in 28 sets of interviews in which respondents were asked to list all the things they could think of in one of 18 topical domains. The domains—like kinds of fruits (highly bounded) and things that mothers do (unbounded)—varied greatly in size. The datasets comprise 20–99 interviews each (1,147 total interviews). When saturation was defined as the point where less than one new item per person would be expected, the median sample size for reaching saturation was 75 (range = 15–194). Thematic saturation was, as expected, related to domain size. It was also related to the amount of information contributed by each respondent but, unexpectedly, was reached more quickly when respondents contributed less information. In contrast, a greater amount of information per person increased the retrieval of salient items. Even small samples ( n = 10) produced 95% of the most salient ideas with exhaustive listing, but only 53% of those items were captured with limited responses per person (three). For most domains, item salience appeared to be a more useful concept for thinking about sample size adequacy than finding the point of thematic saturation. Thus, we advance the concept of saturation in salience and emphasize probing to increase the amount of information collected per respondent to increase sample efficiency.
Introduction
Open-ended questions are used alone or in combination with other interviewing techniques to explore topics in depth, to understand processes, and to identify potential causes of observed correlations. Open-ended questions may produce lists, short answers, or lengthy narratives, but in all cases, an enduring question is: How many interviews are needed to be sure that the range of salient items (in the case of lists) and themes (in the case of narratives) are covered. Guidelines for collecting lists, short answers, and narratives often recommend continuing interviews until saturation is reached. The concept of theoretical saturation —the point where the main ideas and variations relevant to the formulation of a theory have been identified—was first articulated by Glaser and Strauss [ 1 , 2 ] in the context of how to develop grounded theory. Most of the literature on analyzing qualitative data, however, deals with observable thematic saturation —the point during a series of interviews where few or no new ideas, themes, or codes appear [ 3 – 6 ].
Since the goal of research based on qualitative data is not necessarily to collect all or most ideas and themes but to collect the most important ideas and themes, salience may provide a better guide to sample size adequacy than saturation. Salience (often called cultural or cognitive salience) can be measured by the frequency of item occurrence (prevalence) or the order of mention [ 7 , 8 ]. These two indicators tend to be correlated [ 9 ]. In a set of lists of birds, for example, robins are reported more frequently and appear earlier in responses than are penguins. Salient terms are also more prevalent in everyday language [ 10 – 12 ]. Item salience also may be estimated by combining an item’s frequency across lists with its rank/position on individual lists [ 13 – 16 ].
In this article, we estimate the point of complete thematic saturation and the associated sample size and domain size for 28 sets of interviews in which respondents were asked to list all the things they could think of in one of 18 topical domains. The domains—like kinds of fruits (highly bounded) and things that mothers do (unbounded)—varied greatly in size. We also examine the impact of the amount of information produced per respondent on saturation and on the number of unique items obtained by comparing results generated by asking respondents to name all the relevant things they can with results obtained from a limited number of responses per question, as with standard open-ended questioning. Finally, we introduce an additional type of saturation based on the relative salience of items and themes— saturation in salience —and we explore whether the most salient items are captured at minimal sample sizes. A key conclusion is that saturation may be more meaningfully and more productively conceived of as the point where the most salient ideas have been obtained .
Recent research on saturation
Increasingly, researchers are applying systematic analysis and sampling theory to untangle the problems of saturation and sample size in the enormous variety of studies that rely on qualitative data—including life-histories, discourse analysis, ethnographic decision modeling, focus groups, grounded theory, and more. For example, Guest et al.[ 17 ] and others[ 18 – 19 ] found that about 12–16 interviews were adequate to achieve thematic saturation. Similarly, Hagaman and Wutich [ 20 ] found that they could reliably retrieve the three most salient themes from each of the four sites in the first 16 interviews.
Galvin[ 21 ] and Fugard and Potts[ 22 ] framed the sample size problem for qualitative data in terms of the likelihood that a specific idea or theme will or will not appear in a set of interviews, given the prevalence of those ideas in the population. They used traditional statistical theory to show that small samples retrieve only the most prevalent themes and that larger samples are more sensitive and can retrieve less prevalent themes as well. This framework can be applied to the expectation of observing or not observing almost anything. Here it would apply to the likelihood of observing a theme in a set of narrative responses, but it applies equally well for situations such as behavioral observations, where specific behaviors are being observed and sampled[ 23 ]. For example, to obtain ideas or themes that would be reported by about one out of five people (0.20 prevalence) or a behavior with the same prevalence, there is a 95% likelihood of seeing those themes or behaviors at least once in 14 interviews—if those themes or behaviors are independent.
Saturation and sample size have also begun to be examined with multivariate models and simulations. Tran et al. [ 24 ] estimated thematic saturation and the total number of themes from open-ended questions in a large survey and then simulated data to test predictions about sample size and saturation. They assumed that items were independent and found that sample sizes greater than 50 would add less than one new theme per additional person interviewed.
Similarly, Lowe et al. [ 25 ] estimated saturation and domain size in two examples and in simulated datasets, testing the effect of various parameters. Lowe et al. found that responses were not independent across respondents and that saturation may never be reached. In this context, non-independence refers to the fact that some responses are much more likely than others to be repeated across people. Instead of complete saturation, they suggested using a goal such as obtaining a percentage of the total domain that one would like to capture (e.g., 90%) and the average prevalence of items one would like to observe to estimate the appropriate sample size. For example, to obtain 90% of items with an average prevalence of 0.20, a sample size of 36 would be required. Van Rijnsoever [ 26 ] used simulated datasets to study the accumulation of themes across sample size increments and assessed the effect of different sampling strategies, item prevalence, and domain size on saturation. Van Rijnsoever’s results indicated that the point of saturation was dependent on the prevalence of the items.
As modeling estimates to date have been based on only one or two real-world examples, it is clear that more empirical examples are needed. Here, we use 28 real-world examples to estimate the impact of sample size, domain size, and amount of information per respondent on saturation and on the total number of items obtained. Using the proportion of people in a sample that mentioned an item as a measure of salience, we find that even small samples may adequately capture the most salient items.
Materials and methods
The datasets comprise 20–99 interviews each (1,147 total interviews). Each example elicits multiple responses from each individual in response to an open-ended question (“Name all the … you can think of”) or a question with probes (“What other … are there?”).
Data were obtained by contacting researchers who published analyses of free lists. Examples with 20 or more interviews were selected so that saturation could be examined incrementally through a range of sample sizes. Thirteen published examples were obtained on: illness terms [ 27 ] (in English and in Spanish); birds, flowers, and fabrics [ 28 ]; recreational/street drugs and fruits [ 29 ]; things mothers do (online, face-to-face, and written administration) and racial and ethnic groups [ 30 ] (online, face-to-face, and written administration). Fifteen unpublished classroom educational examples were obtained on: soda pops (Weller, n.d.); holidays (two replications), things that might appear in a living room, characteristics of a good leader (two replications), a good team (two replications), and a good team player (Johnson, n.d.); and bad words, industries (two replications), cultural industries (two replications), and scary things (Borgatti, n.d.). (Original data appear online in S1 Appendix The Original Data for the 28 Examples.)
Some interviews were face to face, some were written responses, and some were administered on-line. Investigators varied in their use of prompts, using nonspecific (What other … are there?), semantic (repeating prior responses and then asking for others), and/or alphabetic prompts (going through the alphabet and asking for others). Brewer [ 29 ] and Gravlee et al. [ 30 ] specifically examined the effect of prompting on response productivity, although the Brewer et al. examples in these analyses contain results before extensive prompting and the Gravlee et al. examples contain results after prompting. The 28 examples, their topic, source, sample size, the question used in the original data collection, and the three most frequently mentioned items appear in Table 1 . All data were collected and analyzed without personal identifying information.
Table 1. The examples.
For each example, statistical models describe the pattern of obtaining new or unique items with incremental increases in sample size. Individual lists were first analyzed with Flame [ 31 , 32 ] to provide the list of unique items for each example and the Smith [ 14 ] and Sutrop [ 15 ] item salience scores. Duplicate items due to spelling, case errors, spacing, or variations were combined.
To help develop an interviewing stopping rule, a simple model was used to predict the unique number of items contributed by each additional respondent. Generalized linear models (GLM, log-linear models for count data) were used to predict the unique number of items added by each respondent (incrementing sample size), because number of unique items added by each respondent (count data) is approximately Poisson distributed. For each example, models were fit with ordinary least squares linear regression, Poisson, and negative binomial probability distributions. Respondents were assumed to be in random order, in the order in which they occurred in each dataset, although in some cases they were in the order they were interviewed. Goodness-of-fit was compared across the three models with minimized deviants (the Akaike Information Criterion, AIC) to find the best-fitting model [ 33 ]. Using the best-fitting model for each example, the point of saturation was estimated as the point where the expected number of new items was one or less. Sample size and domain size were estimated at the point of saturation, and total domain size was estimated for an infinite sample size from the model for each example as the limit of a geometric series (assuming a negative slope).
Because the GLM models above used only incremental sample size to predict the total number of unique items (domain size) and ignored variation in the number of items provided by each person and variation in item salience, an additional analysis was used to estimate domain size while accounting for subject and item heterogeneity. For that analysis, domain size was estimated with a capture-recapture estimation technique used for estimating the size of hidden populations. Domain size was estimated from the total number of items on individual lists and the number of matching items between pairs of lists with a log-linear analysis. For example, population size can be estimated from the responses of two people as the product of their number of responses divided by the number of matching items (assumed to be due to chance). If Person#1 named 15 illness terms and Person#2 named 31 terms and they matched on five illnesses, there would be 41 unique illness terms and the estimated total number of illness terms based on these two people would be (15 x 31) /5 = 93.
A log-linear solution generalizes this logic from a 2 x 2 table to a 2 K table [ 34 ]. the capture–recapture solution estimates total population size for hidden populations using the pattern of recapture (matching) between pairs of samples (respondents) to estimate the population size. An implementation in R with GLM uses a log-linear form to estimate population size based on recapture rates (Rcapture [ 35 , 36 ]). In this application, it is assumed that the population does not change between interviews (closed population) and models are fit with: (1) no variation across people or items (M 0 ); (2) variation only across respondents (M t ); (3) variation only across items (M h ); and (4) variation due to an interaction between people and items (M ht ). For each model, estimates were fit with binomial, Chao’s lower bound estimate, Poisson, Darroch log normal, and gamma distributions [ 35 ]. Variation among items (heterogeneity) is a test for a difference in the probabilities of item occurrence and, in this case, is equivalent to a test for a difference in item salience among the items. Due to the large number of combinations needed to estimate these models, Rcapture software estimates are provided for all four models only up to a sample of size 10. For larger sample sizes (all examples in this study had sample sizes of 20 or larger), only model 1 with no effects for people or items (the binomial model) and model 3 with item effects (item salience differences) were tested. Therefore, models were fit at size 10, to test all four models and then at the total available sample size.
Descriptive information for the examples appears in Table 2 . The first four columns list the name of the example, the sample size in the original study, the mean list length (with the range of the list length across respondents), and the total number of unique items obtained. For the Holiday1 example, interviews requested names of holidays (“Write down all the holidays you can think of”), there were 24 respondents, the average number of holidays listed per person (list length) was 13 (ranging from five to 29), and 62 unique holidays were obtained.
Table 2. Estimated point of saturation and domain size.
nbi = Negative binomial-identity, p = Poisson-log ; c = Chao’s Lower bound; g = gamma
Predicting thematic saturation from sample size
The free-list counts showed a characteristic descending curve where an initial person listed new themes and each additional person repeated some themes already reported and added new items, but fewer and fewer new items were added with incremental increases in sample size. All examples were fit using the GLM log-link and identity-link with normal, Poisson, and negative binomial distributions. The negative binomial model resulted in a better fit than the Poisson (or identity-link models) for most full-listing examples, providing the best fit to the downward sloping curve with a long tail. Of the 28 examples, only three were not best fit by negative binomial log-link models: the best-fitting model for two examples was the Poisson log-link model (GoodTeam1 and GoodTeam2Player) and one was best fit by the negative binomial identity-link model (CultInd1).
Sample size was a significant predictor of the number of new items for 21 of the 28 examples. Seven examples did not result in a statistically significant fit (Illnesses-US, Holiday2, Industries1, Industries2, GoodTLeader, GoodTeam2Player, and GoodTeam3). The best-fitting model was used to predict the point of saturation and domain size for all 28 examples ( S2 Appendix GLM Statistical Model Results for the 28 Examples).
Using the best-fitting GLM models we estimated the predicted sample size for reaching saturation. Saturation was defined as the point where less than one new item would be expected for each additional person interviewed. Using the models to solve for the sample size (X) when only one item was obtained per person (Y = 1) and rounding up to the nearest integer, provided the point of saturation (Y≤1.0). Table 2 , column five, reports the sample size where saturation was reached (N SAT ). For Holiday1, one or fewer new items were obtained per person when X = 16.98. Rounding up to the next integer provides the saturation point (N SAT = 17). For the Fruit domain, saturation occurred at a sample size of 15.
Saturation was reached at sample sizes of 15–194, with a median sample size of 75. Only five examples (Holiday1, Fruits, Birds, Flowers, and Drugs) reached saturation within the original study sample size and most examples did not reach saturation even after four or five dozen interviews. A more liberal definition of saturation, defined as the point where less than two new items would be expected for each additional person (solving for Y≤2), resulted in a median sample size for reaching saturation of 50 (range 10–146).
Some domains were well bounded and were elicited with small sample sizes. Some were not. In fact, most of the distributions exhibited a very long tail—where many items were mentioned by only one or two people. Fig 1 shows the predicted curves for all examples for sample sizes of 1 to 50. Saturation is the point where the descending curve crosses Y = 1 (or Y = 2). Although the expected number of unique ideas or themes obtained for successive respondents tends to decrease as the sample size increases, this occurs rapidly in some domains and slowly or not at all in other domains. Fruits, Holiday1, and Illness-G are domains with the three bottom-most curves and the steepest descent, indicating that saturation was reached rapidly and with small sample sizes. The three top-most curves are the Moms-F2F, Industries1, and Industries2 domains, which reached saturation at very large sample sizes or essentially did not reach saturation.
Fig 1. The number of unique items provided with increasing sample size.
Estimating domain size
Because saturation appeared to be related to domain size and some investigators state that a percentage of the domain might be a better standard [ 25 ], domain size was also estimated. First, total domain size was estimated with the GLM models obtained above. Domain size was estimated at the point of saturation by cumulatively summing the number of items obtained for sample sizes n = 1, n = 2, n = 3, … to N SAT . For the Holiday1 sample, summing the number of predicted unique items for sample sizes n = 1 to n = 17 should yield 51 items ( Table 2 , Domain Size at Saturation, D SAT ). Thus, the model predicted that approximately 51 holidays would be obtained by the time saturation was reached.
The total domain size was estimated using a geometric series, summing the estimated number of unique items obtained cumulatively across people in an infinitely large sample. For the Holiday1 domain, the total domain size was estimated as 56 (see Table 2 , Total Domain Size D TOT ). So for the Holiday1 domain, although the total domain size was estimated to be 57, the model predicted that saturation occurred when the sample size reached 17, and at that point 51 holidays should be retrieved. Model predictions were close to the empirical data, as 62 holidays were obtained with a sample of 24.
Larger sample sizes were needed to reach saturation in larger domains; the largest domains were MomsF2F, Industries1, and Industries2 each estimated to have about 1,000 items and more than 100 interviews needed to approach saturation. Saturation (Y≤1) tended to occur at about 90% of the total domain size. For Fruits, the domain size at saturation was 51 and the total domain size was estimated at 53 (51/53 = 96%) and for MomsF2F, domain size at saturation was 904 and total domain size was 951 (95%).
Second, total domain size was estimated using a capture-recapture log-linear model with a parameter for item heterogeneity [ 35 , 36 ]. A descending, concave curve is diagnostic of item heterogeneity and was present in almost all of the examples. The estimated population sizes using R-Capture appear in the last column of Table 2 . When the gamma distribution provided the best fit to the response data, the domain size increased by an order of magnitude as did the standard error on that estimate. When responses fit a gamma distribution, the domain may be extremely large and may not readily reach saturation.
Inclusion of the pattern of matching items across people with a parameter for item heterogeneity (overlap in items between people due to salience) resulted in larger population size estimates than those above without heterogeneity. Estimation from the first two respondents was not helpful and provided estimates much lower than those from any of the other methods. The simple model without subject or item effects (the binomial model) did not fit any of the examples. Estimation from the first 10 respondents in each example suggested that more variation was due to item heterogeneity than to item and subject heterogeneity, so we report only the estimated domain size with the complete samples accounting for item heterogeneity in salience.
Overall, the capture–recapture estimates incorporating the effect of salience were larger than the GLM results above without a parameter for salience. For Fruits, the total domain size was estimated as 45 from the first two people; as 88 (gamma distribution estimate) from the first 10 people with item heterogeneity and as 67 (Chao lower bound estimate) with item and subject heterogeneity; and using the total sample ( n = 33) the binomial model (without any heterogeneity parameters) estimated the domain size as 62 (but did not fit the data) and with item heterogeneity the domain size was estimated as 73 (the best-fitting model used the Chao lower bound estimate). Thus, the total domain size for Fruits estimated with a simple GLM model was 53 and with a capture–recapture model (including item heterogeneity) was 73 ( Table 2 , last column). Similarly, the domain size for Holiday1 was estimated at 57 with the simple GLM model and 100 with capture-recapture model. Domain size estimates suggest that even the simplest domains can be large and that inclusion of item heterogeneity increases domain size estimates.
Saturation and the number of responses per person
The original examples used an exhaustive listing of responses to obtain about a half dozen (GoodLeader and GoodTeam2Player) to almost three dozen responses per person (Industries1 and Industries2). A question is whether saturation and the number of unique ideas obtained might be affected by the number of responses per person. Since open-ended questions may obtain only a few responses, we limited the responses to a maximum of three per person, truncating lists to see the effect on the number of items obtained at different sample sizes and the point of saturation.
When more information (a greater number of responses) was collected per person, more unique items were obtained even at smaller sample sizes ( Table 3 ). The amount of information retrieved per sample can be conceived of in terms of bits of information per sample and is roughly the average number of responses per person times the sample size so that, with all other things being equal, larger sample sizes with less probing should approach the same amount of information obtained with smaller samples and more probing. So, for a given sample size, a study with six responses per person should obtain twice as much information as a study with three responses per person. In the GoodLeader, GoodTeam1, and GoodTeam2Player examples, the average list length was approximately six and when the sample size was 10 (6 x 10 = 60 bits of information), approximately twice as many items were obtained as when lists were truncated to three responses (3 x 10 = 30 bits of information).
Table 3. Comparison of number of unique items obtained with full free lists and with three or fewer responses.
Increasing the sample size proportionately increases the amount of information, but not always. For Scary Things, 5.6 bits more information were collected per person with full listing (16.9 average list length) than with three or fewer responses per person (3.0 list length); and the number of items obtained in a sample size of 10 with full listing (102) was roughly 5.6 times greater than that obtained with three responses per person (18 items). However, at a sample size of 20 the number of unique items with free lists was only 4.5 times larger (153) than the number obtained with three responses per person (34). Across examples , interviews that obtained more information per person were more productive and obtained more unique items overall even with smaller sample sizes than did interviews with only three responses per person .
Using the same definition of saturation (the point where less than one new item would be expected for each additional person interviewed), less information per person resulted in reaching saturation at much smaller sample sizes. Fig 2 shows the predicted curves for all examples when the number of responses per person is three (or fewer). The Holiday examples reached saturation (fewer than one new item per person) with a sample size of 17 (Holiday1) with 13.0 average responses per person and 87 (Holiday2) with 17.8 average responses ( Table 2 ), but reached saturation with a sample size of only 9 (Holiday 1 and Holiday2) when there were a maximum of three responses per person ( Table 3 , last column). With three or fewer responses per person, the median sample size for reaching saturation was 16 (range: 4–134). Thus, fewer responses per person resulted in reaching saturation at smaller sample sizes and resulted in fewer domain items.
Fig 2. The number of unique items provided with increasing sample size when there are three or fewer responses per person.
Salience and sample size
Saturation did not seem to be a useful guide for determining a sample size stopping point, because it was sensitive both to domain size and the number of responses per person. Since a main goal of open-ended interviews is to obtain the most important ideas and themes, it seemed reasonable to consider item salience as an alternative guide to assist with determining sample size adequacy. Here, the question would be: Whether or not complete saturation is achieved, are the most salient ideas and themes captured in small samples?
A simple and direct measure of item salience is the proportion of people in a sample that mentioned an item [ 37 ]. However, we examined the correlation between the sample proportions and two salience indices that combine the proportion of people mentioning an item with the item’s list position [ 13 – 15 ]. Because the item frequency distributions have long tails—there are many items mentioned by only one or two people—we focused on only those items mentioned by two or more people (24–204 items) and used the full lists provided by each respondent. The average Spearman correlation between the Smith and Sutrop indices in the 28 examples was 0.95 (average Pearson correlation 0.96, 95%CI: 0.92, 0.98), between the Smith index and the sample proportions was 0.89 (average Pearson 0.96, 95%CI: 0.915, 0.982), and between the Sutrop index and the sample proportions was 0.86 (average Pearson 0.88 95%CI: 0.753, 0.943). Thus, the three measures were highly correlated in 28 examples that varied in content, number of items, and sample size—validating the measurement of a single construct.
To test whether the most salient ideas and themes were captured in smaller samples or with limited probing, we used the sample proportions to estimate item salience and compared the set of most salient items across sample sizes and across more and less probing. Specifically, we defined a set of salient items for each example as those mentioned by 20% or more in the sample of size 20 (because all examples had at least 20) with full-listing (because domains were more detailed). We compared the set of salient items with the set of items obtained at smaller sample sizes and with fewer responses per person.
The set size for salient items (prevalence ≥ 20%) was not related to overall domain size, but was an independent characteristic of each domain and whether there were core or prototypical items with higher salience. Most domains had about two dozen items mentioned by 20% or more of the original listing sample ( n = 20), but some domains had only a half dozen or fewer items (GoodLeader, GoodTeam2Player, GoodTeam3). With full listing, 26 of 28 examples captured more than 95% of the salient ideas in the first 10 interviews: 18 examples captured 100%, eight examples captured 95–99%, one example captured 91%, and one captured 80% ( Table 4 ). With a maximum of three responses per person, about two-thirds of the salient items (68%) were captured with 20 interviews and about half of the items (53%) were captured in the first 10 interviews. With a sample size of 20, a greater number of responses per person resulted in approximately 50% more items than with three responses per person. Extensive probing resulted in a greater capture of salient items even with smaller sample sizes.
Table 4. Capture of salient items with full free list and with three or fewer responses.
Summary and discussion.
The strict notion of complete saturation as the point where few or no new ideas are observed is not a useful concept to guide sample size decisions, because it is sensitive to domain size and the amount of information contributed by each respondent. Larger sample sizes are necessary to reach saturation for large domains and it is difficult to know, when starting a study, just how large the domain or set of ideas will be. Also, when respondents only provide a few responses or codes per person, saturation may be reached quickly. So, if complete thematic saturation is observed, it is difficult to know whether the domain is small or whether the interviewer did only minimal probing.
Rather than attempting to reach complete saturation with an incremental sampling plan, a more productive focus might be on gaining more depth with probing and seeking the most salient ideas. Rarely do we need all the ideas and themes, rather we tend to be looking for important or salient ideas. A greater number of responses per person resulted in the capture of a greater number of salient items. With exhaustive listing, the first 10 interviews obtained 95% of the salient ideas (defined here as item prevalence of 0.20 or more), while only 53% of those ideas were obtained in 10 interviews with three or fewer responses per person.
We used a simple statistical model to predict the number of new items added by each additional person and found that complete saturation was not a helpful concept for free-lists, as the median sample size was 75 to get fewer than one new idea per person. It is important to note that we assumed that interviews were in a random order or were in the order that the interviews were conducted and were not reordered to any kind of optimum. The reordering of respondents to maximally fit a saturation curve may make it appear that saturation has been reached at a smaller sample size [ 31 ].
Most of the examples examined in this study needed sample sizes larger than most qualitative researchers use to reach saturation. Mason’s [ 6 ] review of 298 PhD dissertations in the United Kingdom, all based on qualitative data, found a mean sample size of 27 (range 1–95). Here, few of the examples reached saturation with less than four dozen interviews. Even with large sample sizes, some domains may continue to add new items. For very large domains, an incremental sampling strategy may lead to dozens and dozens of interviews and still not reach complete saturation. The problem is that most domains have very long tails in the distribution of observed items, with many items mentioned by only one or two people. A more liberal definition of complete saturation (allowing up to two new items per person) allowed for saturation to occur at smaller sample sizes, but saturation still did not occur until a median sample size of 50.
In the examples we studied, most domains were large and domain size affected when saturation occurred. Unfortunately, there did not seem to be a good or simple way at the outset to tell if a domain would be large or small. Most domains were much larger than expected, even on simple topics. Domain size varied by substantive content, sample, and degree of heterogeneity in salience. Domain size and saturation were sample dependent, as the holiday examples showed. Also, domain size estimates did not mean that there are only 73 fruits, rather the pattern of naming fruits—for this particular sample—indicated a set size of 73.
It was impossible to know, when starting, if a topic or domain was small and would require 15 interviews to reach saturation or if the domain was large and would require more than 100 interviews to reach saturation. Although eight of the examples had sample sizes of 50–99, sample sizes in qualitative studies are rarely that large. Estimates of domain size were even larger when models incorporated item heterogeneity (salience). The Fruit example had an estimated domain size of 53 without item heterogeneity, but 73 with item heterogeneity. The estimated size of the Fabric domain increased from 210 to 753 when item heterogeneity was included.
The number of responses per person affected both saturation and the number of obtained items. A greater number of responses per person resulted in a greater yield of domain items. The bits of information obtained in a sample can be approximated by the product of the average number of responses per person (list length) and the number of people in a sample. However, doubling the sample size did not necessarily double the unique items obtained because of item salience and sampling variability. When only a few items are obtained from each person, only the most salient items tend to be provided by each person and fewer items are obtained overall.
Brewer [ 29 ] explored the effect of probing or prompting on interview yield. Brewer examined the use of a few simple prompts: simply asking for more responses, providing alphabetical cues, or repeating the last response(s) and asking again for more information. Semantic cueing, repeating prior responses and asking for more information, increased the yield by approximately 50%. The results here indicated a similar pattern. When more information was elicited per person , about 50% more domain items were retrieved than when people provided a maximum of three responses.
Interviewing to obtain multiple responses also affects saturation. With few responses per person, complete saturation was reached rapidly. Without extensive interview probing, investigators may reach saturation quickly and assume they have a sample sufficient to retrieve most of the domain items. Unfortunately, different degrees of salience among items may cause strong effects for respondents to repeat similar ideas—the most salient ideas—without elaborating on less salient or less prevalent ideas, resulting in a set of only the ideas with the very highest salience. If an investigator wishes to obtain most of the ideas that are relevant in a domain , a small sample with extensive probing (listing) will prove much more productive than a large sample with casual or no probing .
Recently, Galvin [ 21 ] and Fugard and Potts [ 22 ] framed sample size estimation for qualitative interviewing in terms of binomial probabilities. However, results for the 28 examples with multiple responses per person suggest that this may not be appropriate because of the interdependencies among items due to salience. The capture–recapture analysis indicated that none of the 28 examples fit the binomial distribution. Framing the sample size problem in terms that a specific idea or theme will or will not appear in a set of interviews may facilitate thinking about sample size, but such estimates may be misleading.
If a binomial distribution is assumed, sample size can be estimated from the prevalence of an idea in the population, from how confident you want to be in obtaining these ideas, and from how many times you would like these ideas to minimally appear across participants in your interviews. A binomial estimate assumes independence (no difference in salience across items) and predicts that if an idea or theme actually occurs in 20% of the population, there is a 90% or higher likelihood of obtaining those themes at least once in 11 interviews and a 95% likelihood in 14 interviews. In contrast, our results indicated that the heterogeneity in salience across items causes these estimates to underestimate the necessary sample size as items with ≥20% prevalence were captured in 10 interviews in only 64% of the samples with full listing and in only 4% (one) of samples with three or fewer responses.
Lowe et al. [ 25 ] also found that items were not independent and that binomial estimates significantly underestimated sample size. They proposed sample size estimation from the desired proportion of items at a given average prevalence. Their formula predicts that 36 interviews would be necessary to capture 90% of items with an average prevalence of 0.20, regardless of degree of heterogeneity in salience, domain size, or amount of information provided per respondent. Although they included a parameter for non-independence, their model does not seem to be accurate for cases with limited responses or for large domains.
Conclusions
In general , probing and prompting during an interview seems to matter more than the number of interviews . Thematic saturation may be an illusion and may result from a failure to use in-depth probing during the interview. A small sample ( n = 10) can collect some of the most salient ideas, but a small sample with extensive probing can collect most of the salient ideas. A larger sample ( n = 20) is more sensitive and can collect more prevalent and more salient ideas, as well as less prevalent ideas, especially with probing. Some domains, however, may not have items with high prevalence. Several of the domains examined had only a half dozen or fewer items with prevalence of 20% or more. The direct link between salience and population prevalence offers a rationale for sample size and facilitates study planning. If the goal is to get a few widely held ideas, a small sample size will suffice. If the goal is to explore a larger range of ideas, a larger sample size or extensive probing is needed. Sample sizes of one to two dozen interviews should be sufficient with exhaustive probing (listing interviews), especially in a coherent domain. Empirically observed stabilization of item salience may indicate an adequate sample size.
A next step would be to test whether these conclusions and recommendations hold for other types of open-ended questions, such as narratives, life histories, and open-ended questions in large surveys. Open-ended survey questions are inefficient and result in thin or sparse data with few responses per person because of a lack of prompting. Tran et al. [ 24 ] reported item prevalence of 0.025 in answers in a large Internet survey suggesting few responses per person. In contrast, we used an item prevalence of 0.20 and higher to identify the most salient items in each domain and the highest prevalence in each domain ranged from 0.30 to 0.80 ( Table 1 ). Inefficiency in open-ended survey questions is likely due to the dual purpose of the questions: They try to define the range of possible answers and get the respondent’s answer. A better approach might be to precede survey development with a dozen free-listing interviews to get the range of possible responses and then use that content to design structured survey questions.
Another avenue for investigation is how our findings on thematic saturation compare to theoretical saturation in grounded theory studies [ 2 , 38 , 39 ]. Grounded theory studies rely on theoretical sampling–-an iterative procedure in which a single interview is coded for themes; the next respondent is selected to discover new themes and relationships between themes; and so on, until no more relevant themes or inter-relationships are discovered and a theory is built to explain the facts/themes of the case under study. In contrast this study examined thematic saturation, the simple accumulation of ideas and themes, and found that saturation in salience was more attainable–-perhaps more important—than thematic saturation.
Supporting information
Acknowledgments.
We would like to thank Devon Brewer and Kristofer Jennings for providing feedback on an earlier version of this manuscript. We would also like to thank Devon Brewer for providing data from his studies on free-lists.
Data Availability
All relevant data are available as an Excel file in the Supporting Information files.
Funding Statement
This project was partially supported by the Agency for Healthcare Research and Quality (R24HS022134). Funding for the original data sets was from the National Science Foundation (#BCS-0244104) for Gravlee et al. (2013), from the National Institute on Drug Abuse (R29DA10640) for Brewer et al. (2002), and from the Air Force Office of Scientific Research for Brewer (1995). Content is solely the responsibility of the authors and does not necessarily represent the official views of the funding agencies.
- 1. Glaser BG. The constant comparative method of qualitative analysis. Soc Probl. 1965; 12: 436−445. [ Google Scholar ]
- 2. Glaser BG, Strauss AL. The discovery of grounded theory: Strategies for qualitative research. New Brunswick, NJ: Aldine, 1967. [ Google Scholar ]
- 3. Lincoln YS, Guba EG. Naturalistic inquiry. Beverly Hills, CA: Sage, 1985. [ Google Scholar ]
- 4. Morse JM. Strategies for sampling In: Morse JM, editor, Qualitative Nursing Research: A Contemporary Dialogue. Rockville, MD: Aspen Press, 1989, pp. 117–131. [ Google Scholar ]
- 5. Sandelowski M. Sample size in qualitative research. Res Nurs Health. 1995; 18:179−183. [ DOI ] [ PubMed ] [ Google Scholar ]
- 6. Mason M. Sample size and saturation in PhD studies using qualitative interviews. Forum: Qualitative Social Research 2010; 11. http://nbn-resolving.de/urn:nbn:de:0114-fqs100387 (accessed December 26, 2017).
- 7. Thompson EC, Juan Z. Comparative cultural salience: measures using free-list data. Field Methods. 2006; 18: 398–412. [ Google Scholar ]
- 8. Romney A, D'Andrade R. Cognitive aspects of English kin terms. Am Anthro. 1964; 66: 146–170. [ Google Scholar ]
- 9. Bousfield WA, Barclay WD. The relationship between order and frequency of occurrence of restricted associative responses. J Exp Psych. 1950; 40: 643–647. [ DOI ] [ PubMed ] [ Google Scholar ]
- 10. Geeraerts D. Theories of lexical semantics. Oxford University Press, 2010. [ Google Scholar ]
- 11. Hajibayova L. Basic-Level Categories a Review. J of Info Sci. 2013; 1–12. [ Google Scholar ]
- 12. Berlin Brent. Ethnobiological classification In: Rosch E, Lloyd BB, eds. Cognition and Categorization. Hillsdale, NJ: Erlbaum; 1978, pp 9–26. [ Google Scholar ]
- 13. Smith JJ, Furbee L, Maynard K, Quick S, Ross L. Salience counts: A domain analysis of English color terms. J Linguistic Anthro. 1995; 5(2): 203–216. [ Google Scholar ]
- 14. Smith JJ, Borgatti SP. Salience counts-and so does accuracy: Correcting and updating a measure for free-list-item salience. J Linguistic Anthro. 1997; 7: 208–209. [ Google Scholar ]
- 15. Sutrop U. List task and a cognitive salience index. Field Methods. 2001;13(3): 263–276. [ Google Scholar ]
- 16. Robbins MC, Nolan JM, Chen D. An improved measure of cognitive salience in free listing tasks: a Marshallese example. Field Methods. 2017;29:395−9:395. [ Google Scholar ]
- 17. Guest G, Bunce A, Johnson L. How many interviews are enough? An experiment with data saturation and variability. Field Methods. 2006; 18: 59–82. [ Google Scholar ]
- 18. Coenen M, Stamm TA, Stucki G, Cieza A. Individual interviews and focus groups in patients with rheumatoid arthritis: A comparison of two qualitative methods. Quality of Life Research. 2012; 21:359–70. doi: 10.1007/s11136-011-9943-2 [ DOI ] [ PubMed ] [ Google Scholar ]
- 19. Francis JJ, Johnston M, Robertson C, Glidewell L, Entwistle V, Eccles MP, et al. What is an adequate sample size? Operationalising data saturation for theory-based interview studies. Psychol Health. 2010; 25:1229–45. doi: 10.1080/08870440903194015 [ DOI ] [ PubMed ] [ Google Scholar ]
- 20. Hagaman A K, Wutich A. How many interviews are enough to identify metathemes in multisited and cross-cultural research? Another perspective on Guest, Bunce, and Johnson’s (2006) landmark study. Field Methods. 2017; 29:23−41. [ Google Scholar ]
- 21. Galvin R. How many interviews are enough? Do qualitative interviews in building energy consumption research produce reliable knowledge? J of Building Engineering, 2015; 1: 2–12. [ Google Scholar ]
- 22. Fugard AJ, Potts HW. Supporting thinking on sample sizes for thematic analyses: a quantitative tool. Int J Soc Res Methodol. 2015; 18: 669–684. [ Google Scholar ]
- 23. Bernard HR, Killworth PD. Sampling in time allocation research. Ethnology. 1993; 32:207–15. [ Google Scholar ]
- 24. Tran VT, Porcher R, Tran VC, Ravaud P. Predicting data saturation in qualitative surveys with mathematical models from ecological research. J Clin Epi. 2017; February;82:71–78.e2. doi: 10.1016/j.jclinepi.2016.10.001 Epub 2016 Oct 24. [ DOI ] [ PubMed ] [ Google Scholar ]
- 25. Lowe A, Norris AC, Farris AJ, Babbage DR. Quantifying thematic saturation in qualitative data analysis. Field Methods. 2018; 30 (in press, online first: http://journals.sagepub.com/doi/full/10.1177/1525822X17749386 ). [ Google Scholar ]
- 26. Van Rijnsoever FJ. (I Can’t Get No) Saturation: A simulation and guidelines for sample sizes in qualitative research. PLoS ONE. 2017; 12: e0181689 doi: 10.1371/journal.pone.0181689 [ DOI ] [ PMC free article ] [ PubMed ] [ Google Scholar ]
- 27. Weller S. New data on intracultural variability: the hot-cold concept of medicine and illness. Hum Organ. 1983; 42: 249–257. [ Google Scholar ]
- 28. Brewer DD. Cognitive indicators of knowledge in semantic domains. J of Quant Anthro. 1995; 5: 107–128. [ Google Scholar ]
- 29. Brewer DD. Supplementary interviewing techniques to maximize output in free listing tasks. Field Methods. 2002; 14: 108–118. [ Google Scholar ]
- 30. Gravlee CC, Bernard HR, Maxwell CR, Jacobsohn A. Mode effects in free-list elicitation: comparing oral, written, and web-based data collection. Soc Sci Comput Rev. 2013; 31: 119–132. [ Google Scholar ]
- 31. Pennec F, Wencelius J, Garine E, Bohbot H. Flame v1.2—Free-list analysis under Microsoft Excel (Software and English User Guide), 2014. Available from: https://www.researchgate.net/publication/261704624_Flame_v12_-_Free-List_Analysis_Under_Microsoft_Excel_Software_and_English_User_Guide (10/19/17) [ Google Scholar ]
- 32. Borgatti SP. Software review: FLAME (Version 1.1). Field Methods. 2015; 27:199–205. [ Google Scholar ]
- 33. SAS Institute Inc. GENMOD SAS/STAT® 13.1 User’s Guide. Cary, NC: SAS Institute Inc., 2013. [ Google Scholar ]
- 34. Bishop Y, Feinberg S, Holland P. Discrete multivariate statistics: Theory and practice, MIT Press, Cambridge, 1975. [ Google Scholar ]
- 35. Baillargeon S, Rivest LP. Rcapture: loglinear models for capture-recapture in R. J Statistical Software. 2007; 19: 1–31. [ Google Scholar ]
- 36. Rivest LP, Baillargeon S: Package ‘Rcapture’ Loglinear models for capture-recapture experiments, in CRAN, R, Documentation Feb 19, 2015.
- 37. Weller SC, Romney AK. Systematic data collection (Vol. 10). Sage, 1988. [ Google Scholar ]
- 38. Morse M. Theoretical saturation In Lewis-Beck MS, Bryman A, Liao TF, editors. The Sage encyclopedia of social science research methods. Thousand Oaks, CA: Sage, 2004, p1123 Available from http://sk.sagepub.com/reference/download/socialscience/n1011.pdf [ Google Scholar ]
- 39. Tay, I. To what extent should data saturation be used as a quality criterion in qualitative research? Linked in 2014. Available from https://www.linkedin.com/pulse/20140824092647-82509310-to-what-extent-should-data-saturation-be-used-as-a-quality-criterion-in-qualitative-research
Associated Data
This section collects any data citations, data availability statements, or supplementary materials included in this article.
Supplementary Materials
Data availability statement.
- View on publisher site
- PDF (1.9 MB)
- Collections
Similar articles
Cited by other articles, links to ncbi databases.
- Download .nbib .nbib
- Format: AMA APA MLA NLM
Add to Collections
- Harvard Library
- Research Guides
- Faculty of Arts & Sciences Libraries
Library Support for Qualitative Research
- Interview Research
General Handbooks and Overviews
Qualitative research communities.
- Types of Interviews
- Recruiting & Engaging Participants
- Interview Questions
- Conducting Interviews
- Recording & Transcription
- Data Analysis
- Managing Interview Data
- Finding Extant Interviews
- Past Workshops on Interview Research
- Methodological Resources
- Remote & Virtual Fieldwork
- Data Management & Repositories
- Campus Access
- Interviews as a Method for Qualitative Research (video) This short video summarizes why interviews can serve as useful data in qualitative research.
- InterViews by Steinar Kvale Interviewing is an essential tool in qualitative research and this introduction to interviewing outlines both the theoretical underpinnings and the practical aspects of the process. After examining the role of the interview in the research process, Steinar Kvale considers some of the key philosophical issues relating to interviewing: the interview as conversation, hermeneutics, phenomenology, concerns about ethics as well as validity, and postmodernism. Having established this framework, the author then analyzes the seven stages of the interview process - from designing a study to writing it up.
- Practical Evaluation by Michael Quinn Patton Surveys different interviewing strategies, from, a) informal/conversational, to b) interview guide approach, to c) standardized and open-ended, to d) closed/quantitative. Also discusses strategies for wording questions that are open-ended, clear, sensitive, and neutral, while supporting the speaker. Provides suggestions for probing and maintaining control of the interview process, as well as suggestions for recording and transcription.
- The SAGE Handbook of Interview Research by Amir B. Marvasti (Editor); James A. Holstein (Editor); Jaber F. Gubrium (Editor); Karyn D. McKinney (Editor) The new edition of this landmark volume emphasizes the dynamic, interactional, and reflexive dimensions of the research interview. Contributors highlight the myriad dimensions of complexity that are emerging as researchers increasingly frame the interview as a communicative opportunity as much as a data-gathering format. The book begins with the history and conceptual transformations of the interview, which is followed by chapters that discuss the main components of interview practice. Taken together, the contributions to The SAGE Handbook of Interview Research: The Complexity of the Craft encourage readers simultaneously to learn the frameworks and technologies of interviewing and to reflect on the epistemological foundations of the interview craft.
- International Congress of Qualitative Inquiry They host an annual confrerence at the University of Illinois at Urbana-Champaign, which aims to facilitate the development of qualitative research methods across a wide variety of academic disciplines, among other initiatives.
- METHODSPACE An online home of the research methods community, where practicing researchers share how to make research easier.
- Social Research Association, UK The SRA is the membership organisation for social researchers in the UK and beyond. It supports researchers via training, guidance, publications, research ethics, events, branches, and careers.
- Social Science Research Council The SSRC administers fellowships and research grants that support the innovation and evaluation of new policy solutions. They convene researchers and stakeholders to share evidence-based policy solutions and incubate new research agendas, produce online knowledge platforms and technical reports that catalog research-based policy solutions, and support mentoring programs that broaden problem-solving research opportunities.
- << Previous: Taguette
- Next: Types of Interviews >>
Except where otherwise noted, this work is subject to a Creative Commons Attribution 4.0 International License , which allows anyone to share and adapt our material as long as proper attribution is given. For details and exceptions, see the Harvard Library Copyright Policy ©2021 Presidents and Fellows of Harvard College.
Want to create or adapt books like this? Learn more about how Pressbooks supports open publishing practices.
Chapter 13: Interviews
Danielle Berkovic
Learning outcomes
Upon completion of this chapter, you should be able to:
- Understand when to use interviews in qualitative research.
- Develop interview questions for an interview guide.
- Understand how to conduct an interview.
What are interviews?
An interviewing method is the most commonly used data collection technique in qualitative research. 1 The purpose of an interview is to explore the experiences, understandings, opinions and motivations of research participants. 2 Interviews are conducted one-on-one with the researcher and the participant. Interviews are most appropriate when seeking to understand a participant’s subjective view of an experience and are also considered suitable for the exploration of sensitive topics.
What are the different types of interviews?
There are four main types of interviews:
- Key stakeholder: A key stakeholder interview aims to explore one issue in detail with a person of interest or importance concerning the research topic. 3 Key stakeholder interviews seek the views of experts on some cultural, political or health aspects of the community, beyond their personal beliefs or actions. An example of a key stakeholder is the Chief Health Officer of Victoria (Australia’s second-most populous state) who oversaw the world’s longest lockdowns in response to the COVID-19 pandemic.
- Dyad: A dyad interview aims to explore one issue in a level of detail with a dyad (two people). This form of interviewing is used when one participant of the dyad may need some support or is not wholly able to articulate themselves (e.g. people with cognitive impairment, or children). Independence is acknowledged and the interview is analysed as a unit. 4
- Narrative: A narrative interview helps individuals tell their stories, and prioritises their own perspectives and experiences using the language that they prefer. 5 This type of interview has been widely used in social research but is gaining prominence in health research to better understand person-centred care, for example, negotiating exercise and food abstinence whilst living with Type 2 diabetes. 6,7
- Life history: A life history interview allows the researcher to explore a person’s individual and subjective experiences within a history of the time framework. 8 Life history interviews challenge the researcher to understand how people’s current attitudes, behaviours and choices are influenced by previous experiences or trauma. Life history interviews have been conducted with Holocaust survivors 9 and youth who have been forcibly recruited to war. 10
Table 13.4 provides a summary of four studies, each adopting one of these types of interviews.
Interviewing techniques
There are two main interview techniques:
- Semi-structured: Semi-structured interviewing aims to explore a few issues in moderate detail, to expand the researcher’s knowledge at some level. 11 Semi-structured interviews give the researcher the advantage of remaining reasonably objective while enabling participants to share their perspectives and opinions. The researcher should create an interview guide with targeted open questions to direct the interview. As examples, semi-structured interviews have been used to extend knowledge of why women might gain excess weight during pregnancy, 12 and to update guidelines for statin uptake. 13
- In-depth: In-depth interviewing aims to explore a person’s subjective experiences and feelings about a particular topic. 14 In-depth interviews are often used to explore emotive (e.g. end-of-life care) 15 and complex (e.g. adolescent pregnancy) topics. 16 The researcher should create an interview guide with selected open questions to ask of the participant, but the participant should guide the direction of the interview more than in a semi-structured setting. In-depth interviews value participants’ lived experiences and are frequently used in phenomenology studies (as described in Chapter 6) .
When to use the different types of interview s
The type of interview a researcher uses should be determined by the study design, the research aims and objectives, and participant demographics. For example, if conducting a descriptive study, semi-structured interviews may be the best method of data collection. As explained in Chapter 5 , descriptive studies seek to describe phenomena, rather than to explain or interpret the data. A semi-structured interview, which seeks to expand upon some level of existing knowledge, will likely best facilitate this.
Similarly, if conducting a phenomenological study, in-depth interviews may be the best method of data collection. As described in Chapter 6 , the key concept of phenomenology is the individual. The emphasis is on the lived experience of that individual and the person’s sense-making of those experiences. Therefore, an in-depth interview is likely best placed to elicit that rich data.
While some interview types are better suited to certain study designs, there are no restrictions on the type of interview that may be used. For example, semi-structured interviews provide an excellent accompaniment to trial participation (see Chapter 11 about mixed methods), and key stakeholder interviews, as part of an action research study, can be used to define priorities, barriers and enablers to implementation.
How do I write my interview questions?
An interview aims to explore the experiences, understandings, opinions and motivations of research participants. The general rule is that the interviewee should speak for about 80 per cent of the interview, while the interviewer asks questions and clarifies responses for only about 20 per cent; in a 60-minute interview, that means the participant talks for roughly 48 minutes. This balance may differ depending on the interview type; for example, a semi-structured interview involves the researcher asking more questions than an in-depth interview does. Still, to facilitate free-flowing responses, it is important to use open-ended language that encourages participants to be expansive. Examples of open-ended terms include questions that start with 'who', 'how' and 'where'.
The researcher should avoid closed-ended questions, which can be answered with a simple 'yes' or 'no' and thus limit conversation. For example, asking a participant 'Did you have this experience?' can elicit a one-word 'yes', whereas asking them to 'describe your experience' will likely encourage a narrative response. Table 13.1 provides examples of terminology to include and avoid when developing interview questions.
Table 13.1. Interview question formats to use and avoid
How long should my interview be?
There is no rule about how long an interview should take. Different types of interviews will likely run for different periods of time, and the duration also depends on the research question/s and the type of participant. For example, given that a semi-structured interview seeks to expand on some previous knowledge, it may need as little as 30 minutes, or up to one hour. An in-depth interview explores a topic in greater detail and would therefore be expected to last at least an hour. A dyad interview may be as short as 15 minutes (e.g. if the dyad is a person with dementia and a family member or caregiver) or longer, depending on the pairing.
Designing your interview guide
To decide what questions to ask in an interview guide, the researcher may consult the literature, speak to experts (including people with lived experience) about the research, and draw on their current knowledge. The topics and questions should be mapped to the research question/s, and the interview guide should be developed well in advance of commencing data collection. This allows time and opportunity to pilot-test the interview guide. The pilot interview provides an opportunity to check the language, clarity, order and flow of the questions, and to determine whether the instructions given to participants before and after the interview are clear. It can be beneficial to pilot-test the interview guide with someone who is not familiar with the research topic, to make sure the language is easily understood (and will be by participants, too).

The study design should determine the number of questions asked, and the anticipated duration of the interview should guide the length of the interview guide. The participant type may also shape the guide; for example, clinicians tend to be time-poor, so shorter, focused interviews are optimal. An interview guide is also likely to be shorter for a descriptive study than for a phenomenological or ethnographic study, given the level of detail required. Chapter 5 outlined a descriptive study in which participants who had undergone percutaneous coronary intervention were interviewed. The interview guide consisted of four main questions and subsequent probing questions, linked to the research questions (see Table 13.2).[17]
Table 13.2. Interview guide for a descriptive study
Table 13.3 is an example of a larger and more detailed interview guide, designed for the qualitative component of a mixed-methods study examining the work and financial effects of living with arthritis as a younger person. The questions are mapped to the World Health Organization's International Classification of Functioning, Disability and Health, which measures health and disability at individual and population levels.[18]
Table 13.3. Detailed interview guide
It is important to create an interview guide for the following reasons:
- Creating the guide ensures the researcher is familiar with their research questions.
- Using an interview guide will enable the incorporation of feedback from the piloting process.
- It is difficult to predict how participants will respond to interview questions. They may answer in the way anticipated, or they may provide unanticipated insights that warrant follow-up. An interview guide (a physical or digital copy) enables the researcher to note these answers and follow up with appropriate inquiry.
- Participants will likely provide heterogeneous answers to certain questions. The interview guide enables the researcher to note similarities and differences across interviews, which may be important in data analysis.
- Even experienced qualitative researchers get nervous before an interview! The interview guide provides a safety net if the researcher forgets their questions or needs a prompt for the next one.
Setting up the interview
In the past, most interviews were conducted in person or by telephone. Emerging technologies promote easier access to research participation (e.g. by people living in rural or remote communities, or for people with mobility limitations). Even in metropolitan settings, many interviews are now conducted electronically (e.g. using videoconferencing platforms). Regardless of your interview setting, it is essential that the interview environment is comfortable for the participant. This process can begin as soon as potential participants express interest in your research. Following are some tips from the literature and our own experiences of leading interviews:
- Answer questions and set clear expectations. Participating in research is not an everyday task. People do not necessarily know what to expect during a research interview, and this can be daunting. Give people as much information as possible, answer their questions about the research, and set clear expectations about what the interview will entail and how long it is expected to last. Let them know that the interview will be recorded for transcription and analysis purposes. Consider sending the interview questions a few days before the interview; this gives people time and space to reflect on their experiences, consider their responses and provide informed consent for their participation.
- Consider your setting. If conducting the interview in person, consider the location and room in which the interview will be held. For example, if in a participant's home, be mindful of their private space. Ask if you should remove your shoes before entering. If they offer refreshments (which in our experience many participants do), accept with gratitude if possible. These considerations apply beyond the participant's home: if using a room in an office setting, consider privacy, confidentiality, accessibility and the potential for disruption. Consider the temperature and the furniture in the room, who may be able to overhear conversations and who may walk past. Similarly, if interviewing by phone or online, take time to assess your space, and if you are in a house or office that is not quiet or private, use headphones as needed.
- Build rapport. The research topic may be important to participants from a professional perspective, or they may have deep emotional connections to the topic of interest. Regardless of the nature of the interview, it is important to remember that participants are being asked to open up to an interviewer who is likely to be a stranger. Spend some time with participants before the interview, to make sure that they are comfortable. Engage in some general conversation, and ask if they have any questions before you start. Remember that it is not a normal part of someone’s day to participate in research. Make it an enjoyable and/or meaningful experience for them, and it will enhance the data that you collect.
- Let participants guide you. Often, researchers and participants describe the same phenomena in different ways. In the interview, reflect the participant's language. Make sure they feel heard and that they are willing and comfortable to speak openly about their experiences. For example, our research involves talking to older adults about their experience of falls. We noticed early in this research that participants did not use the word 'fall', instead using terms such as 'trip', 'went over' and 'stumbled'. As interviewers, we adopted the participants' language in our questions.
- Listen consistently and express interest. An interview is more complex than a simple question-and-answer format. The best interview data comes from participants feeling comfortable and confident to share their stories. By the time you are completing the 20th interview, it can be difficult to maintain the same level of concentration as with the first interview. Try to stay engaged: nod along with your participants, maintain eye contact, murmur in agreement and sympathise where warranted.
- The interviewer is both the data collector and the data collection instrument. The data received is only as good as the questions asked. In qualitative research, the researcher influences how participants answer questions. It is important to remain reflexive and aware of how your language, body language and attitude might influence the interview. Being rested and prepared will enhance the quality of the questions asked and hence the data collected.
- Avoid excessive use of ‘why’. It can be challenging for participants to recall why they felt a certain way or acted in a particular manner. Try to avoid asking ‘why’ questions too often, and instead adopt some of the open language described earlier in the chapter.
After your interview
When you have completed your interview, thank the participant and let them know they can contact you if they have any questions or follow-up information they would like to provide. If the interview has covered sensitive topics or the participant has become distressed during the interview, make sure that appropriate referrals and follow-up are provided (see Section 6).
Download the recording from your device and make sure it is saved in a secure location that can only be accessed by people on the approved research team (see Chapters 35 and 36).
It is important to know what to do immediately after each interview is completed. Interviews should be transcribed – that is, reproduced verbatim for data analysis. Transcribing data is an important step in the process of analysis, but it is very time-consuming; transcribing a 60-minute interview can take up to 8 hours. Data analysis is discussed in Section 4.
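Automated speech-to-text tools can reduce this burden by producing a first-pass draft that the researcher then checks word by word against the recording. The following is a minimal sketch only, not a prescribed workflow: it assumes the open-source openai-whisper package (with ffmpeg) is installed, and the file name interview_01.mp3 is an illustrative placeholder.

```python
# Minimal first-pass transcription sketch using the open-source Whisper model.
# Assumptions: `pip install openai-whisper` (ffmpeg required) and a recording
# named "interview_01.mp3" in the working directory -- illustrative choices only.
import whisper

model = whisper.load_model("base")             # small, CPU-friendly model
result = model.transcribe("interview_01.mp3")  # returns text plus timed segments

# Write a draft transcript with rough timestamps so each passage can be
# checked against the audio and corrected to a true verbatim record.
with open("interview_01_draft.txt", "w", encoding="utf-8") as f:
    for segment in result["segments"]:
        f.write(f"[{segment['start']:6.1f}s] {segment['text'].strip()}\n")
```

An automated draft is a starting point, not a verbatim transcript: it can mis-hear names, dialect terms and overlapping speech; speakers must still be identified manually; and both the recording and the draft must be stored with the same security controls described above.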
Table 13.4. Examples of the four types of interviews
Interviews are the most common data collection technique in qualitative research. There are four main types of interviews; the one you choose will depend on your research question, aims and objectives. It is important to formulate open-ended interview questions that are understandable and easy for participants to answer. Key considerations in setting up the interview will enhance the quality of the data obtained and the experience of the interview for the participant and the researcher.
- Gill P, Stewart K, Treasure E, Chadwick B. Methods of data collection in qualitative research: interviews and focus groups. Br Dent J. 2008;204(6):291-295. doi:10.1038/bdj.2008.192
- DeJonckheere M, Vaughn LM. Semistructured interviewing in primary care research: a balance of relationship and rigour. Fam Med Community Health. 2019;7(2):e000057. doi:10.1136/fmch-2018-000057
- Nyanchoka L, Tudur-Smith C, Porcher R, Hren D. Key stakeholders' perspectives and experiences with defining, identifying and displaying gaps in health research: a qualitative study. BMJ Open. 2020;10(11):e039932. doi:10.1136/bmjopen-2020-039932
- Morgan DL, Ataie J, Carder P, Hoffman K. Introducing dyadic interviews as a method for collecting qualitative data. Qual Health Res. 2013;23(9):1276-1284. doi:10.1177/1049732313501889
- Picchi S, Bonapitacola C, Borghi E, et al. The narrative interview in therapeutic education. The diabetic patients' point of view. Acta Biomed. 2018;89(6-S):43-50. doi:10.23750/abm.v89i6-S.7488
- Stuij M, Elling A, Abma T. Negotiating exercise as medicine: narratives from people with type 2 diabetes. Health (London). 2021;25(1):86-102. doi:10.1177/1363459319851545
- Buchmann M, Wermeling M, Lucius-Hoene G, Himmel W. Experiences of food abstinence in patients with type 2 diabetes: a qualitative study. BMJ Open. 2016;6(1):e008907. doi:10.1136/bmjopen-2015-008907
- Jessee E. The life history interview. In: Handbook of Research Methods in Health Social Sciences. 2018:1-17.
- Sheftel A, Zembrzycki S. Only human: a reflection on the ethical and methodological challenges of working with "difficult" stories. Oral History Review. 2010;37(2):191-214. doi:10.1093/ohr/ohq050
- Harnisch H, Montgomery E. "What kept me going": a qualitative study of avoidant responses to war-related adversity and perpetration of violence by former forcibly recruited children and youth in the Acholi region of northern Uganda. Soc Sci Med. 2017;188:100-108. doi:10.1016/j.socscimed.2017.07.007
- Ruslin, Mashuri S, Rasak MSA, Alhabsyi M, Alhabsyi F, Syam H. Semi-structured interview: a methodological reflection on the development of a qualitative research instrument in educational studies. IOSR-JRME. 2022;12(1):22-29. doi:10.9790/7388-1201052229
- Chang T, Llanes M, Gold KJ, Fetters MD. Perspectives about and approaches to weight gain in pregnancy: a qualitative study of physicians and nurse midwives. BMC Pregnancy Childbirth. 2013;13:47. doi:10.1186/1471-2393-13-47
- DeJonckheere M, Robinson CH, Evans L, et al. Designing for clinical change: creating an intervention to implement new statin guidelines in a primary care clinic. JMIR Hum Factors. 2018;5(2):e19. doi:10.2196/humanfactors.9030
- Knott E, Rao AH, Summers K, Teeger C. Interviews in the social sciences. Nat Rev Methods Primers. 2022;2(1). doi:10.1038/s43586-022-00150-6
- Bergenholtz H, Missel M, Timm H. Talking about death and dying in a hospital setting – a qualitative study of the wishes for end-of-life conversations from the perspective of patients and spouses. BMC Palliat Care. 2020;19(1):168. doi:10.1186/s12904-020-00675-1
- Olorunsaiye CZ, Degge HM, Ubanyi TO, Achema TA, Yaya S. "It's like being involved in a car crash": teen pregnancy narratives of adolescents and young adults in Jos, Nigeria. Int Health. 2022;14(6):562-571. doi:10.1093/inthealth/ihab069
- Ayton DR, Barker AL, Peeters G, et al. Exploring patient-reported outcomes following percutaneous coronary intervention: a qualitative study. Health Expect. 2018;21(2):457-465. doi:10.1111/hex.12636
- World Health Organization. International Classification of Functioning, Disability and Health (ICF). https://www.who.int/standards/classifications/international-classification-of-functioning-disability-and-health
- Cuthbertson J, Rodriguez-Llanes JM, Robertson A, Archer F. Current and emerging disaster risks perceptions in Oceania: key stakeholders recommendations for disaster management and resilience building. Int J Environ Res Public Health. 2019;16(3). doi:10.3390/ijerph16030460
- Bannon SM, Grunberg VA, Reichman M, et al. Thematic analysis of dyadic coping in couples with young-onset dementia. JAMA Netw Open. 2021;4(4):e216111. doi:10.1001/jamanetworkopen.2021.6111
- McGranahan R, Jakaite Z, Edwards A, Rennick-Egglestone S, Slade M, Priebe S. Living with psychosis without mental health services: a narrative interview study. BMJ Open. 2021;11(7):e045661. doi:10.1136/bmjopen-2020-045661
- Gutiérrez-García AI, Solano-Ruíz C, Siles-González J, Perpiñá-Galvañ J. Life histories and lifelines: a methodological symbiosis for the study of female genital mutilation. Int J Qual Methods. 2021;20. doi:10.1177/16094069211040969
Qualitative Research – a practical guide for health and social care researchers and practitioners. Copyright © 2023 by Danielle Berkovic is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License, except where otherwise noted.