What Is Nonresponse Bias? | Definition & Example

Nonresponse bias happens when those unwilling or unable to take part in a research study are different from those who do.

In other words, this bias occurs when respondents and nonrespondents categorically differ in ways that impact the research. As a result, the sample is no longer representative of the population as a whole.

Example: Nonresponse bias
Suppose you are researching workload among managers in a supermarket chain. You decide to collect your data via a survey. Due to constraints on their time, managers with the largest workload are less likely to answer your survey questions.

This may lead to a biased sample, as those most likely to answer are the managers with less busy schedules. Consequently, your results are likely to show that manager workload in the supermarket chain is not very high—something that may not, in fact, be true.

What is nonresponse bias?

Nonresponse bias can occur when individuals who refuse to take part in a study, or who drop out before the study is completed, are systematically different from those who participate fully. Nonresponse prevents the researcher from collecting data for all units in the sample. It is a common source of error, particularly in survey research.

Causes of nonresponse include:

  • Poor survey design or errors in data collection
  • Wrong target audience (e.g., asking residents of an elderly home about participation in extreme sports)
  • Asking questions likely to be skipped (e.g., sensitive questions about drugs, sexual behavior, or infidelity)
  • Inability to contact potential respondents (e.g., when your sample includes individuals who don’t have a steady home address)
  • Conducting multiple waves of data collection (e.g., asking the same respondents to fill in the same survey at different points in time)
  • Not taking into account linguistic or technical difficulties (e.g., a language barrier)

Types of nonresponse

Usually, a distinction is made between two types of nonresponse:

  1. Unit nonresponse encompasses instances where all data for a sampled unit is missing—i.e., some sampled individuals didn’t complete the survey at all.
  2. Item nonresponse occurs when only part of the data could not be obtained—i.e., some respondents skipped particular survey questions (both types are illustrated in the sketch below).
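
As a rough illustration (not part of the original example), the Python sketch below shows how the two types could be told apart in a survey dataset; the respondent IDs, question columns, and values are hypothetical.

    import numpy as np
    import pandas as pd

    # Hypothetical survey data: NaN marks a missing answer.
    survey = pd.DataFrame({
        "respondent_id": [1, 2, 3, 4, 5],
        "q1": [4, np.nan, 5, np.nan, 3],
        "q2": [2, np.nan, 4, 3, np.nan],
        "q3": [5, np.nan, 3, 4, 1],
    })
    answers = survey.drop(columns="respondent_id")

    # Unit nonresponse: every item is missing for the sampled unit.
    unit_nonresponse = answers.isna().all(axis=1)

    # Item nonresponse: some, but not all, items are missing.
    item_nonresponse = answers.isna().any(axis=1) & ~unit_nonresponse

    print(survey.loc[unit_nonresponse, "respondent_id"].tolist())  # [2]
    print(survey.loc[item_nonresponse, "respondent_id"].tolist())  # [4, 5]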

It is important to keep in mind that nonresponse bias is always associated with a specific variable (like manager workload in the previous example). Respondents and nonrespondents differ with respect to that variable (workload) specifically.

Because the managers’ decision to participate (or not) in the survey relates to their workload, the nonresponse is not random: respondents and nonrespondents differ in a way that is significant to the research.

Components of nonresponse

Nonresponse bias consists of two components:

  • Nonresponse rate
  • Differences between respondents and nonrespondents

The amount of bias depends on both the nonresponse rate and the extent to which nonrespondents differ from respondents on the variable(s) of interest. This means that a high level of nonresponse alone does not necessarily lead to research bias, as nonresponse can also be due to random error.
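
One common approximation in the survey-methodology literature treats the bias of the respondent mean as roughly the nonresponse rate multiplied by the difference between the respondent and nonrespondent means. The short Python sketch below applies this to the supermarket example; the workload figures are made up for illustration.

    def nonresponse_bias(respondent_mean, nonrespondent_mean, nonresponse_rate):
        """Approximate bias of the respondent mean relative to the full sample."""
        return nonresponse_rate * (respondent_mean - nonrespondent_mean)

    # Hypothetical average workloads (hours per week) for the supermarket example:
    bias = nonresponse_bias(
        respondent_mean=45,     # less busy managers who answered
        nonrespondent_mean=60,  # busier managers who did not answer
        nonresponse_rate=0.30,  # 30% of sampled managers did not respond
    )
    print(bias)  # -4.5 -> the survey understates average workload by about 4.5 hours

    # Same nonresponse rate, but no difference between the two groups
    # (random nonresponse): the bias disappears.
    print(nonresponse_bias(45, 45, 0.30))  # 0.0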

Example: When does nonresponse lead to bias?
Suppose you are running a survey on information literacy. You notice that some people in your sample miss the email that contains the link to the survey. As a result, they never get the chance to answer your questions.

Does this mean that nonresponse bias is present in your research?

It may, but only if:

  • The individuals who missed the survey share a common characteristic that differentiates them from those who did receive the survey and filled it in

         and

  • This common characteristic is directly relevant to your research question

If nonrespondents missed your email due to poor computer skills, then they form a distinct group with a unifying characteristic (poor computer skills), and this characteristic is directly relevant to your research topic (information literacy).

If nonrespondents missed your email simply because it ended up in their spam folder, then this is due to random error. In this instance, nonrespondents don’t share any characteristics that set them apart from respondents.

Response rate and nonresponse bias

The response rate, or the percentage of sampled units who filled in a survey, can indicate the amount of nonresponse present in your data. For example, a survey with a 70% response rate has a 30% nonresponse rate.

The response rate is often used to estimate the magnitude of nonresponse bias. The assumption is that the higher the response rate, the lower the nonresponse bias.

However, keep in mind that a low response rate (or high nonresponse rate) is only an indication of the potential for nonresponse bias. Nonresponse bias may be low even when the response rate is low, provided that the nonresponse is random. This is the case when the differences between respondents and nonrespondents on the variable(s) of interest are minor.
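
The short simulation below (a hypothetical illustration, not from the article) makes this concrete: with the same low response rate of roughly 20%, random nonresponse leaves the estimated mean close to the true value, while nonresponse that depends on the variable of interest pulls the estimate away from it.

    import numpy as np

    rng = np.random.default_rng(0)
    population = rng.normal(loc=50, scale=10, size=100_000)  # true mean ≈ 50

    # Random nonresponse: only ~20% respond, but who responds is unrelated
    # to the value of the variable.
    random_respondents = population[rng.random(population.size) < 0.20]

    # Systematic nonresponse: the same ~20% response rate overall, but people
    # with higher values (e.g., busier managers) are less likely to respond.
    respond_prob = np.clip(0.5 - 0.01 * (population - 50), 0.02, 0.98)
    respond_prob *= 0.20 / respond_prob.mean()  # scale overall rate to ~20%
    systematic_respondents = population[rng.random(population.size) < respond_prob]

    print(round(population.mean(), 2))              # ≈ 50 (true value)
    print(round(random_respondents.mean(), 2))      # ≈ 50 -> little bias
    print(round(systematic_respondents.mean(), 2))  # clearly below 50 -> biased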

Tip
As a rule of thumb, the lower the response rate, the greater the likelihood of nonresponse bias. Nonresponse bias becomes an issue when the response rate falls below 70%.

Why is nonresponse bias a problem?

Nonresponse bias can lead to several issues:

  • Because the obtained sample size is smaller than the intended sample size, nonresponse increases sampling error.
  • Results are not representative of the target population, as respondents are systematically different from nonrespondents.
  • Researchers must devise more elaborate or time-intensive data collection procedures to achieve the requisite response rate and sample size. This, in turn, increases the cost of research.

Note
Keep in mind that nonresponse bias is not the opposite of response bias. Response bias refers to a number of factors that may lead survey respondents to answer untruthfully.

Nonresponse bias example

Nonresponse bias is a common source of bias in research, especially in studies related to health.

Example: Nonresponse bias in health surveys
In a case-control study assessing the link between smoking and heart disease, the selected sample is invited to participate by filling in a survey sent via mail.

Unfortunately, nonresponse is higher among people with heart disease, leading to an underestimation of the association between smoking and heart disease. This is a common problem in health surveys.

Studies generally show that respondents report better health outcomes and more positive health-related behaviors than nonrespondents. They often report lower alcohol consumption, less risky sexual behavior, more physical activity, etc.

This suggests that people with poorer health tend to avoid participating in health surveys. As a result, nonresponse bias can affect the results.

How to minimize nonresponse bias

It’s possible to minimize nonresponse by designing the survey in a way that obtains the highest possible response rate. There are several steps you can take to achieve this:

During data collection

To minimize nonresponse bias during data collection, first try to identify individuals in the sample who are less likely to participate in your survey. These could be individuals who are hard to reach or hard to motivate.

It’s a good idea to prepare strategies that may incentivize their cooperation. Some ideas could include:

  • Offering incentives, monetary or otherwise (e.g., gifts, donations, raffles). Incentives motivate respondents and make them feel that the survey is worth their time.
  • Considering how you contact sample units and what is best suited to your research. Before you launch your survey, think about the total number of contacts you need to have, the timing of the first contact, the interval between contacts, etc. For example, personal contact through face-to-face survey interviews generally increases response rates but may not work for all potential respondents.
  • Ensuring respondents’ anonymity and addressing ethical considerations. Surveys that ask for personal or sensitive information should include instructions that put respondents at ease, reassuring them that their answers will be kept strictly confidential.
  • Keeping your data collection flexible. Consider using multiple modes of data collection, such as online and offline. If data collection is done in person, participants should be able to schedule the appointment whenever convenient for them.
  • Sending reminders. Sending a few reminder emails during your data collection period is an effective way to gather more responses. For example, you can send your first reminder halfway through the data collection period and a second near the end.
  • Making participation mandatory instead of voluntary whenever possible. For example, asking students to fill in a survey during class time is more effective than inviting them to fill it in via a letter sent to their home address.

During data analysis

During data analysis, the goal is to identify the magnitude of nonresponse bias. Luckily, the nonresponse rate is easy to calculate. However, determining whether respondents and nonrespondents differ on a characteristic that is relevant to the research is not so easy.

There are a number of ways you can approach this problem, including:

  • Comparing early respondents to late respondents (see the sketch after this list). Late respondents often resemble nonrespondents in terms of unifying characteristics, so you can use their answers to infer the characteristics of nonrespondents.
  • Using information that is already available for the entire survey sample (respondents and nonrespondents). Relevant information may already be included in the sampling frame itself—for example, sociodemographic characteristics like age or gender, employment data, or information about the duration of membership in the case of a survey of members of a club or sports team. The prerequisite here is that the collected information is related to the survey variables of interest and can be linked to participation behavior.
  • Using follow-up surveys to collect at least some key variables, either from all nonrespondents or from a randomly selected sample of them. The drawback here is the additional cost of the survey.
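
As an illustration, the Python sketch below combines the first two approaches: comparing early and late respondents on the variable of interest, and comparing respondents with nonrespondents on a characteristic recorded in the sampling frame. The file name and all column names are assumptions for this sketch.

    import pandas as pd

    # Hypothetical sampling frame with one row per sampled unit; the file and
    # column names (unit_id, age, responded, response_day, workload_score)
    # are assumptions for this sketch.
    frame = pd.read_csv("sampling_frame.csv")

    respondents = frame[frame["responded"] == 1]

    # (1) Early vs. late respondents on the variable of interest: a clear gap
    # suggests that nonrespondents may differ from respondents as well.
    median_day = respondents["response_day"].median()
    early = respondents[respondents["response_day"] <= median_day]
    late = respondents[respondents["response_day"] > median_day]
    print(early["workload_score"].mean(), late["workload_score"].mean())

    # (2) Respondents vs. nonrespondents on a frame variable known for everyone.
    print(frame.groupby("responded")["age"].describe())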


Frequently asked questions

What is the difference between response and nonresponse bias?

Response bias is a general term used to describe a number of different conditions or factors that cue respondents to provide inaccurate or false answers during surveys or interviews. These factors range from the interviewer’s perceived social position or appearance to the phrasing of questions in surveys.

Nonresponse bias occurs when the people who complete a survey are systematically different from those who do not, in ways that are relevant to the research topic. Nonresponse can happen because people are either not willing or not able to participate.

Why is nonresponse bias a problem for researchers?

Nonresponse bias occurs when those who opt out of a survey are systematically different from those who complete it, in ways that are significant for the research study.

Because of this, the obtained sample is not what the researchers aimed for and is not representative of the population. This is a problem, as it can invalidate the results.
