How to Know the Difference Between Error and Bias
Research experts have always emphasized the importance of obtaining more accurate survey results by eliminating error and bias. However, many surveyors and research experts do not have a clear understanding of the different types of survey error to begin with! Terms like response bias and nonresponse error get thrown around the boardroom without a full comprehension of their meaning. That is why we have decided to go over the nature of error and bias, as well as their impact on surveys.
Defining Error and Bias
In survey research, error can be defined as any difference between the average values obtained through a study and the true average values of the population being targeted. Simply put, error describes how far the results of a study missed the mark, encompassing all the flaws in a research study. Suppose, for example, that your study found 20% of people name chocolate as their favourite ice cream flavour, when in actuality the figure is 25%. This difference could stem from a whole range of different biases and errors, but the total level of error in your study would be 5 percentage points.
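The ice cream example above can be sketched in a few lines of code. This is just an illustration of the definition, with total error measured as the absolute gap between the observed and true values; the function name is our own, not standard terminology.

```python
def total_error(observed_pct: float, true_pct: float) -> float:
    """Total survey error, in percentage points: how far the
    study's result missed the true population value."""
    return abs(observed_pct - true_pct)

# Study reported 20% chose chocolate; the true population figure is 25%.
print(total_error(20.0, 25.0))  # 5.0 percentage points
```

Note that this total lumps together every source of error, systematic or random; it says nothing about which kind produced the gap.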
Whereas error makes up all flaws in a study’s results, bias refers only to error that is systematic in nature. Research is biased when data is gathered in a way that makes its value systematically different from the true value of the population of interest. Survey research includes a wide spectrum of bias types, including researcher bias, survey bias, respondent bias, and nonresponse bias. Whether it is in the selection process, the way questions are written, or the respondents’ desire to answer in a certain way, bias can be found in almost any survey.
For example, including a question like “Do you drive recklessly?” in a public safety survey would create systematic error and therefore bias. The error is systematic because many respondents would answer the question falsely in one direction, selecting “No” even when they are in fact reckless drivers.
The Effect of Random Sampling Error and Bias on Research
But what about error that is not systematic in nature? This is called random sampling error, and it arises because samples are an imperfect representation of the population of interest. Unfortunately, no matter how carefully you select your sample or how many people complete your survey, there will always be a percentage of error that has nothing to do with bias. This is unavoidable in the world of probability: as long as your survey is not a census (collecting responses from every member of the population), you cannot be certain that the values resulting from your sample match the true values of the population.
However, random sampling error can be easily measured through the use of statistics. Whenever a researcher conducts a probability survey they must include a margin of error and a confidence level. This allows any person to understand just how much effect random sampling error could have on a study’s results.
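As a rough illustration of how a margin of error is typically reported, here is a sketch using the common textbook formula for a proportion from a simple random sample (the normal approximation). The z-scores and the sample numbers below are our own illustrative assumptions, not values from this article.

```python
import math

# Standard z-scores for common confidence levels (normal approximation).
Z_SCORES = {0.90: 1.645, 0.95: 1.96, 0.99: 2.576}

def margin_of_error(p: float, n: int, confidence: float = 0.95) -> float:
    """Margin of error (as a proportion) for an observed proportion p
    from a simple random sample of size n."""
    z = Z_SCORES[confidence]
    return z * math.sqrt(p * (1 - p) / n)

# Example: 500 respondents, 20% chose chocolate, 95% confidence level.
moe = margin_of_error(0.20, 500)
print(f"+/- {moe * 100:.1f} percentage points")  # +/- 3.5 percentage points
```

A researcher would then report the result as 20% ± 3.5 percentage points at a 95% confidence level, which is exactly the kind of statement that lets readers judge how much random sampling error could affect the findings. Note that none of this accounts for bias: the formula assumes the only flaw in the study is random sampling error.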
Bias, on the other hand, cannot be measured using statistics due to the fact that it comes from the research process itself. Because of its systematic nature, bias slants the data in an artificial direction that will provide false information to the researcher. For this reason, eliminating bias should be the number one priority of all researchers. Over the next few articles, we will discuss the several different forms of bias and how to avoid them in your surveys.
Check out the next article in our discussion on error and bias: How to Avoid Nonresponse Error