Calculating the Right Survey Sample Size

Calculating the right sample size is crucial to gaining accurate information! In fact, your survey’s confidence level and margin of error almost solely depend on the number of responses you receive. That’s why FluidSurveys designed its very own Survey Sample Size Calculator. But before you check it out, I wanted to give you a quick look at how your sample size can affect your results.

Explaining Confidence Levels and Margins of Error

The first thing to understand is the difference between confidence levels and margins of error. Simply put, a confidence level describes how sure you can be that your results are accurate, whereas the margin of error shows the range your survey results would fall within if the confidence level holds true. A standard survey will usually have a confidence level of 95% and a margin of error of 5%.

Here is an example of a confidence level and margin of error at work. Let’s say we own a magazine with 1000 subscribers and we want to measure their satisfaction. After plugging our information into the Survey Sample Size Calculator, we know that a sample size of 278 people gives us a confidence level of 95% with a margin of error of 5%. Our 95% confidence level means that 19 times out of 20, the results of this survey would land within our margin of error. Our 5% margin of error means that if we surveyed all 1000 subscribers, the results could differ from our sample’s results by as much as plus or minus 5%.

For the purpose of this example, let’s say we asked our respondents to rate their satisfaction with our magazine on a scale from 0 to 10, and it resulted in a final average score of 8.6. With our allotted margin of error and confidence level, we can be 95% certain that if we surveyed all 1000 subscribers, the average score would fall between 8.1 and 9.1.
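
If you want to see that arithmetic spelled out, here is a quick illustrative snippet in Python (the variable names are ours, not part of the calculator):

    average_score = 8.6      # observed average on the 0-10 scale
    margin_of_error = 0.05   # 5% of the 10-point scale, i.e. half a point
    scale_range = 10

    half_width = margin_of_error * scale_range
    print(average_score - half_width, average_score + half_width)  # 8.1 9.1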

What Happens When Your Sample Size is too Low?

Now that we know how both margins of error and confidence levels affect the accuracy of results, let’s take a look at what happens when the sample size changes. The lower your sample size, the higher your margin of error and the lower your confidence level. This means your data becomes less reliable.

If we continue with our example and decide to lower our number of responses to 158, we’ll see a significant drop in our confidence level. Now our level of confidence has dropped to 90%, with a margin of error of 6%. So with the same satisfaction score of 8.6, we’d now have only a 9 in 10 chance of our results falling between 8.0 and 9.2 if we surveyed all 1000 subscribers.
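
If you would like to see where that 6% comes from, here is a rough sketch of the reverse calculation in Python, assuming a 50% distribution, the 90% z-score of 1.645, and the finite population correction explained later in this post (the function name is our own):

    import math

    def margin_of_error(sample, population, z=1.645, distribution=0.5):
        """Approximate margin of error for a given sample size (z = 1.645 for 90% confidence)."""
        standard_error = math.sqrt(distribution * (1 - distribution) / sample)
        fpc = math.sqrt((population - sample) / (population - 1))  # finite population correction
        return z * standard_error * fpc

    print(round(margin_of_error(158, 1000), 2))  # 0.06, i.e. 6%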

What if Your Sample Size is too High?

Theoretically speaking, a sample size can never be too high. In practice, though, incentivizing or convincing extra respondents to take part can be expensive and, from a statistical perspective, ultimately frivolous. In some surveys, a high confidence level and low margin of error are easier to achieve based on the availability and size of your target audience. But in most surveys, especially those involving the general public, a high number of responses can be difficult to achieve.

For these reasons, the industry standard is a confidence level of 95% with a margin of error of either 5% or 2.5%. Attempting to go beyond this level of accuracy can be unrealistic and is ultimately a less beneficial priority than making sure your respondents are valid for your survey and are giving truthful responses.

How does the Calculator Work?

So you’re probably wondering how the calculator determines what your sample size should be. Well, all you need is your desired confidence level and margin of error, as well as the number of people that make up your total population. After plugging these three numbers into the Survey Sample Size Calculator, it runs two sample size formulas for you and comes up with the appropriate number of responses. But just so you know the math behind it, here are the formulas used to calculate sample size:

  1. Sample Size Calculation:
    Sample Size = (Distribution of 50%) / ((Margin of Error % / Confidence Level Score)²)
  2. Finite Population Correction:
    True Sample = (Sample Size x Population) / (Sample Size + Population – 1)

Two things that may need explanation are the confidence level score and the distribution. The confidence level score is the z-score, the number of standard deviations that goes along with your confidence level. In the case of a confidence level of 95%, the confidence level score is 1.96. Distribution, on the other hand, reflects how skewed the respondents are expected to be on a topic. In the survey world it is almost always safest to stick with a 50% distribution, which is the most conservative choice.
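
To make the two formulas concrete, here is a minimal Python sketch of the same calculation (the function and variable names are our own illustration, not the calculator’s actual code):

    import math

    # Z-scores (the "confidence level scores") for common confidence levels
    Z_SCORES = {0.90: 1.645, 0.95: 1.96, 0.99: 2.576}

    def sample_size(population, confidence=0.95, margin_of_error=0.05, distribution=0.5):
        """Minimum number of responses needed, using the two formulas above."""
        z = Z_SCORES[confidence]
        # 1. Sample size calculation (for a very large population)
        unadjusted = (distribution * (1 - distribution)) / ((margin_of_error / z) ** 2)
        # 2. Finite population correction
        true_sample = (unadjusted * population) / (unadjusted + population - 1)
        return math.ceil(true_sample)  # round up to the nearest whole person

    print(sample_size(1000))  # 278, matching the magazine example above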

Now that we’ve cleared that up, I know you’re as excited as I am to do this formula by hand for our example above. So let’s do it! To cross-check my work, plug our magazine company’s three values into the Survey Sample Size Calculator. Remember, we have a population of 1000, a desired confidence level of 95%, and a margin of error of 5%:

  1. Sample Size = (0.5 x (1 - 0.5)) / ((0.05 / 1.96)²)
    Sample Size = 0.25 / ((0.02551…)²)
    Sample Size = 0.25 / 0.00065077…
    Sample Size = 384.16
  2. True Sample = (384.16 x 1000) / (384.16 + 1000 – 1)
    True Sample = 384160 / 1383.16
    True Sample = 277.74…

When we round our True Sample Size up to the nearest whole person, we get our value of 278 people. Therefore, in order to have a 95% confidence level with a 5% margin of error in our results, we would need to survey at least 278 of our 1000 subscribers.

Check Out Our Survey Sample Size Calculator Right Now!

Like we mentioned earlier, you don’t need to go through this whole formula yourself. Simply click here or go through the FluidSurveys website’s resources to enter our Survey Sample Size Calculator. You’ll be able to determine your desired sample size in a matter of seconds!

Ready to get started, but don’t have a FluidSurveys account? Go ahead and set up your own account by visiting our pricing page!


31 Comments

  • Matt says:

    The true sample size equation should be written as: True Sample = Sample Size X Population / (Sample Size + Population – 1) based on your example.

  • Lisa says:

    Very helpful for my work. Thanks!

  • Liz says:

    Hi – in your example (satisfaction on a scale of 1-10) is the average of 8.6 a weighted average?

    • RickPenwarden says:

      Hi Liz!

      In the case of my example, the average score is not weighted. It is a number I came up with to show how the different sample sizes would affect its accuracy. When making probability calculations, weighting is usually frowned upon, the reason being that you are giving some responses (or data points) more power than others in order to better represent their demographic or segment. This can skew results in unpredictable ways, making probability calculations less reliable.

      Hope this helps!

  • hauns says:

    Hi Rick, I read somewhere that if you have 14 questions on your survey, then it’s 10 x 14 = 140 people required. Is this correct or total nonsense? Thanks in advance.

    • RickPenwarden says:

      Hi Hauns,

      I am sorry to say that the ’10 times the number of questions in a survey’ is not a proper measurement of your sample size. The number of questions has nothing to do with selecting a sample size that will achieve your desired level of confidence and margin of error. In fact, when you calculate a sample size, the resulting number is how many responses EACH question needs.

      So, instead of building a target sample size based on the length of your survey, focus on how large your population is.

      Example: You’re surveying the attendees of a hockey game, let’s say a grand total of 30,000 people, and you want a margin of error of 5% with a confidence level of 95%. The resulting sample size is 380, meaning each survey question should receive a minimum of 380 responses! This number will never change based on the number of questions in the survey.

      So in short, the 10 times formula is total nonsense. If you have any trouble calculating your sample size visit our sample size calculator, here’s the link:

      http://fluidsurveys.com/survey-sample-size-calculator/

      Hope this helped!

  • aj says:

    What margin of error and confidence level should I use in order to come up with a product sampling scheme where the product population is more than a million?

    • RickPenwarden says:

      Hi aj,
      Your desired margin of error and confidence level have nothing to do with your population size. The margin of error and confidence level represent how sure you would like your results to be. The industry standard for marketing research is a 95% confidence level with a margin of error of 5%. But if you want to be more precise, you can shrink your margin of error to 2.5%.
      The only thing to remember is the higher your confidence level and the lower your margin of error the larger your sample size must be. So if you went with the standard your minimum sample size would be 385 people. With your margin of error reduced to 2.5% your sample size would change to a minimum of 1535 people.
      Now all you have to do is choose whether getting that lower margin of error is worth the resources it will take to sample the extra people.
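
      If you want to see how the sample size grows as the margin of error shrinks, here is a rough, purely illustrative sketch using the same two formulas from the post:

        import math

        def sample_size(population, margin_of_error, z=1.96, p=0.5):
            n0 = p * (1 - p) / (margin_of_error / z) ** 2              # sample size calculation
            return math.ceil(n0 * population / (n0 + population - 1))  # finite population correction

        for moe in (0.05, 0.025, 0.01):
            print(moe, sample_size(1_000_000, moe))
        # prints 385, 1535 and 9513 respectively
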
      Hope this helps!

  • ShErlyn Into Binayao says:

    Good PM. I would just like to ask what to do when the sample size is already fixed for a certain barangay, for example.

    • RickPenwarden says:

      Hi ShErlyn,

      If your sample size is already fixed, then you can go ahead and collect responses until you reach the number you set.
      What I think you mean is: what do you do if your population (target audience) is a fixed town or barangay or another predetermined group of people? In this case, just plug the population of your barangay into the population section of the calculator and choose your desired confidence level and margin of error.
      Remember, your population is the total number of viable respondents and your sample size is the number of responses you’ve collected for the survey.

      Hope this information helps!

  • Sanks says:

    Does this work only for random sampling, or does it also work for people entering an online survey? Say, for example, I sent an online satisfaction survey to my department, which contains 100 staff; is it alright to use this calculator to determine the exact sample required so that it represents the population?

    • RickPenwarden says:

      Hi Sanks!
      The calculator works perfectly for your staff example. In this case your population would be your 100 staff. If you send all 100 staff a survey invite, they are all in your potential sample. So with a confidence level of 95% and a margin of error of 5%, your target sample size would be 80 people.
      Hope this helps!

      • Shanks says:

        Thanks for your reply. Can we use this calculator for non-random sampling? Say I have the same 100 staff and it is up to them to take the survey; what is the sample size I should be looking for? If not, then what calculation should I use to get a sample size that would be representative of the population?

        • RickPenwarden says:

          Hey Shanks!

          In your instance, you’re sending a survey to everyone in your population (all 100 staff members receive an invite). That gives them all an equal opportunity to be in your sample pool.

          Random sampling is used when a population is too big and it is hard to reach everyone, so you randomly choose people out of the large population to participate, effectively giving everyone an equal chance of becoming part of the data. For example, random digit dialing across the country would be random sampling.

          Though your case isn’t technically random sampling, since every person has a chance to answer the survey, your project still falls under probability sampling, meaning the calculator can still be used. Refer to my previous reply for the formula requiring 80 responses 🙂

          What you should look out for are the different ways your sampling style could bias your responses through nonresponse error, meaning the people who choose to answer your survey have different attitudes, opinions and behaviours than those who don’t reply. Here is a link to the article I wrote on this type of bias:

          http://fluidsurveys.com/university/how-to-avoid-nonresponse-error/

          Hope this helps!

  • Nida Madiha says:

    Hi Rick!
    My name is Nida, and currently I am conducting research for my bachelor’s degree. I found your page very helpful for my research. Anyhow, I have two questions about the population number in my research.

    My questions are:
    1) What if the population number changes over time?
    2) Is it better to go with the population number from the time I first made my observations at my research site?

    Thank you in advance. Looking forward to your response!

    Nida.

    • RickPenwarden says:

      Hi Nida,
      Need help with your homework? No problem!
      Your question is interesting, and since I don’t know the particulars of your study I can only give a blanket answer. Your population is defined by the number of potential respondents in your target group, as it stands at the time you collect your responses.
      So let’s say I conducted a staff survey in 2012 and had a population of 65 people, but in 2013 when the report came out our population was 85. What do I use in my calculations? Well, the population in the research equation would remain 65, with the caveat of the date the study was taken. Remember, the extra 20 staff members never had a chance to be in the study and therefore were not potential respondents in your target group.
      The important thing for you to do is identify in your presentations and reports when the data was collected.
      Hope this helps!

      • Nida Madiha says:

        Thanks a lot for the fast answer. I have one more question, though: on what occasion should we use a particular confidence level? Somehow, I am thinking of going with a 95% confidence level. Is it because 95% is the most used, or is there another reason?

        PS: To let you know, I read “Research Design Explained” by Mitchell & Jolley (2013); they use a 95% confidence level from Amburg’s table.

        • RickPenwarden says:

          Hi Nida,
          95% is an industry standard in most research studies. Many science experiments use 99% confidence because they want to be more sure of their results. If you have no specific reason not to, use 95% and allow your margin of error to fluctuate based on your sample size.

  • Wisdom says:

    Hi Rick,
    My name is Wisdom.
    My population is 45. Is it not advisable to use the entire population as the sample size, since the population is very small?

    • RickPenwarden says:

      Hi Wisdom,
      The more of your population that respond to your survey, the more confident you can be in your findings. If the entire population responds to your survey, you have a census survey, which means you can be 100% certain that the information you collected is representative of your population. So it is actually best to survey everyone if you can.
      The smaller your population, the larger the portion of respondents you’ll need to reach your desired confidence level. This is due to the Finite Population Correction formula.
      The only reason not to use your entire population in your sample size would be due to your own lack of resources or inability to reach potential respondents.
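
      To put a number on it, here is a quick illustrative check using the same two formulas from the post:

        import math

        def sample_size(population, margin_of_error=0.05, z=1.96, p=0.5):
            n0 = p * (1 - p) / (margin_of_error / z) ** 2               # sample size calculation
            return math.ceil(n0 * population / (n0 + population - 1))   # finite population correction

        print(sample_size(45))  # 41 of the 45 people at 95% / 5%, so you may as well survey everyone
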
      Hope this helps!

  • Ann says:

    Hi Rick,
    I’m Ann. I would like to know how to calculate a sample size using a confidence level and a set margin of error. I know the population is approximately 400.

  • LUCY says:

    Hello Rick,
    How can I use a scientific calculator to get a sample size? I’m not sure how to put in the figures.

    • RickPenwarden says:

      Hi LUCY!

      It’s always great to check your work and not just blindly trust a survey sample size calculator you find on the internet. But how do you carry out the calculation on your own? Well that is what the formulas in this blog are for:

      Sample Size Calculation:
      Sample Size = (Distribution of 50%) / ((Margin of Error % / Confidence Level Score)²)

      Finite Population Correction:
      True Sample = (Sample Size x Population) / (Sample Size + Population – 1)

      So first carry out the sample size calculation and then use that number in the finite population correction. Remember that the margin of error and distribution percentages take the form of decimals when you plug them into the formulas (50% = 0.5 and 5% = 0.05). As for the confidence level score, this boils down to the z-score (standard deviation value) that corresponds with your desired confidence level (95% confidence level = 1.96).

      Finally, you are almost guaranteed to get a long string of decimal places on your resulting number. Just round this up to the closest whole number!
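
      For example, with a population of 1000, a 95% confidence level (1.96) and a 5% margin of error, the arithmetic looks like this (shown here as Python code, but the same two steps work on any scientific calculator):

        step1 = (0.5 * (1 - 0.5)) / (0.05 / 1.96) ** 2   # sample size calculation: about 384.16
        step2 = (step1 * 1000) / (step1 + 1000 - 1)      # finite population correction: about 277.74
        print(step1, step2)                              # round 277.74 up to 278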

      Hope this helps!

  • Dragan Kljujic says:

    Hi Rick,

    Should the formula for calculating the sample size include the expected response rate? If I am not wrong, the existing formula implies a 100% response rate! What if my expected response rate is 10%? I have a population N = 33,500 and my calculated sample size is 380 (confidence level of 95% with a margin of error of 5%). If my expected response rate is 10%, should I send an email invitation to 3800 persons to make sure that I will have 380 responses? Another question is about the randomness of my sample. I can randomly choose the 3800 potential participants, but my sample still will not be random due to non-response bias. Is there any way to make sure that the sample is really random?

    • RickPenwarden says:

      Hi Dragan Kljujic!

      Wow this is a two parter:

      1) You’re right! The sample size calculated refers to the number of completed responses you need to reach your desired confidence level and margin of error, so it does not include any nonresponses. You’ll need to ensure you receive 380 completed responses to reach your probability goal, which may mean, like you said, sending 3800 survey invites (see the short arithmetic sketch below).

      2) Having a list of contactable potential respondents puts you at a major advantage in drawing a random sample. Like you said, you can randomly select your 3800 survey recipients to remain a probability sample, or you can send a survey to every single person in your population (it may be more expensive, but you will gather more data and give everyone an equal chance to participate).

      Unfortunately, non-response bias is a source of systematic error that is almost impossible to eliminate completely. But there are some tricks to limit its effect on your results. Here’s an important one:

      -Send your survey invite and reminder emails at different times and days of the week. Chances are those who missed the first email will miss the reminder if it is sent at the same time in the week. I go into it in more detail in this article.

      For more tips on combating nonresponse error, check out this blog I created a while ago:

      Also, many researchers attempt to curb the effects of nonresponse bias by using weighting, but this will call into question the ability to call your study probability-based. Unfortunately, the only way to eliminate nonresponse bias completely would be to have a 100% response rate.
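
      And just to spell out the response-rate arithmetic from point 1, here is a rough sketch (simple division, purely for illustration):

        import math

        completed_responses_needed = 380   # from the sample size calculation (N = 33,500, 95% / 5%)
        expected_response_rate = 0.10      # 10%

        invites_to_send = math.ceil(completed_responses_needed / expected_response_rate)
        print(invites_to_send)  # 3800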

      Hope this helps!

    • RickPenwarden says:

      Hey!

      Just realized my links are broken! Here they are again:

      First -Sending survey email invites at the right time:
      http://fluidsurveys.com/university/its-all-about-timing-when-to-send-your-survey-email-invites/

      Second -How to avoid nonresponse error:
      http://fluidsurveys.com/university/how-to-avoid-nonresponse-error/

  • Παναγιώτης Σοφιανόπουλος says:

    Hello Rick, I’m Panos.
    What happens if our population is not humans but objects?
    I mean, if I have a total of 1000 balls in a box and 900 of them are white and 100 of them are black (black are 10%), and I want to randomly pick the smallest possible sample that still has the characteristics of the total (90% of the balls white), what must I do with the confidence level and margin of error?
    Must I use a low confidence level and a high margin of error?
    Thanks

    • RickPenwarden says:

      Hello Panos!

      From a probability perspective, whether sampling objects or humans, there is no difference in sampling technique. Your problem of having two distinct groups in your sample (white and black balls) is akin to a survey sampling issue where you want to ensure each demographic is properly represented. For example, you know half your population is female and half is male, so you want to ensure your sample, though smaller than the population, will also hold this 50/50 characteristic.

      The short answer to your question is that your confidence level and margin of error should not change based on descriptive differences within your sample and population. So with a confidence level of 95%, a margin of error of 5%, and a population of 1000 balls, you would come to a desired sample size of 278 balls.

      Now here is the tricky part: for your sample to properly mirror the population, 10% of the sample, or 27.8 (let’s round to 28) of the balls, should be black. Researchers use several different techniques to give all groups proper representation in a sample (see the short sketch after this list):

      -First is quota sampling: only allowing the first 250 white balls and the first 28 black balls into the sample, and then tossing any extras selected out of the study.

      -Second is conducting two separate surveys, or in this case putting the 900 white balls in a different bag than the 100 black balls. If you really care about comparing the two groups, you’ll have a random sample of each and can compare their differences. Unfortunately, if you take this approach you will have difficulty measuring anything but their differences.

      -Third is conducting the selection completely randomly; the larger your sample size, the more likely your sample will be representative of the population. However, if there are any discrepancies, you can grant more or less weight to the groups that are over- or under-represented.
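
      Here is the proportional quota from above worked out as a short, purely illustrative sketch (the variable names are ours):

        sample_size = 278                  # from the calculator (population of 1000, 95% / 5%)
        population = {"white": 900, "black": 100}
        total = sum(population.values())

        # Proportional allocation: each colour gets the same share of the sample as of the population.
        quotas = {colour: round(sample_size * count / total) for colour, count in population.items()}
        print(quotas)  # {'white': 250, 'black': 28}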

      The important thing to remember: if you are using quotas or weighting, your survey’s probability can be called into question. This is due to the fact that quotas limit the equal chance of all potential balls being selected, and weighting overvalues or undervalues individual balls on the assumption that a descriptor preselected by the researcher (in this case colour) has a significant effect on what is being studied.

      Something you may want to look into is nonresponse error. This describes the effect created by the difference between a sample group’s makeup and its target population’s makeup. Researchers have several tricks to counteract some of the effects of this bias during their data collection process, but are still sometimes forced to rely on weighting and other statistical techniques on the back end to combat nonresponse error. Here’s an article I wrote on it to get you started:

      http://fluidsurveys.com/university/how-to-avoid-nonresponse-error/

      Hope this all helps!
