How many completions of a survey is enough?

Helen Taylor

I often get asked this question by clients when we run a survey as part of a piece of user research. 

My response is often: ‘it depends’. 

But there are usually 2 thoughts on my mind. Firstly, at what point will I stop learning new things when additional people complete the survey? And secondly, how many completions will I need to be able to make meaningful comparisons within the dataset?

With a survey made up of quantitative questions, my aim is to get the number of completions (the ‘n’ number) to the point where additional completions won’t change the survey’s findings.

After that, keeping the survey open is a drain on my time, and wastes the time of anyone who goes on to complete it.

For the kinds of quantitative surveys we run, 50-100 completions is usually enough to feel confident in the dataset and findings.

But 50-100 may not be enough if I want to make comparisons within the data.

For example, in a recent quantitative survey for Health Education England we wanted to validate the needs of ‘expert searchers’ in the NHS. We felt confident enough to close the survey at n=169, because that was enough to compare the validated level of need between 2 distinct user roles: librarians (n=101) and healthcare practitioners (n=52).
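There’s no single formula behind these thresholds, but as a rough illustration of why extra completions add less and less, the standard margin-of-error approximation for a proportion (assuming a simple random sample, which real survey recruitment rarely is) shows how the uncertainty shrinks as n grows:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a proportion p
    estimated from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# Worst-case (p = 0.5) margins at the sample sizes mentioned above
for n in (50, 100, 169):
    print(f"n={n}: ±{margin_of_error(n):.1%}")
# n=50: ±13.9%, n=100: ±9.8%, n=169: ±7.5%
```

Going from 50 to 100 completions tightens the estimate by about four percentage points; going from 100 to 169 gains only a couple more, which is one way of seeing why keeping a survey open indefinitely has diminishing returns.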

A few of our recent quantitative and qualitative surveys for clients such as Health Education England and Skills for Care.

If a survey asks qualitative questions, then it’s about reaching the point where no new themes are emerging. This is easy to keep an eye on.

I tend to review a survey at around 30-50 completions to assess whether we’re still seeing new themes and findings. If we are, then I keep it open, keep promoting and keep re-assessing.

Of course, there are lots of other factors to consider, including the size and engagement of the target group, the channels available to promote a survey, and the time available to run it and then analyse the results. 

But I know that I’ll feel more confident when I’ve satisfied myself that we have stopped learning new things, and that we have enough completions to make meaningful comparisons. Although I’ll still stick with my initial response of ‘it depends’, when asked.

