As a PhD student in Psychology I often face the question: How many participants should I recruit? The standard way of determining the required sample size for a study is to run a power analysis (usually done with G*Power; R lovers use the pwr package). Calculating the required sample size requires an estimate of the size of the effect you're investigating. However, it is often difficult to estimate the true effect size, especially if the research is cutting-edge and there's no previous work giving you a clue. Therefore, I felt it was reasonable to follow this rule of thumb: the higher the sample size, the better.
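To make this concrete, here is a rough sketch of what a power analysis computes, using the standard normal approximation for a two-sided, two-sample comparison of means (this is a back-of-the-envelope version of what G*Power or pwr do more exactly; the function name and defaults are my own):

```python
import math
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.80):
    """Approximate participants per group for a two-sided, two-sample
    comparison of means (normal approximation to the t-test)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # e.g. ~1.96 for alpha = .05
    z_beta = z.inv_cdf(power)           # e.g. ~0.84 for power = .80
    return math.ceil(2 * (z_alpha + z_beta) ** 2 / d ** 2)

# A "medium" effect of Cohen's d = 0.5 needs roughly 63 per group
# by this approximation (exact t-test software gives a similar figure).
print(n_per_group(0.5))  # -> 63
```

Note how the required n blows up as the assumed effect shrinks: halving d to 0.25 roughly quadruples the sample size, which is exactly why a wrong effect-size guess is so costly.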
However, more recently I've overheard fellow academics say that one can *oversample* a study. Oversample a study? Does that mean it is possible to recruit too many participants? My statistical intuition tells me that there can never be enough participants/data points: the larger the n, the higher the power to detect a true effect and the more precise the estimate of its size. Being somewhat puzzled, I decided to tackle this issue head on by asking some experts via Twitter.
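That intuition is easy to check by simulation. The sketch below (my own toy example, not anything from the experts quoted here) repeatedly simulates two-group studies with a true effect of d = 0.5 and counts how often a simple two-sided z-test rejects at alpha = .05; power climbs steadily with n:

```python
import random
from statistics import NormalDist, mean

random.seed(1)
phi = NormalDist().cdf

def simulated_power(n, d, sims=2000, alpha=0.05):
    """Fraction of simulated two-group studies (SD = 1, true mean
    difference = d) whose two-sided z-test rejects at alpha."""
    hits = 0
    for _ in range(sims):
        diff = (mean(random.gauss(d, 1) for _ in range(n))
                - mean(random.gauss(0, 1) for _ in range(n)))
        z = diff / (2 / n) ** 0.5  # standard error of the difference
        p = 2 * (1 - phi(abs(z)))
        hits += p < alpha
    return hits / sims

for n in (20, 50, 200):
    print(n, simulated_power(n, d=0.5))
```

With n = 20 per group you detect the effect only about a third of the time; with n = 200 you detect it almost always. So statistically, more really is better; the catch, as we'll see, lies elsewhere.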
In less than 20 minutes, I got a pretty clear-cut answer from Joe Simmons, a renowned methods expert from the Wharton School, University of Pennsylvania:
To summarize what we know so far: In theory (and when ethically justifiable), one can never have enough participants.
But... practical + financial concerns pose a limit to the number of participants you can realistically recruit. In other words, you've got to be strategic when allocating resources, as Northwestern University's Eli Finkel pointed out:
Last, but not least, Daniel Lakens from Eindhoven University of Technology gave some really practical advice. His blog post below walks you through six straightforward, easily implementable steps that will help you design your studies (just click on the link).
Ah, and always remember: We are here for you. The next time you ask yourself, "Where shall I recruit all these participants that my power analysis dictates?", we'll be here, lending you a hand.