We’re often tempted by numbers: they seem impartial, objective, and allow us to easily compare ourselves with others. We like to say that 80 percent of our users can successfully check out, that the average satisfaction rating at the end of a task on our website is 6.7 out of 7, and that our site’s Net Promoter Score was 70 in the last study that we ran. Because of that, we often tend to report numbers from small studies with only a few, say five to 10, participants.

But are these numbers true? Do they paint an accurate picture? To answer that, we need to look beyond the actual numbers. Why? Well, in any data collection there is noise: noise that comes from the fact that each of our participants is an individual with their own personal context of knowledge, experience, and life. One person may be sick on the day of the study and may suffer through it; their state will color their rating of our website, even though they may strive to be impartial. Another may have just gotten a big promotion at work and may see the world, and our design, through rose-tinted glasses.

When you include a large number of participants in your study, it’s likely that these sources of noise will cancel each other out: for almost every gray-tinted pair of glasses, you’ll have another rose-colored one. But when the sample is small, that is unlikely to happen. So even though you may be tempted to report the average satisfaction or the Net Promoter Score obtained from a small qualitative study with only five users, remember that these numbers are unlikely to carry much truth. And even for a larger study, there is no guarantee that the noise will not overwhelm the signal.
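You can see this canceling-out effect in a small simulation. The sketch below is purely illustrative: the "true" satisfaction score, the noise level, and the sample sizes are all made-up assumptions, chosen only to show how much more the average swings in a five-person study than in a hundred-person one.

```python
import random

random.seed(1)

def sampled_mean(n, true_score=5.5, noise=1.5):
    # Each participant's rating = the (hypothetical) true score plus
    # individual noise from mood, context, and experience, clipped to the
    # 1-7 rating scale.
    ratings = [min(7, max(1, random.gauss(true_score, noise))) for _ in range(n)]
    return sum(ratings) / n

# Averages from repeated 5-person studies swing widely...
print("n=5  :", [round(sampled_mean(5), 2) for _ in range(5)])
# ...while averages from 100-person studies stay near the true score.
print("n=100:", [round(sampled_mean(100), 2) for _ in range(5)])
```

Run it a few times with different seeds and the pattern holds: the small-sample averages scatter far from 5.5, while the large-sample averages cluster tightly around it.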
The only way to find out is to run a statistical test on the numbers to determine their margin of error and their statistical significance.
That will tell you how far those numbers might be from the truth and whether you can trust them.
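As a concrete sketch of what that margin of error looks like, here is one common approach: a 95% confidence interval for a mean rating, computed with the t-distribution. The ratings below are hypothetical, and the t critical value is hardcoded for this sample size rather than looked up programmatically.

```python
import statistics

# Hypothetical 1-7 satisfaction ratings from a 5-participant study.
ratings = [6, 7, 5, 7, 6]

n = len(ratings)
mean = statistics.mean(ratings)
sd = statistics.stdev(ratings)  # sample standard deviation

# 95% confidence interval for a mean uses the t-distribution;
# for n = 5 (4 degrees of freedom) the critical value is about 2.776.
t_crit = 2.776
margin = t_crit * sd / n ** 0.5

print(f"mean = {mean:.2f}, margin of error = ±{margin:.2f}")
print(f"95% CI: {mean - margin:.2f} to {mean + margin:.2f}")
# → mean = 6.20, margin of error = ±1.04
```

With only five participants, the interval spans more than a full point on a 7-point scale: the study cannot distinguish a 5.2 from a 7.0, which is exactly why reporting "6.2 out of 7" from such a study overstates what you actually know.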