Driver questions are distributed across survey rounds by random sampling. This ensures that the results shown in the dashboard are unbiased, and that we can use the statistical properties of the observed scores to assess when a segment differs significantly from the benchmark. We only assign priorities and strengths once the underlying results pass a test of statistical significance.
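
As a rough illustration of the idea (not necessarily the exact test the dashboard runs), a one-sample t-test is one standard way to check whether a segment's mean score differs significantly from a fixed benchmark. The scores and benchmark value below are hypothetical:

```python
# Sketch: does a segment's mean score differ significantly from a benchmark?
# Illustrative only -- the scores, benchmark, and threshold are hypothetical,
# and the dashboard's actual test may differ.
from scipy import stats

segment_scores = [7.1, 6.8, 8.0, 7.5, 6.9, 7.7, 8.2, 7.0]  # hypothetical responses
benchmark = 7.8  # hypothetical company-wide benchmark score

# One-sample t-test of the segment mean against the benchmark
t_stat, p_value = stats.ttest_1samp(segment_scores, popmean=benchmark)

# Only flag a strength or priority when the difference is significant
if p_value < 0.05:
    mean = sum(segment_scores) / len(segment_scores)
    label = "strength" if mean > benchmark else "priority"
    print(f"Significant difference (p = {p_value:.3f}): flag as {label}")
else:
    print(f"Not significant (p = {p_value:.3f}): no flag assigned")
```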

Naturally, the more responses, the more accurate the results. However, only a relatively small number of responses is needed to obtain high-quality estimates, and segment scores quickly become accurate and stable as further rounds are completed.
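
To see why scores settle down quickly, assume a segment score is estimated as a simple mean of n responses (a simplification; the dashboard's exact estimator isn't described here). Its standard error then shrinks with the square root of n:

```latex
\mathrm{SE}(\bar{x}) = \frac{\sigma}{\sqrt{n}}
```

So quadrupling the number of responses only halves the remaining uncertainty, which means most of the gain in precision comes from the earliest rounds of responses.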

In extremely small segments, close to the anonymity limit, we advise waiting until a majority of employees have responded before making any comparisons. In larger segments this restriction can be relaxed, and in very large segments, even a small proportion of responses can give a highly accurate result.
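
This pattern matches the standard finite population correction from survey sampling, shown here as background rather than as a description of our exact model. For a segment of N employees with n responses, the standard error of a mean score scales as

```latex
\mathrm{SE} = \frac{\sigma}{\sqrt{n}} \sqrt{\frac{N - n}{N - 1}}
```

In a small segment, the correction factor only becomes small once most employees have responded, whereas in a very large segment it stays close to 1 and the absolute number of responses n is what drives accuracy.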

Comments and topics are not subject to these statistical constraints, so they can give instant insight into ongoing issues.
