Peakon’s standard question library includes:

  • 1 overall engagement question
  • 14 core driver questions
  • 31 sub-driver questions
  • 6 open-ended questions

The 45 driver and sub-driver questions fit into 14 drivers that represent different elements of organisational psychology. For example, the Autonomy driver has two sub-drivers: Flexibility and Remote Work.

The way Peakon samples employee feedback is designed to give you a steady, ongoing stream of data for all of these drivers.

Why cover all drivers at once?

We’re sometimes asked why every survey covers all drivers, rather than only covering certain drivers in each survey, or giving each survey a theme, e.g. immediate manager support, company strategy, etc. There are two main reasons to avoid covering only some drivers:

  • If you don’t cover all drivers as quickly as possible, you risk missing the biggest issues influencing your employees’ engagement. Since uncovering those issues is essentially the reason for gathering employee feedback, this can greatly devalue the process. When all drivers are covered, Peakon can show you the most important themes in a more objective way.
  • Covering all drivers also enables you to view trends faster. If you were only to cover a quarter of the drivers each business quarter, for example, it would take years (rather than months) to understand whether you’re moving in the right direction.

The question rotation algorithm is therefore designed to gather feedback on every aspect of your company’s culture, and to present trends to you, as quickly as possible. The real beauty of it is that you don’t need to ask every question of every employee at the same time to get these results – which keeps surveys short and ensures high-quality feedback by avoiding survey fatigue.

How are questions distributed?

When using an automatic survey frequency, it's possible to set a frequency for the main engagement question, as well as for the driver and sub-driver questions. The algorithm then distributes different questions between employees. Even with very short surveys, such as the four questions used in the weekly survey mode, each team will receive statistically significant scores for all drivers and sub-drivers over the course of the first few weeks.

In more technical detail, for each employee the question rotation algorithm picks questions based on weighted probabilities across the question set. The weights are inversely related to the time since the employee last answered each question – the longer ago it was, the higher the probability of picking that specific question.
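The idea of inverse-recency weighting can be sketched as follows. This is a minimal illustration only: the question names, weighting formula, and function are assumptions for the example, not Peakon's actual implementation.

```python
import random
from datetime import date

def pick_questions(last_answered, today, n):
    """Pick n questions for one employee, weighting each question by the
    number of days since that employee last answered it (illustrative).

    last_answered: dict mapping question name -> date it was last answered.
    """
    questions = list(last_answered)
    # Weight grows with time since the last answer, so questions that have
    # not been asked for a while surface with higher probability.
    weights = [(today - last_answered[q]).days + 1 for q in questions]
    picked = []
    for _ in range(n):
        q = random.choices(questions, weights=weights)[0]
        i = questions.index(q)
        questions.pop(i)  # avoid repeating a question in the same survey
        weights.pop(i)
        picked.append(q)
    return picked
```

For example, a driver last answered three months ago would be far more likely to appear in the next survey than one answered last week.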

The order generally dictates that the main engagement question comes first, followed by the question groups starting with driver questions, then values questions and, finally, open-ended questions. The questions within these groups are randomised. 

There are, however, instances when Peakon’s algorithm will proactively change the order of questions in response to a concerning answer from an employee. Should an employee give an extreme score to a driver question compared with Peakon’s industry benchmark for that question (for example, answering the Autonomy question with a 2 when the benchmark is 7.6), Peakon will immediately follow up with a sub-driver question to learn more – all within the same survey. This means that, as a manager, you won’t have to wait to learn more should a potential issue arise.
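The trigger logic can be sketched roughly as below. The deviation threshold, sub-driver mapping, and function name are illustrative assumptions, not Peakon's actual rules.

```python
# Assumed mapping of drivers to their sub-drivers; only Autonomy's
# sub-drivers are named in the article.
SUB_DRIVERS = {"Autonomy": ["Flexibility", "Remote Work"]}

def follow_up(driver, score, benchmark, threshold=3.0):
    """Return sub-driver follow-up questions when a driver score falls far
    below the industry benchmark (threshold is an illustrative assumption)."""
    if benchmark - score >= threshold:
        return SUB_DRIVERS.get(driver, [])
    return []
```

With the article's example, a score of 2 on Autonomy against a 7.6 benchmark would queue the Flexibility and Remote Work sub-driver questions within the same survey.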

Manual frequency

Choosing a manual survey frequency will trigger all driver and sub-driver questions to be asked. In this case, it's not possible to set separate engagement, driver, and sub-driver frequencies. The order of questions respects the general question rotation:

  1. Main engagement question
  2. Driver questions
  3. Values questions
  4. Open-ended questions

The individual questions within these groups are randomised.
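The ordering described above (fixed group order, randomised within each group) can be sketched in a few lines. The function and group names here are illustrative, not Peakon's code.

```python
import random

def order_questions(engagement, drivers, values, open_ended):
    """Order a survey: engagement question first, then each group in the
    fixed sequence, shuffling the questions within each group."""
    ordered = [engagement]
    for group in (drivers, values, open_ended):
        group = list(group)
        random.shuffle(group)  # randomise within the group only
        ordered.extend(group)
    return ordered
```

Note that only the order within each group is random; the group sequence itself never changes.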

Sample sizes and statistical significance

Driver questions are distributed in survey rounds by random sampling to ensure that the results shown in the dashboard are unbiased, and that we can use the statistical properties of the scores we observe to assess when segments differ significantly from the benchmark. We only assign priorities and strengths once the results pass a test of statistical significance.

Naturally, the more responses you gather, the more accurate the results will be. However, only a relatively small number of responses is required to obtain high-quality estimates, and the accuracy of segment scores quickly stabilises as further rounds are completed.
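A rough way to see why scores stabilise quickly: the uncertainty of a mean score shrinks with the square root of the number of responses. This is a generic statistical illustration, not Peakon's actual significance test, and the score spread used is an assumption.

```python
import math

def standard_error(stdev, n):
    """Standard error of a mean score estimated from n responses."""
    return stdev / math.sqrt(n)

# Assuming a typical spread of ~2 points on a 0-10 scale:
# 10 responses  -> error of roughly +/- 0.6
# 40 responses  -> roughly +/- 0.3
# 160 responses -> roughly +/- 0.16
```

Quadrupling the number of responses only halves the error, which is why a modest number of responses already gives stable segment scores.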

In extremely small segments, close to the anonymity limit, we would advise waiting until a majority of employees have answered before conducting any comparisons. In larger segments this restriction can be relaxed, and in very large segments, even a small proportion of responses can give a highly accurate result.

There is no issue of statistical validity with comments or topics, which can give instant insight into ongoing issues. 

Article: Question library and theory references
Article: Choosing the right survey frequency for your organisation
Article: Key drivers of engagement: priorities and strengths
Article: Data anonymity settings
