360 Degree Feedback Learning Centre - Every resource you could ever need to implement 360 feedback successfully

360 Feedback Questionnaire Design

360 feedback questionnaire design is a key component of all successful 360 feedback processes. These notes set out to provide grounded advice and best practice guidance on how to develop the best 360 feedback questionnaire template.


Underpinning Principles

The starting point for any 360 feedback questionnaire is to ensure that it is anchored to a clear purpose, will measure the right things and is future focused. The initial questions to be answered are:

  • What is the purpose of our 360 feedback survey? - Start your 360 questionnaire design process with a clear picture of what you are trying to achieve: is the intention to measure performance; change the management culture; embed values; manage/identify talent; provide developmental insight; align managerial behaviours; conduct a Training Needs Analysis etc.
  • What should it measure/provide feedback on? - Taking time to design a questionnaire around the specific needs of the organisation is time and effort well spent. Questions that measure a person's ability to lead people in a manufacturing environment will be different from those used to measure a similar role in a charity that supports people with learning disabilities. Whilst this sounds obvious, our experience is that it is not the instinctive approach taken by most! To make a real difference, 360 feedback questionnaires need to be fit for purpose. In real terms this means they need to be directly linked to the organisation's expectations of how managers are expected to manage and lead others.
  • Measure the behaviours that will build a more successful future - One of the key levers of 360 feedback is its ability to raise people's awareness of the behaviours described in the questionnaire. Real 'organisational wins' come when questions capture both the leadership expectations of today and the desired behaviours needed to overcome future challenges and underpin the delivery of the organisation's longer term aspirations.

360 feedback questions

The most effective questionnaires incorporate both quantitative (tick box) and qualitative (free text) questions:

  • Quantitative (tick box) questions - Good quality quantitative questions are ideal for providing clear 'hard data' against the questions/behaviours described. The main goal is to ensure quantitative questions are of a high quality i.e. they have validity and reliability. Our article How to write the best 360 feedback survey questions provides best practice advice on writing these questions.
  • Qualitative (free text) questions – 360 degree feedback participants commonly cite the comments in their reports as the most valuable part of the feedback process. They work well in two places:
    • During the survey - Providing insights into specific topic areas
    • At the end of the questionnaire – Collecting summative feedback, giving respondents the opportunity to share other behaviours/items that may have been missed if quantitative data alone were used.

Our guide How to write the best 360 feedback survey questions provides best practice in the use of 360 feedback qualitative questions.

360 questionnaire design principles

The following provides an overarching framework for good 360 questionnaire design:

  • Number of questions (quality vs quantity) - There is always a temptation when designing 360 questionnaires to ask lots of questions simply because you can. Rather than going for quantity, focus on using a smaller number of high quality questions that describe the core behaviours you are trying to reinforce or embed
  • Group similar questions together - Clustering similar questions together and presenting them in topic areas (such as Leading Others, Communication Skills, Relationships etc.) ensures the questions have ‘context’ which helps with question reliability and respondent focus
  • Introduce each topic area – A brief paragraph at the start of each topic area helps with contextualising the questions which makes user input easier and improves the quality of their responses
  • Allow for 'can't answer' responses - Sometimes not all questions asked are relevant to all participants, or perhaps the person providing the feedback has not had an opportunity to see them carrying out the behaviour described. Allowing a 'I don't know', ‘I’ve not had an opportunity to view this’ or 'can't answer' response prevents the feedback respondent feeling trapped, forced to make a guess or give a response that doesn't reflect their true perception
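
The 'can't answer' option also has a scoring implication: those responses should be excluded from averages rather than counted as zeros, otherwise one unobserved behaviour drags a person's score down. A minimal sketch of this (the 1-5 scale and the names are illustrative assumptions, not taken from any particular tool):

```python
# Average a question's ratings while excluding "can't answer" responses.
# The 1-5 frequency scale and the CANT_ANSWER sentinel are illustrative.

CANT_ANSWER = None  # respondent had no opportunity to observe the behaviour

def average_rating(responses):
    """Mean of the numeric ratings, ignoring 'can't answer' responses."""
    rated = [r for r in responses if r is not CANT_ANSWER]
    return sum(rated) / len(rated) if rated else None

responses = [4, 5, CANT_ANSWER, 3, CANT_ANSWER, 4]
print(average_rating(responses))  # averages only the four numeric ratings
```

Returning `None` when every respondent skipped a question keeps "nobody could observe this" distinct from a genuinely low score.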

360 questionnaire design – Completion time considerations

Getting the length of your 360 feedback questionnaire right is important. Too long and you risk overburdening respondents (which results in lower value feedback). Too short and you may capture only half of the picture!

Whilst there are no hard and fast rules on how long your 360 feedback questionnaire should be, some of the things worth taking into account when considering how many questions to include are:

  • Respondent fatigue - The longer the survey, the more likely the quality of responses will deteriorate, negatively impacting the respondent's contribution. Research also shows that on longer surveys respondents tend to answer each question much more quickly than on shorter ones. So, the longer the survey, the more respondents fatigue, pay less attention to the job in hand and speed up their completion. Fatigue and satisficing (focusing on getting the job done quickly as opposed to well) increase significantly after 15-20 minutes.
  • Respondent motivation – The more motivated the respondent is to complete a questionnaire, the longer the survey can potentially be (and vice versa!). Taking time to engage respondents by sharing the 'why' and its benefits, and encouraging participants to personally ask for their respondents' support, will produce higher quality results and allow for a few additional questions to be included.
  • Attention span – Because of the number of variables (internal motivation, task complexity, state of mind, age, external distractions, ability to 're-focus' etc.) it would be impossible to provide an absolute average attention span. However, most research indicates that circa 20 minutes is at the top end of our sustained attention scale.
  • Cost of time - The average cost of labour within the UK (2022) was circa £25 per hour (£45 in the financial and insurance sectors and as low as £12.50 per hour in some sectors). Therefore, assuming each 360 feedback survey has 10 contributors and takes about 15 - 20 minutes to complete, the cost would be approximately:
    • Higher paid sectors - £150 per participant
    • Average - £83 per participant
    • Lower labour cost sectors - £41 per participant
  • Time taken to complete 360 questionnaires – We accurately tracked (by automatically logging the time each question was answered) 1,937 respondents as they completed their 360 feedback questionnaires. The questionnaire contained 50 quantitative questions spread over 5 competency/topic areas, with a qualitative question at the end of each topic area and 3 additional qualitative questions (stop, start and continue) at the end of the questionnaire. The following summarises our findings:
  • 360 feedback quantitative questions – completion times - The average completion times were:
    • 30 questions = 9 minutes
    • 40 questions = 10 minutes
    • 50 questions = 12 minutes
  • Free text (qualitative) questions – The following provides a breakdown (by percentage and number of the 1,937 respondents) of how many people completed the various free text questions (the five per-topic questions and the Stop, Start and Continue (SSC) questions at the end):
    • Responded to every free text question – 28% (537)
    • Completed no free text questions – 28% (533)
    • Only completed the Stop, Start and Continue (SSC) questions – 17% (325)
    • Four random topic areas plus SSC – 12% (240)
    • Three random topic areas plus SSC – 6% (115)
    • Two random topic areas plus SSC – 5% (97)
    • One random topic area plus SSC – 5% (90)
    • One random topic area plus SSC – 4% (70)

We also tracked the time it took each person to answer each of the free text (qualitative) questions. The average completion time was 3 mins and there was no significant difference in the completion time for each of the different questions.
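
The 'cost of time' figures above follow from simple arithmetic, sketched here so you can substitute your own numbers (the hourly rates are the UK 2022 figures quoted earlier; the 10 respondents and 20 minute completion time are assumptions, so the outputs are approximate):

```python
# Rough cost-per-participant arithmetic for a 360 feedback survey,
# using the UK 2022 hourly labour-cost figures quoted above. The
# respondent count and completion time are illustrative assumptions.

hourly_rates = {
    "financial/insurance sectors": 45.00,
    "all-sector average": 25.00,
    "lower labour cost sectors": 12.50,
}

respondents_per_participant = 10  # typical rater group size
completion_minutes = 20           # assumed time to complete one questionnaire

for sector, rate in hourly_rates.items():
    cost = rate * (completion_minutes / 60) * respondents_per_participant
    print(f"{sector}: ~£{cost:.0f} per participant")
```

Because cost scales linearly with completion time, trimming the questionnaire from 20 to 15 minutes cuts each figure by a quarter.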

Time considerations – The takeaways... As a rule of thumb, we should be aiming to ensure our 360 questionnaires can be completed within circa 20 minutes, and take active steps (wherever possible) to ensure respondents are motivated to complete them.

To achieve an average 20 minute completion time, we would suggest a best practice 360 questionnaire would be made up of:

  • 40 questions split into 3 - 4 competency/topic areas with a free text box at the end of each
  • 2 x summative free text questions (strengths and opportunities) at the end

360 feedback rating scale types

The rating scale used should be driven by the overall purpose of the questionnaire and in particular whether it is being used as part of an appraisal/performance evaluation process or to provide developmental feedback/insight. 360 degree feedback rating scales can be split into three main categories:

  • Judgemental - The feedback respondent is asked to grade, appraise, score or pass judgement on the participant. These scales tend to be used when the questionnaire is linked to performance measurement. Examples of these include:
    • Competence/Development Scale - How would you rate this person's competence/ development in this area? (Outstanding strength - Needs significant development)
    • Comparison Scale - Compared to other managers within the organisation, this person is... (Far above average - Far below average)
    • Performance Scale - How would you rate their performance in this area? (Exceeded expectations - Did not meet expectations)
    • Satisfaction Scale - How satisfied are you with this person's performance of… (Very satisfied – Very unsatisfied)
  • Observation - Frequency scales that ask the feedback respondent to share how often/frequently/to what extent they see the participant carrying out the behaviour described are most commonly used where the core motive of the questionnaire is to provide insight and developmental feedback. In this instance the data is used to raise the participant's awareness of how they are seen/perceived by others. Some typical approaches include:
  • Typical five-point label sets (frequency / extent / percentage versions):
    • Almost always / Very large extent / 100% of the time
    • Frequently / Large extent / 75% of the time
    • Sometimes / Moderate extent / 50% of the time
    • Rarely / Small extent / 25% of the time
    • Almost never / Very small extent / 0% of the time
  • Agreement (strongly agree to strongly disagree) – Used to measure attitude, this approach asks respondents to express how much they agree or disagree with a particular statement. As such these scales work very well in a psychometric questionnaire or organisational survey where the goal is to understand how someone 'feels' about something, but (in our opinion) they do not work in the 360 feedback context.
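
For anyone implementing one of these scales in a survey tool, a frequency scale reduces to an ordered mapping of labels to scores, with the 'can't answer' option kept outside the numeric scale entirely. A minimal sketch (labels from the frequency column above; the 1-5 scoring convention is an illustrative assumption):

```python
# A 5-point frequency scale with labelled points, plus an out-of-scale
# "can't answer" option. The 1-5 scoring convention is illustrative.

FREQUENCY_SCALE = {
    "Almost never": 1,
    "Rarely": 2,
    "Sometimes": 3,
    "Frequently": 4,
    "Almost always": 5,
}
CANT_ANSWER = "I've not had an opportunity to view this"  # excluded from scoring

def score(label):
    """Return the numeric score for a scale label, or None for 'can't answer'."""
    if label == CANT_ANSWER:
        return None
    return FREQUENCY_SCALE[label]

print(score("Frequently"))   # 4
print(score(CANT_ANSWER))    # None
```

Keeping the labels in one ordered mapping also makes it easy to enforce the 'one scale per questionnaire' principle: every question simply references the same structure.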

360 feedback rating scale points

There has been a huge amount of research done in the search for the ‘optimal’ number of points on a scale and how they should be presented. The following connects generic best practice with our experience of developing 360 feedback questionnaires.

  • 3 point scales – Whilst three-point scales may be perceived as faster and easier to respond to, they actually:
    • Reduce/restrict the respondent's ability to express their view
    • Produce unreliable/inconsistent results by forcing respondents to choose between too few categories
    • Provide bland results (from a lack of intensity and validity)
  • 5 – 7 points - Increasing the number of points on a scale increases its reliability, with the biggest gains taking place between three and five points. The challenge is to select the number of points that enables clear, consistent distinctions to be made between each point, without providing so many points that the measure becomes too busy and unreliable. In practical terms we’ve found 5 points of difference (on a frequency scale) to be the sweet spot.
  • The use of labels on the scale - As with all other aspects of questionnaire design, we are trying to achieve the highest levels of reliability possible. Adding descriptors/response labels to our scale can increase consistency of understanding which equates to more reliable outputs.
  • Equal points of separation – The goal is to ensure that each incremental step on the scale is the same size as the others. A bad example would be a scale such as 'Excellent – Very good – Good – Poor', where three closely spaced positive points are offset against a single negative one.
  • Finally, for ease of respondent completion, it is good practice to use only one scale within the questionnaire.

Pulling it all together – The best 360 feedback questionnaire template

In summary, the best 360 feedback questionnaires:

  • Have between 40 – 50 high quality quantitative questions split into 3 - 4 topic areas
  • Include some introductory text at the start of each topic area to help set the context
  • Include a free text box at the end of each topic area and a ‘catch all’ at the end
  • Use a frequency type scale with between 5 – 7 points on it, each with a label that is clearly differentiated from those either side of it

Test and adjust

Don’t miss out this important step. It could save you the inconvenience and embarrassment of sending out questionnaires containing questions that people don’t understand, can’t answer, or that deliver unhelpful results.

Select a number of people who have not been involved in the design process and have never seen the questions before. Jumble the questions up to make sure they are not relying on neighbouring questions for context. Then ask people:

  • What do they understand each question to mean?
  • Do they find the questions easy to answer?

Finally, you should run a small pilot of your 360 tool to exercise the whole process and see if it performs in the way you want it to.
