360 Degree Feedback Learning Centre - Every resource you could ever need to implement 360 feedback successfully

Ultimate Guide: Developing 360 Feedback Survey Questions

Great 360 degree feedback questions are at the heart of every successful 360 feedback survey. Poorly written questions make it difficult, even impossible, for respondents to provide meaningful and reliable feedback and for participants to make sense of what they receive.

This article draws together over 20 years of practical 360 questionnaire design experience and is ideal for people who want to take their 360 question design knowledge to the highest level.


Writing quantitative 360 degree feedback survey questions

When used with some form of frequency or behavioural scale, quantitative questions (used to provide numerical/quantifiable outputs) provide a great starting point for capturing and presenting perceived strengths and development opportunities.

The challenge is how to write high-quality questions that are interpreted the same way by everyone completing them and that provide useful feedback results. For 360 feedback questions to be effective, they need to meet the following criteria:

  • Be valid... i.e. measure the right thing - In the 360 feedback context, validity means that questions need to:
    • Provide feedback on those leadership behaviours/skills that are important for leading within the organisational context. In practical terms, questions need to have ‘direct line of sight’ to the organisation’s competency/behavioural framework, leadership expectations, values etc.
    • Be relevant and pitched to the participant's role, seniority, circumstances and level of accountability within the organisation.
  • Be reliable... i.e. create consistency and dependability – The challenge here is to reduce possible errors in how each question is interpreted and understood. Questions that are too vague, ambiguous or confusing will be interpreted (and therefore rated) inconsistently, which in turn will produce unreliable, woolly results.

Most reliability errors can be avoided by applying the following six principles:

    • Ask about observable behaviours only - You shouldn’t ask questions about someone’s understanding, feelings, beliefs, ability or internal thought processes, as these are not easily displayed/observed. If it can’t be seen, it can’t be rated! So avoid questions like:

      • Understands own strengths and weaknesses
      • Reflects on own performance
      • Is able to resolve conflict appropriately
      • Learns from mistakes/failures
      • Strives to improve own performance
    • Describe one behaviour at a time - Double or triple-barrelled questions make it difficult for respondents to rate (particularly where the participant is strong in one aspect but not the other) and produce unreliable, difficult-to-interpret results. Don’t include questions like:

      • Invites feedback and uses it to improve
      • Effectively chairs meetings and follows up on actions
      • Sets stretching goals for self and others
      • Builds and maintains excellent relationships with internal/external customers
    • Keep questions short and to the point – Questions should be easily read/understood. Too many words can confuse the meaning and therefore you should aim to keep them short, sharp and clear. Aim for between 2 and 6 words, 10 maximum.
    • Use simple, plain language - Make sure your questions are clear and mean what you intend them to mean - Avoid abbreviations, acronyms, jargon, management or corporate speak, technical terminology or overly complex language unless you can be sure that everyone who answers the question will understand it in the same way. Simple, clear language is always best!
    • Each question should describe a positive behaviour – Avoid mixing positive and negative statements, as this again impacts their interpretability against the scaling used.
    • Each question should stand on its own - Don’t write questions that rely on neighbouring questions for context or additional meaning.
    • Other things that will impact the reliability of questions include:

    • If the question is overly subjective – Questions like:
      • Is an expert in their field
      • Is an inspirational leader
      • Is respected as a skilled and knowledgeable person in their area of responsibility
    • If they are too specific - Behaviours that would not be seen by the majority of people conflict with a frequency scale, which introduces interpretation inconsistencies into how they are rated – So, try to avoid questions like:
      • Does XYZ process well
      • Documents and records performance issues in line with organisational policy
    • The participant’s opportunity to carry out the behaviour - Behaviours that the participant has only limited/occasional opportunity to display will be in direct conflict with frequency-type scales – Things like:
      • Conducts effective annual appraisals
    • Behaviours that are conducted outside the normal working environment – Because raters won’t have an opportunity/are unlikely to see the behaviour, responses to these questions will be unreliable. Avoid questions like:
      • Makes a valuable contribution to external client meetings
    • Questions that are in conflict with the rating scale – Some examples:
      • Is able to... – Asks about the person’s ‘ability’, not an observable behaviour
      • Holds regular 1:2:1’s – The word ‘regular’ clashes with a frequency scale (How often/frequently does this person...)
      • Is seen as the ‘go to’ person – Would be better suited to a ‘yes’ – ‘no’ answer
  • In summary, to ensure your quantitative 360 feedback survey questions have both ‘face validity’ and ‘reliability’, check that they:
    • Have a clear link to organisational expectations
    • Are pitched at the participant’s leadership level/role
    • Are clear, unambiguous, simple and to the point

Using qualitative questions within a 360 feedback survey

360 feedback participants frequently cite the ‘free text’ comments in their report as the most insightful, particularly when they are used:

  • At the end of a competency/dimension/topic area to provide further information in support of their ratings
  • At the end of the survey as a ‘catch all’ or to answer a specific question

Great free text questions typically share the following characteristics:

  • They allow for balanced (strength and development opportunity) comments to be gathered
  • They provide specific focus whilst allowing a broad range of responses

Free text questions at the end of a competency/dimension/topic area

The aim here is to collect information that will help qualify the quantitative results and offer insights into potential next steps/ideas for action.

A good example would be... Please provide some supporting information regarding your responses above and any ideas for next steps.

Free text questions at the end of the questionnaire

An open ‘catch all’ question at the end of the questionnaire is a great way of encouraging respondents to provide a balanced summary of their feedback or to comment on something that wasn’t covered within the topic areas/question set.

The most common approaches are:

  • Stop, Start and Continue (or More of, Less of, and Just right)
    • What should {Name} continue doing?
    • What should {Name} stop doing?
    • What should {Name} start doing?
    • Top tip – We’ve found that asking for positive observations (‘continue’ or ‘more of’), first, sets a constructive tone, eases respondents into the activity and is more likely to get a response.

  • Strengths and opportunities
    • {Name’s} key strengths are... (or The things {Name} does really well are...)
    • {Name} would be even better if they... (or The things that {Name} should develop further are...)