Maximizing the Value from 360 Feedback: A Guide for Leaders and Process Owners
By: Suzanne Miklos and Mary Steuber
We believe that much of the value of 360 feedback processes evaporates through small holes in the implementation. Our goal is to point out the practices that ensure the full potential value is actually realized. Both organizational and individual benefits can be gained by mining the data and using supportive development tools.
A 360-degree feedback process gives an organization a data-driven perspective on leadership strengths as well as talent gaps. The results should be leveraged by the talent development function to evaluate bench strength and examine patterns of strengths and opportunities. Those opportunities can then be aligned to the capabilities needed to implement strategic initiatives. For example, an organization that is focused on increasing accountability in the leadership culture to support quality improvements can determine to what degree leaders have the skills and behaviors required to advance that culture. Studying the relationship of the raw 360 data to employee engagement surveys, turnover or even performance metrics for organizational units can be illuminating when prioritizing organizational development efforts or refining the tools and process.
When using 360 data, we focus on organizational capabilities that are aligned with strategy. Many organizations suffer from trying to employ a “perfect leader” model that is impossible to achieve. In our example, a hospital working towards more effective execution will have a development strategy to increase leadership accountability. In this case, the 360 data showed a weakness in leaders’ abilities to coach employees to take personal ownership. A development focus around coaching was implemented and progress was measured with a follow-up mini survey. A subset of items from the full 360 was selected to check for progress on coaching related skill sets. While progress was demonstrated, areas to continue to build skills were identified. The best practice is to have an average of five questions to measure each competency and to analyze the item consistency and scale validity over time (Nowack, 2008). Normative data for the organization or for similar positions can be quite useful in interpreting the data.
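For process owners who maintain the instrument itself, the item-consistency check mentioned above can be sketched in a few lines. The example below computes Cronbach's alpha for one five-item competency scale; the ratings are hypothetical, and a production analysis would normally use a statistics package rather than this minimal sketch.

```python
# Sketch: checking item consistency for one competency scale with
# Cronbach's alpha. Each inner list is one rater's hypothetical
# 1-5 answers to the five items measuring a single competency.
from statistics import pvariance

def cronbach_alpha(ratings):
    """ratings: list of rows, one row per rater, one column per item."""
    k = len(ratings[0])                         # number of items
    items = list(zip(*ratings))                 # column-wise view
    item_vars = sum(pvariance(col) for col in items)
    total_var = pvariance([sum(row) for row in ratings])
    return (k / (k - 1)) * (1 - item_vars / total_var)

coaching_items = [
    [4, 4, 5, 4, 4],
    [3, 3, 4, 3, 3],
    [5, 4, 5, 5, 4],
    [2, 3, 2, 3, 2],
    [4, 5, 4, 4, 5],
]
alpha = cronbach_alpha(coaching_items)
print(round(alpha, 2))  # prints 0.93
```

Values above roughly 0.7 are conventionally treated as acceptable internal consistency; tracking alpha across administrations supports the scale-validity review described above.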
The purpose of the 360 can influence the validity of the results. In research we conducted in a large healthcare organization, scores on all competencies were significantly elevated when ratings were used for decision making and compensation as opposed to strictly for development, although the rank order of the competencies was similar regardless of purpose. Raters are more likely to provide accurate ratings when it is clearly communicated that the purpose is organization and leader development.
At an individual level, the power of a 360 comes from the enhanced self-awareness that is critical to personal leadership development. This awareness comes from a combination of the results from the tool and the feedback process. The leader’s stakeholder, direct report, peer and management ratings of the leadership competencies are compared against his/her self-ratings to identify unrecognized strengths and blind spots. Self-awareness is increased further by examining the written comments, which might never have been brought to the leader’s attention without the 360 assessment. The ability to see one’s intentions and self-perceptions mapped against others’ perceptions is like a three-way mirror in a brightly lit dressing room.
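As an illustration of the self-versus-other comparison, the sketch below flags competencies where a leader's self-rating diverges from the averaged ratings of others. The competency names, scores and one-point gap threshold are hypothetical choices, not part of any standard instrument.

```python
# Sketch (hypothetical data): comparing self-ratings with averaged
# other-ratings to surface blind spots and unrecognized strengths.

def rating_gaps(self_ratings, other_ratings, threshold=1.0):
    """Return competencies where self and others diverge by >= threshold.

    self_ratings / other_ratings: dicts of competency -> mean score (1-5).
    """
    blind_spots, unrecognized_strengths = [], []
    for comp, self_score in self_ratings.items():
        gap = self_score - other_ratings[comp]
        if gap >= threshold:            # self rates higher than others
            blind_spots.append(comp)
        elif gap <= -threshold:         # others rate higher than self
            unrecognized_strengths.append(comp)
    return blind_spots, unrecognized_strengths

self_view = {"coaching": 4.5, "accountability": 3.0, "collaboration": 4.0}
others_view = {"coaching": 3.2, "accountability": 4.2, "collaboration": 3.8}
blind, hidden_strengths = rating_gaps(self_view, others_view)
print(blind)             # prints ['coaching']
print(hidden_strengths)  # prints ['accountability']
```

In a real debrief the coach would interpret these gaps alongside the written comments rather than treating the threshold as a verdict.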
There is a risk that leaders ignore strengths, become defensive or fixate on a single comment or low score without considering their role and career plan. This can lead to inertia, if not active rejection of the tool and process. A good coach ensures that the leader takes a balanced, objective view of the data and processes it in a healthy, constructive manner. For this reason, the feedback session is of critical importance. Criticism without guidance and explanation can be more harmful than helpful. It is up to the coach to see the leader through the feedback process and explain paradoxical information so that the process creates emotional safety and a holistic view. The specific role of a coach in the feedback process is explained in detail later; however, feedback session coaches can be internal or external and do not necessarily need an HR or OD background.
An organization faces risks to the ROI of the results when the implementation is haphazard or inconsistent. Many organizations make the mistake of having sound front-end implementation but not strong follow through. Without properly planned implementation and follow up, participants may not set developmental goals or exhibit behavioral change.
Setting the stage with the executive leadership in the organization is a critical step in implementation. The executive leaders need to see the importance of their role in this process, the purpose of the 360 and how the results will be used to develop the bench strength of the organization to meet the strategic needs. Best practice is that the individual results are used for development and shared with the participant and their coach. Thought and discussion around organizational readiness is imperative. There needs to be a strong sense of safety for both the participants and their raters. There are always executives who have had a bad experience with a prior 360, so it is essential to communicate the purpose of the process and how the results will be valuable. Depending on the trust level in the organization, it may take 3-4 administrations before the organization settles into acceptance of the tool. Part of this comes from visible and effective development based on the tool. The tool has to be viewed as a credible way to improve leadership skills and even to get promoted. Working with an external vendor or consultant can also be very helpful for the first few years of a new system.
Another key is to provide training for participants. Participant training should make clear that the purpose of the 360 is self-awareness and development. Having a holistic leadership model and development programming puts the 360 into a bigger picture for participants. Communicate what is valuable to the organization and what the win will be for the participant. The rating scales, including the frequency of behavior and how the behavioral items support key leadership outcomes, are key pieces of training. We have found that giving concrete examples for each point on the rating scale is helpful. Preparing participants to understand the difference between perception and intent helps defuse resistance in advance. Participants need to understand the value and importance of the self-rating so that they can get the most out of the assessment. It is important that they answer the self-assessment honestly so they can truly see how their perception of their behavior varies from others’.
Rater selection can make or break the value of the information for a participant. There are several rules of thumb for selecting raters. We recommend that all direct reports are selected to prevent a biased selection of raters. It is better to err on the side of having more rather than fewer raters to support the relationship and communication benefits of doing a 360 well. All raters should have direct interaction with the participant and represent a range of both positive and negative perceptions. The participant’s leader can manage this by reviewing and approving the rater lists.
Individual reports need sufficient detail to show how the raters differ in their perceptions. There must be a sufficient number of raters in each group – stakeholders, peers and direct reports – to make meaningful interpretations. Recent research suggests that two or fewer raters per group is insufficient; inviting more raters ensures the accuracy needed to make 360 feedback findings relevant (Nowack & Mashihi, 2012). It has also been found that raters selected by participants were more accurate than those not selected by the participant; ideally, this decision should be made jointly by the participant and his/her manager (Bracken & Rose, 2011).
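Process owners who automate report generation often enforce this minimum in code. The sketch below suppresses any anonymous rater group that falls under a three-respondent floor, consistent with the finding that two or fewer raters per group is insufficient; the group names and counts are hypothetical.

```python
# Sketch: suppressing rater-group breakdowns with too few respondents
# before a report is generated, to protect anonymity and accuracy.

MIN_RATERS = 3  # two or fewer raters per group is insufficient

def reportable_groups(response_counts, min_raters=MIN_RATERS):
    """Return only the rater groups with enough responses to report."""
    return {g: n for g, n in response_counts.items() if n >= min_raters}

counts = {"self": 1, "manager": 1, "peers": 4, "direct_reports": 2}

# Self and manager ratings are conventionally reported even at n=1,
# so only the anonymous groups are screened here.
anonymous = {g: n for g, n in counts.items()
             if g in ("peers", "direct_reports")}
print(reportable_groups(anonymous))  # prints {'peers': 4}
```

Here the direct report group would be folded into a combined "others" category or dropped from the breakdown, rather than shown with only two respondents.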
Rater training can be powerful in helping raters feel comfortable with the process and the intentions of the tool. Studies show that rater training also helps improve the accuracy of ratings. The typical content can be delivered within 15 minutes. Confidentiality and anonymity are critical topics, along with the purpose of the process and who receives the report. Raters receive examples of how to use the rating scale and should be instructed not to rate behaviors they have not had the opportunity to observe. It is important to review common rating errors, including leniency, severity and central tendency. Leniency refers to a rater’s tendency to give only high ratings, while severity refers to a tendency to give only low ratings. Central tendency is the tendency to avoid committing to either the high or low end of the rating scale and to stick to the middle for most of the survey. Reviewing these tendencies makes raters both more likely to participate and more effective in providing ratings.
In addition to rating tendencies, there are natural biases in the perception of leaders that are helpful to review with raters. The halo effect is the tendency for a positive general impression of a person to inflate ratings of specific, unrelated behaviors. Because the leader being assessed is often more senior and more visibly successful than the peer and direct report raters, a halo can bias their ratings upward. The primacy effect is the tendency to best recall information encountered first, while the recency effect is the tendency to best recall the most recent events. If a leader has recently broken down and screamed at a rater, that event will likely be over-represented in the 360 feedback. To receive accurate results, it is best to cover logistics, the competencies that will be assessed and how to write an open-ended response.
The communication process that accompanies the 360 survey is crucial for achieving high rater participation. Provide a rater information sheet that addresses everything a rater needs to know, and align it with the instructions in the tool to prevent confusion. This, along with a communication that can be tailored by the participant to request honest and candid feedback, should be sent by the participant to each rater.
During the surveying process, it is important to follow up to ensure that there are sufficient raters for each participant. It helps to administer the survey in waves when intact teams are participating. Rater fatigue is a real issue that can cause lowered response rates and inaccurate ratings. It is important to be mindful of the number of surveys being sent out to one particular rater. It is also common practice to limit the number of times an individual can be selected as a peer rater.
Running group and team reports can be a way to build comfort with the data and to allow teams to talk about development. A best practice is to start with a group session, using a sample leader or team report to train participants in the report structure and content, the natural defensiveness that feedback provokes, and the benefit of reading the report front to back before identifying the big-picture themes. Feedback is a gift that gives individuals an opportunity to reflect, and the expectation is to receive that gift with gratitude. The 360 assessment provides information that can be considered in the framework of a Johari Window, a model used to develop trust and facilitate learning between people. In the Johari Window model, a blind spot represents information that the participant is not aware of but the raters are. While blind spots help generate self-awareness, the 360 also reveals the ways in which leaders need to be more transparent: the hidden area represents information the participant knows about himself or herself that peers do not.
A key problem for many in digesting their 360 is integrating the results with other data and turning them into a usable story about themselves as leaders. A coach can be an invaluable partner in guiding this digestion process. Organizations that are serious about effectiveness ensure that a debrief session is delivered with the results. Participants can easily get stuck on a small point, fail to identify important themes or fixate on a single, discrete criticism, missing more meaningful information for improvement. Thematic interpretation means that there is a story to be told by the high- and low-rated items. Sometimes this story is consistent across rater groups; other times one rater group sees the participant in a different light. For example, the boss may give high ratings because results are delivered while, at the same time, the direct reports feel neglected and overly pushed. In another case, the direct reports appreciate the attention and coaching they receive while peers report low levels of collaboration. There are generally items that one rater group is in the best position to observe. Coaches can help bridge the report to action planning. Action plans are a critical deliverable from the 360 process. They should drive organizational and individual career success if the areas for action are clear and linked to organizational goals.
Action plans should focus on approximately two to three development areas. When too many goals are selected, nothing gets accomplished. Goals must be specific, attainable and carry some stretch for development growth. The goal-setting literature suggests that having multiple goals is more effective than having one: leaders who set multiple goals were perceived as showing greater performance improvement across competencies than those who set only one (Johnson et al., 2012). Overlapping goals have also been found to aid overall behavioral change. For example, a development goal of becoming more influential has synergy with a goal of leveraging collaboration, as both can be practiced in a cross-functional project or assignment.
According to Baumann (2001), the greatest improvement areas in the 360 assessments studied were pre-existing areas of strength. Always looking at strengths, either to expand them or to leverage them in overcoming challenges, is a best practice for building positive action plans. This makes it much more likely that the participant will listen actively without becoming discouraged. Overwhelming a participant with negative evaluations can be maladaptive: most will take the criticism personally, and the resulting anger and distress often limits their ability to make constructive change. When coaching a leader who has troubled relationships, we avoid using a 360 process as an assessment tool for this reason.
Studies do show that 360s can result in performance improvement. 360s enhance performance and give direction for individualized development (Morical, 1999). Discrepancies between self- and others’ ratings have been found to motivate leaders to make behavioral adjustments, as long as they have self-efficacy for development, which leads to improved performance (Brutus, Fleenor, & London, 1998). Development plan topics correlate with improvements in future performance. In a study by Green (2002), 360 participants were asked to provide feedback one year after the initial 360 assessment; 54 of the 59 people surveyed reported individual, team and organizational improvement.
A best practice is to encourage all participants to share feedback with their direct reports. This helps build transparency and accountability for the development plan. All participants receive a template designed to share their high-level results and to engage staff in providing additional feedback and advice. The participant needs to be open to this feedback, share what actions they will take and be honest about any disagreement. For example, a leader who jumps to quick and firm decisions has asked his team to remind him when they experience him as moving too fast. He will then invite more dialogue or explain why the decision is firm.
Organizations need action planning tools, including online or printed resources, with specific sample actions for every competency. Having examples and a place to start drives the completion rate of action plans. Action steps should be written down and formalized with a specific path to follow toward a measurable goal. Participants should share their action plans with their managers to ensure they are aligned with where the manager sees the need for development. If coaching is being used, this is typically a three-way meeting with the coach present.
Examples of Suggestions for Development Plans
Actively Listen
Goal: Actively listen to peers in meetings before responding
Potential barrier(s) to this goal: Tendency to interrupt people; my own ego
How will you overcome the barrier(s)? Put my thoughts aside and focus on others, trusting their credibility
Share what listening skills you are working on and ask one or two individuals to give you regular feedback.
Developmental suggestions: Model someone who is listening well; make a list of questions that a person uses to understand another person’s thinking and perspective.
Monitor and manage your percentage of time in meetings spent talking versus listening.
Resources: “What Makes a Leader?” by Daniel Goleman
Demonstrate Flexibility and Adaptability
Goal: Be more flexible with solutions to problems and project strategy
Potential barrier(s) to this goal: Lack of trust in others, stubbornness
How will you overcome the barrier(s)? Let others voice their opinions; think about and try ideas before dismissing them
Developmental suggestions: Develop a sense of humor; don’t take yourself too seriously. Let others voice their opinions first, then paraphrase what they are saying and reflect it back. This will help you focus your attention on the speaker and lets them know you are paying attention.
Resources: Transitions: Positive Change in Your Life and Work by Barrie Hopson and Mike Scally (1993)
The managers of the participating leaders should be educated about both the process and the leadership competencies being measured. It is critical that the manager complete the survey and be open to discussing his or her responses where they differ from those of the participant and other rater groups. The best practice involves a meeting to review the key messages and action plan with the manager, who can then provide insight and support. However, managers need to be equipped to coach and to ask questions as opposed to being prescriptive. Managers should not ask to see the full report. If a manager penalizes a participant by reflecting the 360 in the performance review, he or she will lose the trust of the participant.
Finally, the 360 process can be measured and improved year over year. There are a number of ways organizations can choose to do this. A feedback loop on the process should consider everyone involved: manager, participant, rater and coach.
Suggested Feedback and ROI on the Process
Effectiveness criteria and how to measure them:
Participant leader satisfaction: Survey or poll the participating leaders about the value and supportiveness of the process. We also recommend a web-based focus group to identify process improvements from either a participant or manager perspective.
Rater response rate: This can be broken down into overall response rates and number of comments. Items with low response rates can be flagged for improvement.
Action plan completion rate: The percentage of completed action plans. Action plans can also be evaluated for quality.
Improvement: A pulse or follow-up survey allows individual improvement to be measured; overall improvement can be assessed with organizational or team aggregate reports.
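For process owners tracking these metrics programmatically, the sketch below flags survey items whose response rates fall below a chosen floor, supporting the rater response rate criterion. The item names, counts and 60% threshold are hypothetical.

```python
# Sketch: flagging survey items with low response rates for review.
# Raters may skip items they cannot observe, so chronically skipped
# items are candidates for rewording or removal.

def flag_low_response_items(item_responses, invited, threshold=0.6):
    """Return items answered by less than `threshold` of invited raters."""
    return sorted(
        item for item, n in item_responses.items()
        if n / invited < threshold
    )

item_counts = {
    "coaches direct reports": 55,
    "holds others accountable": 58,
    "manages conflict openly": 31,  # hard to observe, often skipped
}
print(flag_low_response_items(item_counts, invited=60))
# prints ['manages conflict openly']
```

The same pattern extends to the other criteria, such as computing the percentage of completed action plans per division.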
The overall data from the action plans can be used to look at a larger scale of organization development. Looking at the data by division or even by position allows the leadership development department to incorporate the data into its planning and delivery.
While building a best in class 360 process means taking all of the elements into account, it is a highly effective organizational and leadership development tool because of the data it provides and the actions it drives. Like any tool, the way it is used determines the outcomes.
References
Bracken & Rose (2011).
Green, B. (2002). Organization Development Journal, 20(1), 8–16. Publisher: Organization Development Institute.
Morical, K. E. (1999, April). A product review: 360 assessments. Training and Development, 53, 43–47.
Nowack & Mashihi (2012). Evidence-Based Answers to 15 Questions about Leveraging 360-Degree Feedback.
Human Resource Development International, 2013, Vol. 16, No. 1, 56–73. http://dx.doi.org/10.1080/13678868.2012.740797
http://www.coachfederation.org/files/includes/docs/080-360-Degree-Feedback-Listening-Paper.pdf
https://www.envisialearning.com/system/resources/39/47-abstractFile.pdf?1269662893