360-Degree Feedback #2: How to Choose the Right Raters and Competencies
Once you’ve decided to implement 360‑degree feedback, important decisions lie ahead.
- Who will assess the individual?
- What exactly will be evaluated?
- How will you ensure the process is fair and useful?
These questions may seem technical, but how you answer them will determine whether your project succeeds or fails.
Who will provide feedback?
Choosing the raters is your first critical decision. You might think: “That’s easy—we’ll select their manager, a few colleagues, and done.” In reality, it’s more nuanced, because each rater group brings a unique perspective to the process.
Supervisors – a strategic view
A direct supervisor knows the evaluated person’s work goals, responsibilities, and expectations tied to the role. They see how the person contributes to team and company objectives, how they deal with new challenges, and how they grow. Their feedback is often the most structured and closely aligned with business results. For example, for a sales manager, the supervisor can assess how effectively they plan team activities, communicate corporate strategy, or handle customer conflicts—perspectives only the supervisor can reliably provide.
Peers – day‑to‑day reality
Colleagues on the same level observe the person in action daily. They know how they communicate in meetings, share information, react to stress, or support others. Their feedback is often the most authentic, since it comes without the power dynamics between boss and subordinate. For instance: “We had a project manager whom his supervisor described as a great communicator. But peer feedback revealed that although he presented results well, he was impatient during everyday collaboration, didn’t listen to others’ suggestions, and frequently changed requirements without consultation. That insight was key to his development.”
Direct reports – bottom‑up insight
If the evaluated person leads a team, feedback from their direct reports is invaluable. Only those below them see their real leadership style—how they delegate, motivate, support, and shape the work environment. This bottom‑up perspective often reveals issues that would otherwise remain hidden. It’s crucial that subordinates are guaranteed complete anonymity. If they even slightly suspect their manager might identify who wrote what, they won’t be honest. Dishonest feedback is worse than none.
Self‑assessment – a mirror of self‑perception
Self‑assessment isn’t just a “completeness formality.” It’s a vital part of the process because it allows comparison between how someone sees themselves and how others see them. Analyzing these differences, known as “gap analysis,” often yields the most valuable insights. For example, if someone rates themselves much higher in communication skills than their peers do, it can signal a “blind spot”: an area they’re unaware of where they have the most room to grow. Conversely, if they rate themselves lower than others do, it may point to untapped potential or low self‑confidence.
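To make the gap analysis concrete, here’s a minimal sketch in Python. The 1–5 rating scale, the 0.75‑point threshold, and the sample scores are illustrative assumptions for demonstration, not values from any specific 360° tool.

```python
# Minimal gap-analysis sketch: compare self-assessment with the average
# of all other raters. The 1-5 scale and the 0.75 threshold are
# assumptions for illustration, not values from any specific 360 tool.

def gap_analysis(self_scores, others_scores, threshold=0.75):
    """Return per-competency gap and a rough interpretation."""
    findings = {}
    for competency, self_score in self_scores.items():
        ratings = others_scores[competency]
        others_avg = sum(ratings) / len(ratings)
        gap = self_score - others_avg
        if gap > threshold:
            label = "potential blind spot"       # sees self higher than others do
        elif gap < -threshold:
            label = "potential hidden strength"  # sees self lower than others do
        else:
            label = "aligned"
        findings[competency] = (round(gap, 2), label)
    return findings

self_scores = {"Communication": 4.5, "Reliability": 3.0}
others_scores = {
    "Communication": [3.0, 3.5, 3.0],  # others rate lower -> blind spot
    "Reliability": [4.0, 4.5, 4.0],    # others rate higher -> hidden strength
}
print(gap_analysis(self_scores, others_scores))
# {'Communication': (1.33, 'potential blind spot'), 'Reliability': (-1.17, 'potential hidden strength')}
```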
How many raters are optimal?
I often hear: “How many people should rate one person?” The answer isn’t simple—it depends on several factors.
General guidance:
- 1 direct supervisor
- 3–5 peers
- 3–5 direct reports (if applicable)
- plus self‑assessment
Why these numbers?
With fewer than 3 raters in a category (e.g., peers), anonymity can’t be protected; you’d have to merge the results with another category or drop them entirely. More than 5–6 raters rarely adds new insight and needlessly lengthens the process.
Tip: More raters ≠ better outcomes. Too many raters (say, over 12–15 per person) create unnecessary administrative burden without delivering qualitatively better feedback. The key is finding a balance between breadth of perspective and practical feasibility.
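If you coordinate the process with a script or a spreadsheet export, these rules are easy to encode. Here’s a minimal Python sketch, assuming raters are simply grouped by category; the function, names, and exact limits are hypothetical, chosen to mirror the guidance above.

```python
# Sketch of the rater-selection rules described above. The category
# names and limits mirror the article's guidance; the function itself
# is illustrative, not part of any specific tool.

MIN_PER_CATEGORY = 3   # below this, anonymity can't be protected
MAX_TOTAL = 15         # beyond this, admin burden outweighs new insight

def check_rater_plan(plan):
    """plan maps a category name to a list of rater names; returns warnings."""
    warnings = []
    for category, raters in plan.items():
        if category in ("supervisor", "self"):
            continue  # single-rater categories are expected
        if 0 < len(raters) < MIN_PER_CATEGORY:
            warnings.append(
                f"'{category}' has only {len(raters)} raters: merge with "
                f"another category or drop it to protect anonymity"
            )
    total = sum(len(raters) for raters in plan.values())
    if total > MAX_TOTAL:
        warnings.append(f"{total} raters in total: consider trimming to {MAX_TOTAL} or fewer")
    return warnings

plan = {
    "supervisor": ["Anna"],
    "peers": ["Ben", "Clara", "Dan", "Eva"],
    "direct_reports": ["Filip", "Greta"],  # only 2 -> anonymity warning
    "self": ["(self)"],
}
for warning in check_rater_plan(plan):
    print("WARNING:", warning)
```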
What to evaluate – selecting competencies
This is perhaps the most critical decision of the entire process—and where many organizations make a fundamental mistake. I often see 360° assessments trying to cover everything—from technical skills to leadership. The result? Overloaded raters and superficial feedback.
A two‑level approach to competencies
Here’s a strategic method that’s proven effective in practice:
Level 1: Role‑specific competency model
Start by developing a full competency model for each role, covering technical skills, industry knowledge, and soft skills. Use this model for recruitment, performance reviews, and career development.
Level 2: 360° focus – cultural competencies
For the 360° feedback questionnaire, include only the soft skills that impact team and organizational culture. Focus on competencies that:
- affect team atmosphere and collaboration
- shape how people communicate with each other
- determine how conflicts and feedback are handled
- build trust and psychological safety
The power of 360° for soft skills
360° feedback uniquely captures behavioral qualities that are often critical to team success but hard to evaluate by other means.
For example, you can assess a developer’s technical skills through code review, but how do you measure whether they:
- share knowledge with juniors?
- give constructive peer‑review feedback?
- communicate complex technical issues clearly?
- proactively suggest process improvements?
These are the “soft elements of professional competence.” A technically skilled developer lacking these may even hinder their team.
Real‑world example: Sloneek’s cultural competencies
We defined five main cultural competencies for all employees:
Team Collaboration
- Participates actively in teamwork
- Contributes to team objectives and success
Communication & Feedback
- Communicates clearly, with respect and openness
- Provides useful, concrete, and constructive feedback
- Contributes positively to team atmosphere
Reliability & Consideration
- Respects others’ responsibilities, time, and effort
- Keeps commitments and is dependable
Initiative & Ideas
- Proactively proposes ideas and improvements
Effort & Motivation
- Shows dedication and sustained effort at work
For leaders, we add a Leadership section with role-specific leadership competencies.
Why these competencies?
Each one directly impacts how our team functions as a whole—not just individual performance but building a culture where:
- people help each other and share responsibility
- feedback is seen as a gift, not an attack
- reliability fosters trust
- initiative drives progress in the organization
How to define behavioral indicators
For each competency, define 2–3 concrete behavioral indicators.
- ❌ Poor: “Communicates effectively.”
- ✅ Better: “Communicates clearly, with respect and openness to others.”
- ❌ Poor: “Provides feedback.”
- ✅ Better: “Offers feedback that is useful, concrete, and constructive.”
Optimal scope
Stick to 4–6 main competencies, each with 2–3 behavioral indicators. The questionnaire should contain no more than 15 scaled questions, plus a few open-ended ones. Anything beyond that burdens raters and produces superficial answers.
Pro tip: Less is more. Organizations that started with 10–12 competencies (trying to “cover everything”) often got vague comments. After reducing to 5–6 cultural competencies, feedback quality dramatically improved. Raters had the energy to be specific and constructive.
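To show what this scope looks like in a structured form, here’s a small Python sketch using the cultural competencies from the Sloneek example above. The validate_scope helper and the exact limits it checks are illustrative assumptions, not a feature of any particular tool.

```python
# A questionnaire kept within the recommended scope: 4-6 competencies
# and at most 15 scaled questions. The structure and helper below are
# illustrative, not from any specific tool.

questionnaire = {
    "Team Collaboration": [
        "Participates actively in teamwork",
        "Contributes to team objectives and success",
    ],
    "Communication & Feedback": [
        "Communicates clearly, with respect and openness",
        "Provides useful, concrete, and constructive feedback",
        "Contributes positively to team atmosphere",
    ],
    "Reliability & Consideration": [
        "Respects others' responsibilities, time, and effort",
        "Keeps commitments and is dependable",
    ],
    "Initiative & Ideas": ["Proactively proposes ideas and improvements"],
    "Effort & Motivation": ["Shows dedication and sustained effort at work"],
}

def validate_scope(q):
    """Check the questionnaire against the scope guidance above."""
    issues = []
    if not 4 <= len(q) <= 6:
        issues.append(f"{len(q)} competencies (aim for 4-6)")
    total = sum(len(indicators) for indicators in q.values())
    if total > 15:
        issues.append(f"{total} scaled questions (keep it to 15 or fewer)")
    return issues or ["scope OK"]

print(validate_scope(questionnaire))  # ['scope OK']
```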
Practical example: competencies for mid‑level management
People Leadership
- Gives clear and understandable instructions
- Delegates tasks appropriately based on team members’ abilities
- Motivates the team even in difficult situations
- Provides constructive feedback
Communication
- Actively listens to others’ viewpoints
- Expresses ideas clearly and understandably
- Adapts communication style to different people
- Resolves conflicts constructively
Performance Management
- Sets clear and achievable goals
- Regularly monitors team progress
- Recognizes and rewards good performance
- Deals with performance issues promptly and effectively
What to avoid
- Overly academic competencies – Avoid definitions employees don’t understand. Instead of “proactive initiation of strategic synergies,” write “comes up with ideas for improvement.”
- Evaluating personality traits – 360° feedback should focus on behavior, not on who someone is. ❌ “Is introverted.” ✅ “Actively participates in discussions.”
- Too broad a scope – Perfect is the enemy of good. Better to focus on fewer competencies and deliver a quality assessment than try to cover everything superficially.
In the next part of this series, we’ll explore practical implementation: planning the process, training raters, and ensuring everything runs smoothly.