
March 26, 2026


The other half of evaluations: What evaluators reveal

When analysing a performance evaluation, we usually ask the same question: how has this person performed?

We look at the final score, compare it to the average, identify high performers and those who need support. Everything seems clear… until you stop to consider something uncomfortable:

What if a significant part of that result isn’t about the person being evaluated, but about the evaluator?

At Hrider, we’ve been observing evaluations from this other angle for some time. And what emerges when you examine evaluators with data is, at the very least, revealing.

The Evaluation Also Measures the Evaluator

No evaluator is neutral.
We all bring our own way of understanding good performance.

Some managers are very demanding, others more permissive. Some value effort, others only the final result. Some use the full scale, and others always stay in a “safe” zone. All of this is reflected, even if unconsciously, in every evaluation.

When you don’t analyse this effect, you assume the score is an objective measure of performance. When you do, you discover something different:
the evaluation is also an expression of leadership, context and culture.

When Teams Seem Different… but Aren’t

One of the first patterns that usually appears is the existence of demand silos among supervisors: pockets of managers who apply noticeably stricter, or noticeably more lenient, standards than their peers.

Teams with apparently low results, when viewed in context, are simply being evaluated by tougher managers. Other teams may seem to shine, but they do so under much more lenient criteria.

The problem is not performance.
The problem is comparing things that are not aligned.

Without this perspective, the organisation risks rewarding or penalising people based on who evaluates them, not how they actually perform.
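
To make this concrete, here is a minimal sketch in Python with pandas (the column names and the 1-10 scale are invented for illustration; this is not Hrider's actual data model) showing how raw scores can be re-expressed relative to each evaluator's own baseline:

```python
import pandas as pd

# Hypothetical ratings table: one row per evaluated person.
# Column names and scale are illustrative, not a real schema.
ratings = pd.DataFrame({
    "evaluator": ["ana", "ana", "ana", "luis", "luis", "luis"],
    "person":    ["p1", "p2", "p3", "p4", "p5", "p6"],
    "score":     [6.0, 6.5, 7.0, 8.5, 9.0, 9.5],   # 1-10 scale
})

# Each evaluator's own baseline: how strict or lenient they are overall.
baseline = ratings.groupby("evaluator")["score"].agg(["mean", "std"])

# Re-express every score relative to its evaluator's baseline (z-score).
# Two teams that looked very different in raw scores may now look alike.
ratings = ratings.join(baseline, on="evaluator")
ratings["adjusted"] = (ratings["score"] - ratings["mean"]) / ratings["std"]

print(ratings[["evaluator", "person", "score", "adjusted"]])
```

In this toy data, ana's 7.0 and luis's 9.5 end up with exactly the same adjusted value: two people who look very different in raw scores are, relative to their own evaluators, performing identically. That is the demand silo effect in one table.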

The Manager’s True Satisfaction Leaves a Trace

There is another, even more interesting insight:
the evolution of how a supervisor evaluates their team over time.

When a manager systematically lowers ratings, or when variation among people increases, it is rarely coincidental. It often reflects pressure, burnout, frustration, or contextual changes that may not appear in any formal report.

Analysing the evaluator allows you to hear, through data, what leadership is saying even when it is not verbalised.
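
As a sketch of what that listening could look like, assuming a simple history table of past cycles (invented column names, toy numbers):

```python
import numpy as np
import pandas as pd

# Hypothetical history: one rating per person per evaluation cycle.
history = pd.DataFrame({
    "cycle":      [1, 1, 1, 2, 2, 2, 3, 3, 3],
    "supervisor": ["ana"] * 9,
    "rating":     [8.0, 8.2, 7.9, 7.4, 7.6, 7.1, 6.8, 6.5, 6.2],
})

def drift(group: pd.DataFrame) -> pd.Series:
    """Trend of the mean rating across cycles, plus change in spread."""
    per_cycle = group.groupby("cycle")["rating"].agg(["mean", "std"])
    slope = np.polyfit(per_cycle.index, per_cycle["mean"], deg=1)[0]
    spread_change = per_cycle["std"].iloc[-1] - per_cycle["std"].iloc[0]
    return pd.Series({"mean_slope": slope, "spread_change": spread_change})

print(history.groupby("supervisor").apply(drift))
```

A clearly negative slope, or a spread that keeps widening, is not a verdict. It is a prompt for a conversation that the formal report would never trigger.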

Objectives, Results… and What Really Matters

Another key point emerges when you cross-reference objectives with competency results.

Some supervisors keep competency scores high even when objectives are not met, because they value effort, complexity or context. Others do the opposite: excellent results, but restrained evaluations because the standard is very high.

There is no right or wrong style.
What exists is highly valuable information about how performance is understood within the organisation and how decisions are truly made.
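
One simple way to read this, sketched in Python (columns and scales invented for illustration), is to measure how tightly each supervisor's competency scores track objective attainment:

```python
import pandas as pd

# Hypothetical combined view: objective attainment (% of target) and
# competency score (1-10) per person, plus who evaluated them.
data = pd.DataFrame({
    "supervisor": ["ana", "ana", "ana", "luis", "luis", "luis"],
    "objectives": [60, 80, 110, 70, 95, 120],
    "competency": [8.0, 8.5, 8.5, 5.5, 6.5, 7.5],
})

# Correlation near 0: competency scores reflect effort and context.
# Correlation near 1: competency scores simply mirror the numbers.
coupling = data.groupby("supervisor").apply(
    lambda g: g["objectives"].corr(g["competency"])
)
print(coupling)
```

The number itself matters less than how much it varies between supervisors: that spread is the map of how performance is really being read.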

The Invisible Risk: Unintended Inequity

When the evaluator effect is not analysed, evaluation systems can generate inequities without anyone intending it.

Equally valuable people can end up on different career paths simply because they fall under different evaluation contexts. This affects promotions, compensation, development plans, and perceptions of fairness.

Looking at the evaluator is not about policing them.
It is about improving the system so that decisions are fairer, more consistent, and smarter.

Mini Training Plan for Evaluators (Actionable by HR)

When you analyse evaluators, the next natural step is to help them evaluate better. A long theoretical programme isn’t necessary: a practical approach works far better.

1. Awareness of Own Pattern

Share with each evaluator a simple summary of how they evaluate: how strict they are on average and how they differ from the group. Seeing their own data is usually the greatest learning.
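
A minimal version of that summary, again assuming a simple ratings table with invented columns:

```python
import pandas as pd

# Hypothetical ratings table: every score each evaluator has given.
ratings = pd.DataFrame({
    "evaluator": ["ana", "ana", "luis", "luis", "marta", "marta"],
    "score":     [6.0, 7.0, 9.0, 9.5, 7.5, 8.0],
})

org_mean = ratings["score"].mean()
summary = ratings.groupby("evaluator")["score"].mean().to_frame("own_mean")
summary["vs_group"] = summary["own_mean"] - org_mean  # + lenient, - strict

# One line per evaluator: their own average and how far it sits
# from everyone else's. Often the only training material needed.
print(summary.round(2))
```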

2. Alignment of Criteria

Work with real examples to answer a key question:
What do we, as an organisation, consider acceptable, good, and excellent performance?
The goal is not to standardise people, but expectations.

3. Effective Use of the Scale

Help evaluators lose the fear of using the full scale when justified, and avoid concentrating everything in the mid-range out of inertia.
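
Two numbers are usually enough to start that conversation (a sketch with a hypothetical 1-10 scale; treating 5-7 as the "safe zone" is an illustrative choice, not a standard):

```python
import pandas as pd

# Hypothetical ratings from one evaluator on a 1-10 scale.
scores = pd.Series([5, 6, 6, 7, 6, 5, 6, 7, 6, 6])

used_range = scores.max() - scores.min()   # how much of the scale gets used
mid_share = scores.between(5, 7).mean()    # share parked in the middle band

print(f"Range used: {used_range} points; {mid_share:.0%} of ratings in 5-7")
```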

4. Objectives and Results

Clarify how to evaluate ambitious objectives, how to interpret effort versus results, and how to avoid systematically rewarding or penalising poorly calibrated objectives.

5. Data-Driven Review, Not Opinions

Provide review sessions where aggregated patterns and relevant deviations are analysed, relying on data rather than hierarchy or intuition.

How We Do It at Hrider

At Hrider we have specific tools to analyse evaluators, detect demand silos, understand the relationship between objectives and results, and turn these patterns into clear, actionable insight, without technical complexity.

All integrated directly into performance evaluation, feedback and People Analytics processes, so HR and managers can make better decisions based on real data, not intuition.

Because evaluating is not just giving a score.
It is understanding what that score really means.