
May 7, 2026


How to Design a Performance Evaluation When Not Everyone in the Workforce Operates the Same Way

There is a widely repeated principle in people management: what is not measured cannot be improved. It sounds logical, almost unquestionable. But in practice, there is a second part that is often forgotten: what cannot be collected simply cannot exist in the analysis. And this is exactly where many organizations risk making blind decisions without even realizing it.

It is a bit like trying to take a group photo while leaving half of the people out of the frame. The image may seem correct, even sharp, but it does not represent the full reality. In the field of performance evaluation, this happens more often than we think, especially when office-based employees coexist with operational teams that do not have regular access to digital tools.

Don Norman, in The Design of Everyday Things, explains this from a different perspective, but with an idea that applies directly here: when a system is not designed for the reality of an organization, it will only work for some people while failing others entirely. The parallel is clear. When participation is not homogeneous, data stops being a faithful reflection of the organization. And when data does not represent everyone, decisions begin to resemble well-presented intuition rather than evidence-based strategy. Deloitte’s Human Capital Trends reports have consistently pointed out that the value of people analytics depends not only on the volume of data, but also on its quality, integration, and representativeness across the organization.

For this reason, at Hrider we want to show you how to design a performance evaluation process when not everyone in the workforce operates under the same conditions.

1. Identify whether your evaluation is designed for office-based employees or for the entire organization.

Before thinking about tools, there is an uncomfortable question you need to ask yourself: can everyone participate under equal conditions?

In many industrial organizations or companies with broad operational networks, the answer is only a partial yes: not because employees do not want to participate, but because the process assumes continuous digital access.
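To make this question concrete, here is a minimal sketch in Python of what such an access audit might look like, using an invented HR roster export. The column names and data are purely illustrative assumptions, not any particular platform's schema:

```python
import pandas as pd

# Invented HR roster export; column names are illustrative assumptions.
roster = pd.DataFrame({
    "employee_id": [101, 102, 103, 104, 105, 106],
    "role_type": ["office", "office", "plant", "plant", "field", "field"],
    "has_corporate_email": [True, True, False, False, True, False],
})

# Share of each group that an email-based process could actually reach.
reach = roster.groupby("role_type")["has_corporate_email"].mean()
print(reach)
# Any group well below 1.0 is excluded before a single answer is collected.
```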

2. Avoid participation bias as the starting point of your analysis.

If only those with easy access respond, the data stops representing the organization and begins to represent only a subset of it. This has a direct consequence: the analysis inherits a bias that often goes unquestioned.

In People Analytics, this is especially critical because the quality of decisions depends directly on the quality of the input data.
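A quick representativeness check can make this bias visible before any conclusions are drawn. The sketch below assumes you can export invitation and response counts per segment; the segment names, figures, and the 0.75 threshold are illustrative assumptions, not a standard:

```python
import pandas as pd

# Assumed export: invitations and responses per employee segment.
data = pd.DataFrame({
    "segment":   ["office", "plant", "field"],
    "invited":   [120, 300, 80],
    "responded": [102, 90, 20],
})

data["response_rate"] = data["responded"] / data["invited"]
overall = data["responded"].sum() / data["invited"].sum()

# Flag segments participating well below the overall rate;
# the 0.75 factor is an arbitrary illustrative threshold.
data["underrepresented"] = data["response_rate"] < 0.75 * overall
print(data)
```

If a flagged segment also happens to be an entire plant or an entire field team, any aggregate conclusion drawn from the responses deserves a second look.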

3. Design a single process, not multiple parallel versions.

One of the most common mistakes is fragmenting evaluations by employee group. At first glance, this may seem logical, but in practice it introduces complexity, inconsistencies, and a loss of comparability.

The strongest approach is not to create separate processes, but to design a single system capable of adapting to different access contexts.
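One way to picture this in code: a single evaluation definition shared by everyone, with the access channel modeled as a per-employee attribute rather than as a separate process. The class and field names below are hypothetical, a sketch of the idea rather than any real platform's data model:

```python
from dataclasses import dataclass

@dataclass
class Evaluation:
    """One evaluation definition shared by the whole workforce."""
    name: str
    questions: list[str]  # identical for every participant

@dataclass
class Participant:
    employee_id: int
    channel: str  # "email" or "mobile_link": an access context, not a separate process

# A single process: the questions never fork, only the delivery does.
review = Evaluation(
    name="Annual Review",
    questions=["How clear are your goals?", "How well does your team collaborate?"],
)
participants = [Participant(101, "email"), Participant(205, "mobile_link")]
```

Because the questions live in one place, results stay comparable across the whole organization, which is precisely what fragmented parallel processes lose.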

4. Adapt the channel, not the content.

The key is to keep the questions the same for everyone while adapting how they are delivered. It is not only about what you ask, but how you ask it.

In mixed environments, this may include:

  • Email access for administrative profiles.
  • Direct mobile access for employees without a digital workstation.
  • Removing complex credentials or entry barriers.
  • A seamless experience without the need for prior training.

The goal is not full digitalization, but reducing friction.
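As an illustration of this principle, the following sketch routes the same questionnaire through different channels and uses a one-time tokenized link in place of credentials. The URL, helper names, and delivery logic are invented for the example, not a description of any real system:

```python
import secrets
from dataclasses import dataclass

@dataclass
class Participant:
    employee_id: int
    channel: str  # "email" or "mobile_link"

def access_url(employee_id: int) -> str:
    # A one-time tokenized link stands in for "no complex credentials":
    # the employee just opens it on any device, with no login and no training.
    token = secrets.token_urlsafe(16)
    # A real system would store token -> employee_id server-side; omitted here.
    return f"https://eval.example.com/e/{token}"

def deliver(p: Participant) -> str:
    # Identical questionnaire for everyone; only the delivery channel adapts.
    url = access_url(p.employee_id)
    if p.channel == "email":
        return f"Email {url} to employee {p.employee_id}"
    return f"Send {url} by SMS or QR code to employee {p.employee_id}"

for p in (Participant(101, "email"), Participant(205, "mobile_link")):
    print(deliver(p))
```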

5. Integrate metrics instead of separating them.

One common mistake is treating eNPS (employee Net Promoter Score) as a process separate from performance evaluation.

However, when integrated into the same workflow, its analytical value increases significantly.

  • It allows you to connect perception and performance.
  • It identifies patterns across teams or managers.
  • It detects correlations between leadership and employee experience.
  • It provides context to individual results.

When data points are connected, they stop being merely descriptive and become interpretative.
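For readers who like to see the mechanics, eNPS is conventionally computed as the percentage of promoters (scores of 9-10) minus the percentage of detractors (scores of 0-6). The sketch below applies that formula and joins it with average performance per team, using invented toy data:

```python
import pandas as pd

# Toy data: eNPS answers (0-10) and performance scores for two teams.
df = pd.DataFrame({
    "team":        ["A", "A", "A", "B", "B", "B"],
    "enps_answer": [9, 10, 6, 7, 4, 8],
    "performance": [4.1, 4.5, 3.2, 3.8, 2.9, 3.5],
})

def enps(scores: pd.Series) -> float:
    # Conventional eNPS: % promoters (9-10) minus % detractors (0-6).
    return 100 * ((scores >= 9).mean() - (scores <= 6).mean())

summary = df.groupby("team").agg(
    enps=("enps_answer", enps),
    avg_performance=("performance", "mean"),
)
print(summary)
# With more teams, summary["enps"].corr(summary["avg_performance"])
# hints at links between employee experience and team performance.
```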

The difference between collecting data and understanding reality

In the end, the problem, as we often say, is not measuring too little. More often, it is measuring only those who are easiest to hear.

When part of the workforce is excluded from the process, evaluation stops being a tool for understanding and becomes a partial interpretation of reality. Deloitte’s studies on human capital trends have emphasized for years that trust in data depends not only on volume, but also on the ability to reliably represent how people work, participate, and evolve within an organization. The quality of analysis does not begin when data is interpreted, but much earlier, in the way it is collected.

At Hrider, we have previously discussed how performance evaluation should be understood as a strategic tool rather than merely an operational one. In this article, we provide a broader guide on how to conduct a performance evaluation. The difference lies in something simple yet decisive: designing processes that adapt to the real organization rather than the idealized one imagined from an organizational chart.