
How do you manage based on output when performance across shifts is not comparable?


“The night shift always performs better.”
“Yes, but they also get the easy runs.”
“We actually have the most adjustments.”

In many factories, the conversation about output has been the same for years. Not because people do not want to improve, but because the comparison between shifts is fundamentally skewed. And when you manage based on skewed comparisons, you get predictable effects: discussions, defensive behavior, and improvement actions that mainly cost energy.

The pain rarely lies in a lack of data. The pain lies in something else: you are trying to manage performance without making performance fairly measurable and explainable.

So the question is not: “How do I get more dashboards?”
The question is: “How do I make output so comparable that a conversation about performance automatically becomes a conversation about action?”


The three underlying patterns

Although every production environment has its own dynamics, we see three patterns that recur in practice when shift performances are not comparable.

1. Output is measured without context

Many organizations simply compare “pieces per shift.” That seems logical, but it ignores the context that determines output:

  • product mix (fast vs complex variants)

  • setup and startup loss

  • planned stops, cleaning, meetings

  • staffing and experience level

If you do not take that context into account, you operate on false certainty. One shift is systematically deemed “better,” while in reality it simply has a different set of conditions. And then output is not a steering instrument, but a source of friction.

2. There is no clear definition of “performance”

In theory, shifts work with the same KPIs. In practice, “output” often means something different per team, system, or report:

  • gross output vs good output (without rework)

  • rework included vs excluded

  • output per shift duration vs per planned production time

  • “downtime” defined at machine, line, or operator level

Once definitions differ, the well-known reflex arises: “which numbers are correct?” And as long as your organization is still debating the measuring stick, you cannot steer toward improvement.
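
To make the measuring-stick problem concrete, here is a minimal sketch with hypothetical numbers, showing how one and the same shift yields four different “output” figures depending on which of the definitions above a report happens to use:

```python
# A sketch with hypothetical numbers: one shift, four "output" figures,
# depending on which definition a report uses.

gross_units = 1_000      # everything that came off the line
rework_units = 60        # needed rework
reject_units = 40        # scrapped
shift_hours = 8.0        # clock time of the shift
planned_hours = 7.0      # shift minus planned stops, cleaning, meetings

good_units = gross_units - rework_units - reject_units  # 900

print(f"gross per shift hour:   {gross_units / shift_hours:6.1f}")    # 125.0
print(f"good per shift hour:    {good_units / shift_hours:6.1f}")     # 112.5
print(f"gross per planned hour: {gross_units / planned_hours:6.1f}")  # 142.9
print(f"good per planned hour:  {good_units / planned_hours:6.1f}")   # 128.6
```

Until everyone reads off the same definition, each of these four numbers is “correct” for someone, and the discussion about the measuring stick never ends.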

3. The comparison says what, but not why

Even if your output seems numerically comparable, the explanation is often missing. You see a difference, but you do not see:

  • where time was lost (downtime, microstops, speed)

  • which causes were dominant

  • whether it could have been influenced within the shift

Then you make decisions based on assumptions. Improvement initiatives start, but lack focus. And if there is no clear cause–effect relationship, the same issues return in a different form.


How do you make shift performances comparable?

Organizations that break through this seldom choose “more reporting.” They choose a different way of managing, with three characteristics.

1. Normalize output to a fair basis

The most important step is surprisingly simple: stop comparing on “units per shift” and create one fair basis to manage against. In many cases, that is:

  • good output (i.e., excluding reject and rework)

  • divided by planned production time (not the clock time of a shift)

Then you correct for mix differences. Not to make it pretty, but to make it fair. So that you no longer compare based on conditions, but on performance.
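
As an illustration, a minimal sketch of one possible normalization, with hypothetical variant names and standard rates. The idea: express good output per planned hour, and score each shift against the standard rate for the exact mix it was given:

```python
# A sketch with hypothetical variant names and standard rates: normalize
# good output to planned production time and correct for product mix.

STD_RATE = {"fast_variant": 150, "complex_variant": 90}  # good units/hour at standard

def normalized_output(runs, planned_hours):
    """Return (good units per planned hour, mix-corrected score).

    runs: list of (variant, good_units) tuples produced in the shift.
    A mix-corrected score of 1.0 means the shift ran exactly at the
    standard rate for the mix it was given, however "easy" that mix was.
    """
    good = sum(units for _, units in runs)
    expected_hours = sum(units / STD_RATE[variant] for variant, units in runs)
    return good / planned_hours, expected_hours / planned_hours

# Hypothetical shifts: the night shift gets only the fast variant.
day = normalized_output([("fast_variant", 300), ("complex_variant", 360)], planned_hours=7.0)
night = normalized_output([("fast_variant", 880)], planned_hours=7.0)

print(f"day:   {day[0]:5.0f} good/h  mix-corrected {day[1]:.0%}")      # ~94 good/h, 86%
print(f"night: {night[0]:5.0f} good/h  mix-corrected {night[1]:.0%}")  # ~126 good/h, 84%
```

With hypothetical figures like these, a night shift that looks far ahead on raw units per hour lands at roughly the same mix-corrected score as the day shift, which is exactly the skew the raw comparison hides.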

2. Make losses visible instead of just the outcome

Output is an outcome. To manage, you primarily need visibility into where the difference leaks away. In virtually every factory, the losses sit in the same three KPIs:

  • availability (downtime, waiting, disruptions)

  • performance (microstops, instability, running below rated speed)

  • quality (reject, rework, startup loss)

Once you make those losses visible and record them consistently, the conversation changes. You no longer talk about who is “better,” but about where you are losing time and which cause has the most impact.
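
One common way to make these three losses explicit is the classic OEE decomposition (availability × performance × quality). A minimal sketch with hypothetical inputs:

```python
# A sketch of an OEE-style decomposition with hypothetical inputs.
# Each factor isolates one of the three losses named above.

planned_min = 420        # planned production time in minutes
downtime_min = 45        # stops, waiting, disruptions
ideal_cycle_s = 30       # ideal cycle time per unit, in seconds
total_units = 600        # everything produced during run time
good_units = 560         # total minus reject and rework

run_min = planned_min - downtime_min
availability = run_min / planned_min                        # downtime loss
performance = (total_units * ideal_cycle_s / 60) / run_min  # microstops, slow cycles
quality = good_units / total_units                          # reject, rework, startup loss
oee = availability * performance * quality

print(f"availability: {availability:.0%}")  # 89%
print(f"performance:  {performance:.0%}")   # 80%
print(f"quality:      {quality:.0%}")       # 93%
print(f"OEE:          {oee:.0%}")           # 67%
```

With these hypothetical inputs, the biggest leak is not downtime but speed: precisely the kind of conclusion a bare output number never shows.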

3. Ensure follow-up: from signal to action

The biggest pitfall is that performance becomes visible but does not lead to follow-up. Then it becomes “looking at numbers” again instead of managing.

What helps is a simple shift control loop:

  • a short check-in against the normalized target

  • the top losses made visible

  • actions documented with an owner and a deadline

  • feedback on the effect

Not as an extra administrative layer, but as part of the daily rhythm. Then performance management becomes not a project, but a system.
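
What that loop can look like in data is sketched below, with hypothetical field names and example values; the point is not the tooling but the discipline that every signal ends in an action with an owner, a deadline, and feedback on effect:

```python
# A minimal sketch of the shift control loop as a record,
# with hypothetical field names and example values.

from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class ShiftAction:
    loss: str                     # the top loss that triggered it
    action: str                   # what will be done
    owner: str                    # one named person, not "the team"
    deadline: date
    effect: Optional[str] = None  # filled in at feedback: did the loss shrink?

@dataclass
class ShiftReview:
    shift: str
    target_met: bool              # short check-in against the normalized target
    top_losses: list = field(default_factory=list)
    actions: list = field(default_factory=list)

# Hypothetical example of one review in the daily rhythm.
review = ShiftReview(
    shift="night, week 20",
    target_met=False,
    top_losses=["changeover overrun", "microstops on line 2"],
    actions=[ShiftAction(
        loss="changeover overrun",
        action="standardize tool presets before the changeover",
        owner="J. Janssen",
        deadline=date(2024, 5, 24),
    )],
)
```

Whether this lives in a dataclass, on a whiteboard, or in an MES module matters less than closing the loop: the effect field gets filled in, or the action comes back.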


What we see in practice

When organizations tackle their shift comparison this way, something interesting happens: calm sets in. Not because everything suddenly goes perfectly, but because the conversation becomes more honest.

Teams recognize patterns faster:

  • which product mix poses structural risks

  • which changeovers disproportionately cost time

  • which disruptions keep returning

  • where “unknown loss” is actually a data problem

And above all: improvement initiatives become more concrete. Because you see not only that there is a gap, but also where and why. Then you can make more targeted choices: fix, standardize, train, or automate, and then demonstrate the effect.

Often it turns out that you don't immediately need a large transformation. Most gains lie in sharpening definitions, normalization, and follow-up. Technology supports this, but is rarely the starting point.


In closing

Those who want to manage based on output must first ensure that output is fairly comparable. Not to penalize shifts, but to help them improve.

As long as you compare performance without context, without clear definitions, and without insight into losses, you are primarily managing on noise. And noise always leads to the same thing: discussion instead of action.

Do you want to take steps here as an organization? Then start not with yet another dashboard, but with one question:
“What is the fair benchmark for us to compare shifts, and which losses determine the difference?”

After that, managing suddenly becomes a lot easier.

