Underdogs vs High Performers: When Improvement Beats Excellence
Organizations talk a lot about rewarding excellence. In reality, they reward change. Someone who moves from average to good gets more attention than someone who has quietly delivered at a high level for years.
Most IT organizations have a pattern that is obvious if you look for it, but almost no one says it out loud.
The underdog who improves in public gets the praise, the promotion, the story. The steady high performer, the one who delivers without drama, fades into the background. Trusted, relied on, and eventually taken for granted.
It sounds fair enough. Growth deserves recognition. Effort should matter. Everyone likes a story with an arc. But look closer and it is less about fairness, more about how the system chooses what counts as performance.
Performance systems are not built to weigh underdogs against high performers. They are built to flatten messy, conflicting signals into a single story. And then reward whatever is most visible at the time.
Why this is not a people problem but a system problem
The real issue is not bias in the usual sense. It is structural confusion.
We ask one system to judge output and effort, consistency and improvement, visibility and impact, short-term delivery and long-term thinking. Each pair is a real trade-off. None of them can be averaged away, but most organizations act as if they can be compressed into a single number or a tidy conversation.
In practice, the system just grabs whatever signal is easiest to see at review time. In knowledge work, that signal is almost never the most accurate one.
This is how performance management quietly becomes a storytelling exercise.
Output vs effort: why activity wins over impact
Take output versus effort. In theory, results matter more than activity. In practice, results in software are hard to pin down. Work is collaborative, dependencies are everywhere, and some of the best contributions are the ones you never see: problems that never happen, incidents that never escalate, systems that quietly get more stable.
Faced with all this ambiguity, managers reach for proxies. Responsiveness, visible urgency, number of tickets touched, presence in meetings. These signals are not useless, but they are much easier to spot than the real impact.
So the person who quietly removes a whole class of production issues goes unnoticed, while the person who fights incidents in public channels becomes a hero. The system does not reward chaos on purpose, but it reliably rewards the attention chaos creates.
Consistency vs improvement: why stability becomes invisible
The same distortion shows up with consistency versus improvement. High, stable performance has a strange weakness: it becomes the baseline. Once something is seen as normal, it stops drawing attention. No one tells a story about the thing that just works, every time.
Improvement, on the other hand, is always visible. It has direction, contrast, and emotional pull. Someone who moves from average to good creates a story that is easy to tell and easy to remember.
Over time, this shifts the balance. Organizations start to overweight trajectories and underweight reliability. High performers are not punished, but they stop standing out. Their contribution is assumed, not examined.
Assumption is a fragile kind of recognition.
Visibility vs impact: when performance becomes influence
Visibility versus impact is where the system quietly turns political. Every organization says it values impact, but few invest in making it measurable. What is left is visibility: who speaks, who presents, who shows up at the right moments, who can explain their work well.
Communication matters. But when it replaces evidence, the rules change. Performance reviews become contests of influence, where a clear story can outweigh real contribution.
This is how glue work — mentorship, documentation, reliability, cross-team coordination — either disappears or gets misread. It is critical to the system, but unless someone tracks it on purpose, it loses out to more visible output.
Short-term vs long-term: the trade-off nobody tracks
Short-term delivery versus long-term capability is the tension that usually causes damage later. Shipping features and hitting deadlines create immediate, measurable results. Investing in architecture, reducing technical debt, or mentoring juniors pays off slowly and rarely fits into a single review cycle.
So the system does what it is built to do: it prioritizes what it can measure right now.
Teams get efficient in the short term and fragile in the long term. Learning slows, complexity grows, and eventually delivery starts to slip. When that happens, the usual response is more pressure on delivery, which just repeats the cycle.
Why adding more metrics usually makes it worse
At this point, the instinct is to fix the system by adding more metrics, more dashboards, and more reviews. In reality, this usually makes the problem louder, not smaller.
The real issue is that performance systems are asked to do two incompatible jobs at once. They are used for administrative decisions — compensation, promotions, rankings — and at the same time for development, coaching, and growth.
These jobs need different signals and different interpretations. Administrative decisions need stability and comparability; development needs trends, potential, and context. When both are forced into the same box, the system becomes inconsistent and easier to game.
Why high performers burn out in “fair” systems
There is another, less comfortable pattern here. High performers attract more demands because they are trusted. At the same time, their output becomes the new normal, and recognition fails to keep pace with expectations.
This creates a familiar imbalance: high effort, high responsibility, low visible reward. Over time, this is a reliable path to burnout. Not because people are weak or overcommitted, but because the system slowly disconnects effort from recognition.
What a more coherent system actually looks like
A better approach does not start with new metrics or frameworks. It starts by admitting that performance is multi-dimensional, and that different dimensions belong at different levels.
Some signals make sense at the team or system level: delivery speed, reliability, stability. Others belong to the individual: decision quality, ownership, ability to simplify, contribution to team capability.
Once you separate these, evaluation stops being about squeezing everything into one score. It becomes about building a coherent, evidence-based view of contribution. Invisible work is made visible on purpose. Context is documented, not assumed. Sustainability is treated as part of performance, not an afterthought.
Back to underdogs vs high performers
The tension between underdogs and high performers does not disappear in this kind of system. But it becomes explicit, manageable, and less about who tells the better story.
And that is usually the difference between a performance system that feels political and one that feels fair.