AI, Performance, and the Illusion of Simple Decisions
A CEO who announces to their people, "AI will replace you within a year," gets one thing right: they are remarkably transparent about how they view their people's work. The rest is a mistake.
An Announcement Is Not a Strategy
Something strange keeps happening. Leadership steps in front of their people and says that within a few months, a significant portion of the work will be replaced by AI. Sometimes openly, sometimes through the grapevine, sometimes between the lines. But the message is clear.
I don’t question this as a strategic imperative. Companies should be thinking about efficiency and the future of work. The problem lies elsewhere.
Whether such a move is meaningful rests on at least three things:
- Whether the strategy genuinely permeates the organization – not as a polished document, but as each person’s understanding of their own role in delivering on it.
- Whether we actually know which specific parts of the work we’re changing – which processes, which competencies, and to what extent.
- And whether we have at least a basic picture of what new demands AI will bring into the organization.
If these three things are missing, the announcement doesn’t say “we’re changing our operating model.” It says “your work has no value.”
That is a very poor starting point for rapid change.
Trust Is Not Rebuilt by Another Announcement
We’ve seen this before. Many companies made sharp cuts and then discovered that both capacity and quality were gone. They started hiring back. The same roles. But by then, trust had been broken.
And without trust, an organization doesn’t move faster. Quite the opposite. People take fewer risks, initiate less, invest less in their own development. Precisely the things that are critical for AI transformation begin to disappear.
Trust is built slowly and lost quickly. And you won’t get it back with another all-hands where you praise everyone.
A Plausible Mistake Is Worse Than an Obvious One
This is where I want to focus, because it interests me more than the announcements themselves.
AI dramatically increases the ability to produce outputs. Most noticeably among less experienced people. Suddenly it’s possible to generate more text, analyses, and proposals in less time and with higher surface-level quality. But accuracy, relevance, and real-world impact remain unstable. And harder to verify.
A new risk emerges. The output looks good. It arrives quickly. It meets formal expectations. And for exactly that reason, it easily passes through without genuine scrutiny. An obvious mistake triggers a review. A plausible mistake slips through. Because it doesn’t raise suspicion.
Paradoxically, the probability increases that an organization will start making worse decisions even as it produces more and better-looking work. And this undermines one of the core assumptions on which most performance evaluation frameworks are built.
Output Has Stopped Being a Reliable Signal
Most HR methodologies and frameworks rested on a simple shortcut: the more consistently someone delivered, the more senior they were. Output was relatively expensive to produce, which made it a decent proxy for competence.
But once AI lowers the cost of producing output, the output's informational value drops with it.
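To make that concrete, here is a minimal Bayes sketch of the signal-value argument. Every number in it is an assumption chosen for illustration; nothing comes from real data.

```python
# A minimal Bayes sketch: how much does a polished deliverable tell you
# about competence once AI makes polish cheap for everyone?
# All numbers are illustrative assumptions, not data from the article.

def posterior_competent(p_competent: float,
                        p_polish_if_competent: float,
                        p_polish_if_not: float) -> float:
    """P(competent | polished output) via Bayes' rule."""
    evidence = (p_polish_if_competent * p_competent
                + p_polish_if_not * (1 - p_competent))
    return p_polish_if_competent * p_competent / evidence

BASE_RATE = 0.3  # assumed share of genuinely strong performers

# Before AI: polish was expensive, so weak performers rarely produced it.
before = posterior_competent(BASE_RATE, 0.8, 0.1)

# After AI: nearly everyone can generate polished output.
after = posterior_competent(BASE_RATE, 0.9, 0.8)

print(f"before AI: {before:.2f}, after AI: {after:.2f}")
# before AI: 0.77, after AI: 0.33 -- barely above the 0.30 base rate
```

Polished output used to move the needle on what you could infer about the person behind it. Now it barely does. That is the proxy collapsing, in two lines of arithmetic.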
If we don’t change anything in response, we’ll systematically start rewarding the ability to generate convincing artifacts over the ability to make sound decisions. And in doing so, we’ll quietly erode the quality of the entire organization. Slowly, invisibly, while everything appears to be running fine.
The fair basis for evaluation becomes something different: the quality of decision-making and the degree of accountability. Not what someone delivered, but how they got there. How they work with uncertainty. How they identify risks. Or how they verify outputs. Which decisions they’re willing to own.
Output is a by-product. The real product is the decision.
This isn’t just a theory. You only need to look at what’s actually happening.
A software development company decided to cut costs on external tools by $750 a month and replace them with a home-built “vibe-coded” application. The result? Token costs grew to $4,300 a month. And as a bonus, the team now spends half its time fixing bugs in something that was supposed to be more efficient than the original solution.
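Run the numbers and the "saving" inverts quickly. A back-of-the-envelope sketch, using the two figures quoted above; the team size and loaded cost per developer are assumptions added purely for illustration:

```python
# Back-of-the-envelope cost of the "vibe-coded" replacement.
# The $750 and $4,300 figures are quoted above; team size and the
# loaded cost per developer are assumptions for illustration only.

TOOL_COST = 750               # USD/month: the external tools that were cut
TOKEN_COST = 4_300            # USD/month: the replacement's LLM usage

TEAM_SIZE = 5                 # assumption
LOADED_COST_PER_DEV = 12_000  # USD/month, fully loaded (assumption)
BUGFIX_SHARE = 0.5            # "half its time fixing bugs"

hidden_cost = TEAM_SIZE * LOADED_COST_PER_DEV * BUGFIX_SHARE
net_delta = (TOKEN_COST + hidden_cost) - TOOL_COST
print(f"Net extra cost: ${net_delta:,.0f}/month")  # Net extra cost: $33,550/month
```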

Another example: a user on a paid plan for a vibe coding service – Claude Code at $200 a month – generated token consumption worth $23,000 in a single month. We don’t know how many similar cases exist. What we do know: companies building their cost models on current AI tool pricing are betting on conditions their vendor can change overnight. And then the entire “cheap solution” logic collapses.
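A sketch of that pricing exposure, using the figures from this case; the repricing scenario at the end is purely hypothetical, not anything a vendor has announced:

```python
# How exposed is a cost model built on flat-rate AI pricing?
# The $200 plan and $23,000 of metered token consumption are from the
# case above; the 20% repricing scenario is purely hypothetical.

FLAT_PLAN = 200         # USD/month actually paid
TOKEN_VALUE = 23_000    # USD/month of usage at metered list prices

subsidy_ratio = TOKEN_VALUE / FLAT_PLAN
print(f"Usage is worth {subsidy_ratio:.0f}x the subscription price")

# If the vendor caps the plan and bills the rest at even 20% of list:
repriced = 0.2 * TOKEN_VALUE
print(f"Repriced monthly cost: ${repriced:,.0f}")  # $4,600 vs the $200 budgeted
```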
These examples share a common thread. On the surface, they look like efficiency gains. In reality, they show that replacing human work with AI without understanding the context often leads to worse outcomes than the original solution. And the problem isn’t the technology. It’s the decision about how to use it.
Yet most companies haven’t reflected any of this in how they measure people’s performance. And competency models are exactly where it hurts most.
Competency Models That Were Obsolete Before Anyone Finished Reading Them
Competency models were historically stable. Updated once every few years. That’s no longer enough.
What used to mean “able to write” or “able to analyze” is being partially transferred to AI. Value is shifting toward problem formulation, working with context, designing human-AI collaboration, and above all, verifying outputs. If your competency model doesn’t include these things, you’re evaluating people against criteria that no longer reflect what they actually do.
Annual updates are now the minimum. For AI-exposed roles, a quarterly review is entirely reasonable.
The Greatest Risk Is Not Replacement. It’s the Erosion of Expertise.
AI boosts the productivity of less experienced people far more than it does experienced ones. This creates quiet pressure: why invest in experts when “performance” can be delivered without them?
But experts are the ones who catch subtle yet critical mistakes. Who understand context. Who know when AI fails. If they disappear, an organization can appear to function well for a long time.
Until the moment it makes a serious mistake – a plausible one, smoothly worded. And nobody in the room recognizes it.
Protecting deep expertise cannot be left to chance. It has to be a deliberate decision.
Without a Competency Map, Every Announcement Is Just a Guess
It’s not enough to know what roles exist in the company. You need to know what actual capabilities, skills, and knowledge are present- where they live and how quickly you can develop or supplement them.
Otherwise, the decision to “replace with AI” never becomes a strategy. It remains a guess.
AI doesn’t kill work in itself. But it very efficiently kills poor performance proxies. Organizations that stick with the old shortcuts will start making worse decisions even as they produce more. Those that understand the shift will start managing something different: not output, but decision-making.
That is where real organizational performance is determined in the age of AI.
Questions Worth Asking
Instead of a conclusion: a few questions. If you can’t answer most of them, the announcement about AI transformation may have come too soon.
- Why do I have specific people in my organization, and where exactly do I see their added value in a world with AI?
- What do I actually consider performance today – volume of output, or quality of decision-making?
- Where in how I evaluate people do I reward “output polish” more than genuine verification of correctness?
- How quickly can I change how my organization operates, and what will actually enable that change sustainably – not just once?
- How often do I update my picture of what competencies the organization needs?
- Where am I deliberately protecting deep expertise, and where am I unconsciously replacing it with “good enough” output?
- Which competencies in my model are today just proxies for something AI can already partially automate?
- And finally: do I actually know how my organization arrives at its decisions, or do I only see the outputs?