AI didn’t replace intelligence; it commoditised it.
“AI doesn’t threaten intelligence; it exposes how little judgement most systems have.”
I recently came across a post from an old colleague, reflecting on AI and questioning whether intelligence and knowledge may no longer define who we are.
That thought lingers because it hints at something deeper. When what once made us distinct becomes abundant, the question quietly shifts. It is no longer about machines and their capabilities, but about where human meaning, value, and responsibility actually reside.
This is not a story about AI replacing humans. Rather, it is about value moving to places machines cannot inhabit.
The Identity Crisis
For most of modern history, intelligence and knowledge were the currency of relevance, which made them a convenient place to anchor identity.
If you knew more than others, you mattered. If you could solve problems others could not, you were safe. Entire careers were, therefore, justified by being “the expert in the room”.
AI disrupts that comfort. Not suddenly, but decisively.
For the first time, intelligence is no longer scarce. Knowledge is no longer accumulated slowly through years of exposure. Instead, it is generated, recombined, and delivered on demand. As a result, what once distinguished us increasingly resembles a utility.
This shift is not merely technical.
It is personal.
People are not afraid of machines being smarter. Rather, they are afraid of losing the thing their identity was attached to. They fear replacement less than they fear becoming ordinary.
Devaluation Before Replacement
The first visible impact of this shift is not replacement. It is devaluation.
Code still matters, but writing it is no longer a strong signal. Architecture diagrams still exist, yet generating them has become trivial. Analysis is everywhere, which inevitably makes any single analysis less impressive.
As this plays out, experience begins to blur. Juniors produce work that looks senior, while seniors feel their edge shrinking. Being “smart” becomes harder to see, and being “experienced” less impressive on its own.
Meanwhile, decision-makers struggle to tell quality from plausibility. Professionals start to feel interchangeable, and experience loses its immediate authority.
This tension produces strange reactions.
Some deny the shift and dismiss AI as “just a tool”. Others overuse it and mistake speed for thinking. Many feel threatened but cannot articulate why, which is why they cling to complexity, jargon, and over-engineering. Not because it helps, but because it restores a sense of control.
The real loss here is status. Effort no longer maps cleanly to value, and value no longer follows effort. Visibility, therefore, no longer guarantees relevance.
That discomfort cuts deepest in professions that built prestige on expertise.
From Knowledge to Judgement
This is where the reframing must happen.
Our value was never in knowledge itself. It was always in what we chose to do with it.
As output becomes cheap, value shifts upstream, away from execution and towards intent.
From delivery to curation.
From answers to direction.
From execution to judgement.
AI can generate options, but it cannot own consequences. It can optimise locally, yet it cannot reason systemically. It can assist decisions, but it cannot be accountable for them.
Judgement, therefore, is not intelligence.
It is intelligence that accepts responsibility.
Machines do not carry blame. They do not explain decisions to boards, customers, or teams. Nor do they absorb second-order effects years later.
In software architecture, this distinction becomes obvious. The hard part is not producing a design, but understanding constraints, incentives, failure modes, long-term cost, and organisational reality. Often, the most valuable decision is knowing when not to build.
That requires taste.
Taste is not aesthetics. Rather, it is the ability to look at ten acceptable solutions and say, “This one will age badly.” It is understanding trade-offs instead of pretending they can be eliminated. It is recognising fragility before incidents force hard lessons.
That burden, despite everything, remains ours.
In the positive scenario, architects and senior experts are no longer paid to produce more artefacts. Instead, they are paid to prevent the wrong things from being built.
Because of that understanding, they are involved earlier, when decisions are still cheap. Success looks quieter. Fewer diagrams. Fewer heroic rescues. Fewer “we’ll fix it later” moments. Their value lies in framing irreversible choices, exposing trade-offs, and making implicit risks explicit.
Organisations, in turn, pay a premium for judgement when decisions are expensive to reverse.
When failure modes are asymmetric, and one wrong decision costs far more than others.
When technical choices carry operational, regulatory, or reputational consequences.
In such environments, execution is not the bottleneck. Direction is. Judgement matters precisely because mistakes compound.
There are, however, scenarios where judgement is not, and should not be, highly valued: proofs of concept, short-lived tools, or low-risk experimentation.
When decisions are reversible and failure is cheap, speed beats deliberation. Recognising this boundary keeps the argument honest.
DIKW+J
This shift becomes clearer when viewed through how we traditionally model understanding itself.
The classic DIKW model (Data, Information, Knowledge, Wisdom) ends with wisdom. Wisdom explains reality, but it does not change it.

What it misses is judgement.
Judgement is the act of choosing despite uncertainty. It is where understanding meets values, constraints, and responsibility. Two teams can share the same wisdom and still make opposite decisions, both rational, because judgement integrates things data cannot: risk tolerance, organisational reality, and long-term cost.
AI compresses the lower layers of DIKW at scale. It accelerates data processing, pattern recognition, and even wisdom-like outputs. What it cannot do is own a decision.
Judgement is not intelligence.
It is commitment.
And commitment, by definition, requires someone willing to carry the consequences.
From Sailors to Navigators
For years, we trained ourselves to be excellent sailors.
We learned frameworks, patterns, tools, and best practices. We optimised routes, improved speed, and handled storms better than before.
AI is an exceptional sailor: tireless, fast, and increasingly precise.
But sailors do not choose destinations.
Navigators choose direction, define purpose, and weigh whether the destination is worth the cost. When the map is wrong and the sea changes, responsibility lands with them.
AI accelerates sailing, but navigation remains human.
The mistake is competing with machines on the sailor’s job and then being surprised by the outcome.
This shift is not comfortable.
There will be fewer people needed for pure execution. The remaining roles will demand more context, more judgement, and greater exposure to ambiguity. As a result, there will be less hiding behind output and more standing behind decisions.
Judgement does not eliminate risk. It compresses uncertainty into conscious, owned choices.
Judgement does not scale easily, which limits how many people can operate at that level, and operating there is harder.
The architects, engineers, and leaders who thrive will not be those who produce the most, but those who see the system, understand incentives, and can explain why this path makes sense.
Competing with machines on the “what” is a losing strategy. The only durable advantage lies in mastering the “why”, in understanding intent, consequences, and direction rather than output.
That, in turn, requires investing in systems thinking, judgement, and taste. Not as soft skills or personal preferences, but as capabilities that determine whether complex systems hold or collapse.
This shift does not erase meaning from work. It concentrates it, by removing the comfort of execution and replacing it with responsibility.
The result is less comfort, more responsibility, and ultimately, more impact.
A recalibration many of us were overdue to face, and one that will ultimately define who remains relevant in an AI-saturated world.