Everyone Feared AI Would Replace the Expert. It’s Doing the Opposite.
--
When AI removes the transcription layer, the judgment layer becomes the constraint, and constraints capture value.
There is a moment every senior product person recognises. You sit down to write a PRD. You open a blank document. And you realise the hard part isn’t the writing — it’s knowing what to put in.
What belongs in scope and what doesn’t. Whether this use case warrants its own section or folds into another. Where the product delta for one customer segment is meaningfully different from the others, and where it’s just noise.
The writing is maybe twenty percent of the work. The other eighty percent is a series of judgement calls that require you to hold the entire system in your head simultaneously — existing corpus, incoming requirements, product variants, downstream dependencies — and decide, one piece at a time, where everything goes.
That’s not writing. That’s structured thinking at scale.
TL;DR
The common fear about AI in knowledge work is that it replaces the thinking. The reality is the opposite. AI removes the transcription layer — the effort of turning a decision into a sentence, a sentence into a paragraph, a paragraph into a consistent document. What remains is the layer that required expertise all along: deciding what’s true, what’s in scope, what the right structure is, and where the boundary sits.
That layer doesn’t get cheaper when AI handles the drafting. It gets more expensive — the decisions happen faster, the output volume is higher, and the cost of a wrong call compounds further down the pipeline. The domain expert doesn’t become less valuable. They become the rate-limiting factor.
The transcription layer and the thinking layer
Most knowledge work has two layers stacked so close together that people conflate them.
The transcription layer is turning what you know into text: writing the sentence, formatting the table, making the document consistent. Ensuring the terminology in section 3 matches section 7. Updating every instance of a product name that has changed.
The thinking layer is deciding what to write in the first place. Is this use case common across all five product flavours, or does it differ for renewal customers? Does this change request belong in the onboarding user story or the eligibility one? If a reviewer comment says “remove this from scope,” where does it go — out of scope, or into the roadmap?
These two layers have always been bundled together, which is why documentation has always been slow. Writers spend enormous effort on the transcription layer precisely because it is mechanical enough to feel productive and just hard enough to require concentration.
AI unbundles them. The transcription layer moves to the machine. The thinking layer stays human.
What the thinking layer actually looks like
This is worth making concrete, because “AI does the writing, humans do the thinking” undersells how demanding the thinking layer is.
Consider a single decision from a recent documentation exercise: a change request modified how an address field was parsed from a government identity service. The change touched three different user stories. In two, the new parsing logic belonged in the main flow — it applied to all product variants. In the third, it was an alternate flow that only triggered under a specific condition.
No automation tool can make that call. It doesn’t know the downstream architecture. It doesn’t know which user stories are shared across products. That knowledge lives in the head of someone who has spent time understanding the system.
The judgement call takes thirty seconds once you know the system. Writing it up correctly — updating three sections, making the cross-references consistent, retiring the old requirement — takes considerably longer. AI handles the writing. The thirty-second call is still yours.
Multiply that across a full documentation exercise — scope boundaries, namespace decisions, delta promotions, out-of-scope determinations — and you have a picture of what the thinking layer actually contains. It is not a small residue. It is the entire substance of the work. The transcription layer was always padding.
Why domain expertise compounds
When AI removes the transcription layer, the thinking layer doesn’t shrink to match. It grows: higher output velocity means more decisions per hour, and each decision carries heavier consequences because downstream artefacts are generated faster.
In a traditional documentation cycle, a slow transcription layer throttles the thinking layer. You can only make as many consequential decisions as you have time to write up. Remove that bottleneck, and the bottleneck moves upstream — to the quality of judgement calls being made.
A wrong scope decision that used to take three weeks to propagate into a finished document now propagates in an afternoon. A misclassified use case that used to be caught during a slow review cycle now appears in a finished, well-formatted, internally consistent PRD before anyone has had time to question it.
This is why domain expertise becomes more valuable, not less, in an AI-augmented workflow. The speed amplifies both the good calls and the bad ones. The operator who knows the system well enough to make the right calls quickly is worth considerably more than before. The operator who makes the wrong calls quickly is considerably more expensive.
The operator’s job shifts from “review and correct” to “decide and verify” — checking polished output against a full system model rather than against a style guide. That is a harder job than reviewing a messy first draft. It requires deeper domain knowledge and faster reasoning. The reward is proportional: a good operator produces better output in a day than they used to in a week.
The AiTDP implication
At Trustt, our AI-augmented product development process — AiTDP — was originally designed around a downstream pipeline: PRDs in, Technical Requirement Documents out, code generation downstream of that, with human checkpoints at each handoff.
What running this at scale has made clear is that the quality of everything downstream is determined by the quality of thinking at the PRD stage. A PRD with a wrong scope decision produces a TRD with a wrong module boundary. A TRD with a wrong module boundary produces code with a structural defect. Cheap to fix at the PRD stage if the operator has the system comprehension to catch it. Expensive at the code stage.
The pipeline is only as good as its most upstream judgement call. AI makes the pipeline faster. The operator makes it correct.
This is not a caveat about AI limitations. It is a statement about where value creation actually sits in a knowledge-work pipeline. The transcription layer was never where the value was. It was just where the time went.
Now that the time has moved, the value is visible.
Two things to do differently on Monday
Redefine what senior product people are for. Not reviewing formatting or ensuring consistency — AI does that. They are there to make scope calls, validate structure, and catch wrong decisions before they propagate downstream. Design their time accordingly.
Invest in system comprehension as a team capability. The bottleneck in an AI-augmented documentation workflow is the operator’s ability to hold the full system in working memory. Architectural onboarding is not optional; it is the highest-leverage investment you make in a new PM.
The writing was never the hard part. We just spent a lot of time on it because we had no other option.
Now we do.
This is the second essay in a three-part series on AI-augmented product work in financial services. The first piece — “Spec Debt Compounds Faster Than Tech Debt” — looks at why documentation drift happens and the four-step workflow to fix it. The third piece — “The Compaction Test” — covers how to design AI-augmented workflows so the work survives any single session ending.