The Impersonation Problem
Parkinson's Law says work expands to fill available time. AI now lets work expand to fill whatever a language model can generate — which is limitless.
I first noticed something wrong when a colleague replied to me using Claude. The em dashes, the rhythmic structure, the confident grasp of technologies he clearly didn't understand. I debated correcting him, then stopped. He wasn't really on the other side of the conversation.
Generative AI produces work that looks expert without being expert. There are two failure modes. First: novices produce senior-level work faster than their judgment can keep pace. Second: people generate artifacts in disciplines they never trained in. The second is riskier and less studied.
Cross-Domain Generation
People who can't code are building software. People who never designed a data system are designing data systems. Most of it doesn't ship. It gets built over hours, demoed internally with enthusiasm, used quietly, and occasionally surfaced to a client.
I have a colleague, not an engineer, who spent two months building a system whose design called for formal training in data architecture. He produced code, documentation, and visible progress, but he couldn't explain how any of it worked. The schemas were wrong from day one. Several of us knew. When we spoke up, even to a VP, he fought back, and his managers were too invested in the momentum to hear it. The work continues, and will until a stakeholder decides to stop paying for it.
The tool didn't make him a worse colleague. It made him able to impersonate a discipline for months. Institutional incentives bent toward letting him continue.
The Conduit Problem
Research confirms what users already sense: models are roughly 50% more agreeable than humans (Cheng et al., Science 2026). AI-literate users overestimate their own performance (Berkeley CMR). Novice productivity jumps by a third; experts barely benefit (NBER). The result is overconfident novices who cannot check their output for correctness, producing work that merely looks right.
This is output-competence decoupling. Previously, work quality signaled competence. Novice code crashed in novice ways. AI severs that link. The person becomes a conduit, routing output they can't evaluate.
Producing work and judging it have always been distinct skills, but doing the work was how you acquired the judgment. Now machines do the producing, and fewer people bother to acquire the judgment.
Slop on the Inside
Requirements documents that were one page are now twelve. Status updates balloon. Every artifact gets elongated by people who don't read what they produce, for readers who don't read what they receive. The cost of producing text is near zero while the cost of reading it keeps rising: readers must sift synthetic padding for the original signal. Each elongation seems rational in isolation, and it is rewarded, since readers trust longer AI explanations even when they are wrong. The collective effect is that signal gets harder to find.
This is a new form of slop, and a more expensive one, because the people producing it are salaried. Meanwhile the pipeline of future experts thins: the work that used to teach judgment is done by tools, and entry-level roles are being cut. The result is lots of motion and little creation.
What to Do
The discipline this calls for looks old-fashioned. Use the tool where you can verify its output. Never ask a model for confirmation; it agrees with everyone.
Generative AI excels where feedback is fast, approximate is fine, and the human remains final arbiter: drafting memos, generating examples, summarizing material you could verify. The human supplies judgment; the tool supplies throughput. That's human-in-the-loop, not human-as-conduit.
For firms: trust is a competitive advantage, and one that has only appreciated. Competitors are converting themselves into content pipelines. Deloitte already refunded $440k over an AI-hallucinated report. The reckoning will come. Firms doing real work will be able to charge for it. Those that hollowed themselves out will discover they sold off the very thing clients were paying for.
Expertise is being asked to look the other way: deliver faster, integrate the tools more deeply, get out of the way of colleagues "getting things done." Artifacts accumulate. Work does not. And somewhere a client opens a deliverable and may actually read it.


