Your AI Agent Doesn't Need a Better Prompt. It Needs a Better Manager.

A hand stamping APPROVED on a flawed document without looking

A few months ago, I was on a coaching call with a software engineering lead in Istanbul. He’d just gotten a new director, someone from a product management background who’d never written a line of production code. The director was smart, polished, had all the right credentials. But three months in, the team was miserable.

“He can’t tell the difference between something that’s hard and something that’s slow,” the engineer told me. “He looks at the output and says ‘looks good’ because he literally cannot tell whether it’s good.”

I’ve heard versions of this story for thirteen years. In Amsterdam, Berlin, Seattle, across hundreds of coaching sessions, and nearly two decades in consulting. The pattern doesn’t change. When a manager hasn’t done the work, the team feels it. Not because the manager is stupid. Because they can’t evaluate what they’re looking at.

Here’s what’s strange: the entire AI industry is about to reproduce this exact failure at massive scale, and almost nobody is framing it that way.

The conventional wisdom right now is that AI eliminates the need for domain expertise. Why learn to design when an agent can design for you? Why understand finance when AI can analyze the numbers? The promise is that anyone can orchestrate knowledge work without knowing how the work actually gets done.

That promise is wrong. And we already know why, because we’ve been watching the same dynamic play out between managers and their teams for decades.

The Mirror

There’s a reason people complain about managers who’ve never done the work. It’s not snobbery. It’s functional.

A good manager orchestrates. They set direction, evaluate output, course-correct, push back on bad estimates. The best managers I’ve worked with have one thing in common: they’ve done the work at some point. Not necessarily recently. But they’ve been close enough to it that they know what good looks like. They’ve produced it.

When that’s missing, specific things break. The manager can’t tell the difference between a genuinely hard problem and procrastination. They can’t evaluate whether a solution is elegant or just functional. They can’t push back on timelines because they have no internal model for how long things should take.

The team figures this out fast. And they either exploit it or resent it. Usually both.

Now think about what happens when you put someone in front of an AI agent and ask them to orchestrate knowledge work they’ve never done.

Same dynamics. Same failure modes.

If you don’t know the domain, you can’t tell whether the AI gave you something brilliant or something that merely sounds brilliant. You can’t catch when it’s confidently wrong. You can’t ask the follow-up question that turns a generic response into a useful one. You just accept the output, like a director who rubber-stamps whatever his engineering team delivers because he can’t read what they built.

The AI calibrates to you. Ask a surface-level question, get a surface-level answer. Ask with precision, using the right terminology, challenging assumptions, flagging edge cases, and the conversation goes somewhere completely different. The person with domain knowledge doesn’t just get better answers. They get a different AI.

Polished Garbage

This gets worse before it gets better.

We’re heading toward multi-agent systems with a human in the loop. We’re building one at work right now. The human orchestrator decides which agent handles what, when to override, and how to reconcile conflicting outputs from different models.

This is management. Pure management. And if the orchestrator doesn’t understand the domain, the system produces what I’ve started calling polished garbage: outputs that look right, read well, and are wrong in ways only someone who knows the work would catch.
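
To make the shape of the problem concrete, here’s a minimal sketch of such an orchestration loop in Python. The `Agent` interface and the `review` callback are hypothetical, invented for illustration, not the system we’re building. Notice that every part of it can be automated except one function.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Agent:
    """A stand-in for any model-backed worker (hypothetical interface)."""
    name: str
    run: Callable[[str], str]  # task in, draft out

def orchestrate(
    task: str,
    agents: list[Agent],
    review: Callable[[str, list[str]], str],
) -> str:
    """Fan a task out to several agents, then hand ALL drafts to a reviewer.

    A framework can route, retry, and reconcile formats. The one thing it
    cannot supply is the `review` function: the judgment call on which
    draft is actually right.
    """
    drafts = [agent.run(task) for agent in agents]
    return review(task, drafts)

if __name__ == "__main__":
    agents = [
        Agent("model_a", lambda t: f"[model_a draft for: {t}]"),
        Agent("model_b", lambda t: f"[model_b draft for: {t}]"),
    ]
    # A naive reviewer rubber-stamps whatever comes back first.
    pick_first = lambda task, drafts: drafts[0]
    print(orchestrate("summarize the Q3 incident report", agents, pick_first))
```

Swap in real model calls and the structure doesn’t change. The `review` function is the whole game, and it’s exactly the part you can’t hand to someone who has never done the work.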

I think about this every time someone tells me you don’t need to learn anything anymore because AI knows everything. That argument confuses having access to answers with knowing which questions to ask. Knowledge isn’t a warehouse you raid. It’s a filter. It’s the thing that lets you spot when an answer is subtly wrong, when a confident summary is hiding a critical gap, when the output is 90% right in a way that makes the remaining 10% dangerous.

A manager who has never done the work can’t pattern-match against what good looks like.

Neither can an AI orchestrator.

Who Wins

The people who will get the most out of AI agents are, counterintuitively, the people who could have done the work themselves. Maybe not as fast. Maybe not at scale. But they understand the terrain well enough to manage it.

The people who will get the least out of AI agents are the ones who think AI removes the need to understand the work. They’ll get output. It’ll look polished. And they won’t know it’s wrong until it’s too late, which, if you’ve ever watched a non-technical manager sign off on a broken architecture, should sound familiar.

That Istanbul engineering lead I mentioned at the top? His director eventually approved a system design that looked clean on paper but collapsed under load in staging. No one who’d actually built distributed systems would have signed off on it.

We’re about to watch that happen with AI at organizational scale. Not because the tools are bad. Because the people managing the tools don’t know what they’re managing.

Your AI agent doesn’t need a better prompt. It needs someone who understands the work.

Daron Yondem advises senior technology leaders on AI-driven organizational transformation.