
Steve Yegge’s “8 levels” chart gets repeated online as a ladder of tools:
IDE → agent → orchestrator.
But that might miss what Steve was actually trying to show.
The levels describe a day-to-day operating model for engineers: how much you trust the agent, how you review, and where you spend attention—on code diffs, on agent actions, or on orchestration (task decomposition, coordination, and verification).
Below is a practical, engineer-facing interpretation of the levels—plus a “taste” / time-horizon perspective from my conversation with Steve.
The part most people underestimate is where your attention goes.
Traditional dev work centers on producing and reviewing code. Agentic work gradually moves you toward supervising a production line.
If there’s one phrase that describes the ladder, it’s:
Diff reviewer → agent supervisor → team orchestrator
Climbing the ladder, the shift looks something like this:
At the bottom, fully manual coding becomes increasingly rare in fast-moving teams—not because it’s “bad,” but because throughput norms shift.
The early levels are “AI as a better assistant,” not “AI as a worker.”
The middle levels are where teams see big wins—and where silent quality drift can begin if verification is weak.
The work becomes less about code, more about supervision: “Is the agent doing the right things?”
Soon you’re no longer coding—you’re specifying outcomes and verifying results.
This is where multiplexing becomes addictive—there’s always another agent to spin up.
And it’s where people start asking: “How do I coordinate all of this?”
At the top, your job becomes less “writing software” and more building the production line that builds software.
One theme kept returning in my conversation with Steve: the gap between locally plausible output and globally good engineering.
Agents have improved significantly—especially at execution. But that progress doesn’t eliminate the gap; it shifts where it shows up.
When generation gets cheaper and faster, the cost of a wrong direction compounds sooner—but the cost of starting without a clear direction also drops.
Which is why humans remain the long-term compass.
More output → more need for judgment.
In previous waves of tooling, speed exposed bad thinking faster. Now, it can temporarily hide it. It becomes easier to ship plausible changes faster than a team can sense long-term consequences.
Time-horizon thinking becomes product quality: “Is this the right abstraction for the next 6 months?”
And context remains the hard limit. Agents don’t naturally carry your organization’s history, constraints, and tradeoffs unless you explicitly build that loop with specs, reviews, and quality gates.
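To make that loop concrete, here’s a minimal sketch, assuming a CLI-driven workflow. Every name here is hypothetical: `run_agent` is a placeholder for whatever coding agent you use, and `pytest`/`ruff` are stand-ins for whatever gates actually encode your team’s constraints—none of this comes from Steve’s chart or any specific product.

```python
# Minimal sketch of a human-built verification loop around an agent.
# Hypothetical names throughout: run_agent is a placeholder, and
# pytest/ruff are stand-ins for your team's actual quality gates.
import subprocess
from dataclasses import dataclass, field


@dataclass
class Spec:
    """Carries the context the agent won't carry for you:
    the goal, plus organizational constraints and tradeoffs."""
    goal: str
    constraints: list[str] = field(default_factory=list)


def run_agent(spec: Spec, workdir: str) -> None:
    """Placeholder: invoke whatever agent you use with the spec."""
    raise NotImplementedError


def tests_pass(workdir: str) -> bool:
    """Gate: the change must keep the test suite green."""
    return subprocess.run(["pytest", "-q"], cwd=workdir).returncode == 0


def lint_clean(workdir: str) -> bool:
    """Gate: the change must satisfy the team's style rules."""
    return subprocess.run(["ruff", "check", "."], cwd=workdir).returncode == 0


def supervise(spec: Spec, workdir: str, max_rounds: int = 3) -> bool:
    """Generate, then verify. Failed gates feed back into the spec
    rather than trusting locally plausible output."""
    gates = [("tests", tests_pass), ("lint", lint_clean)]
    for round_no in range(1, max_rounds + 1):
        run_agent(spec, workdir)
        failures = [name for name, gate in gates if not gate(workdir)]
        if not failures:
            return True  # passed every gate; safe to hand to review
        spec.constraints.append(f"round {round_no}: failed {failures}")
    return False  # budget exhausted; escalate to a human
```

The specific gates are interchangeable; the point is that organizational context lives in an explicit, machine-checked artifact that travels with every agent run, rather than in someone’s head.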
The failure mode shifts: from “can’t do it” → to “can do it in the wrong direction”
This idea connects closely to something I’ve written about before: platform engineering as omakase.
In fine dining, omakase means trust—“I’ll leave it up to you.”
But that trust only works because the chef has taste built through years of feedback and experience.
The same applies here. As we delegate execution, the role shifts toward curating outcomes and earning trust through judgment.
A moment from my conversation with Steve stayed with me:
Models are strong “in the now.”
Engineers build judgment across time.
That’s why senior engineers don’t just review what changed—they evaluate what it will do to the system over time. When engineers talk about tech debt, they don’t just describe how the system looks today. They also consider how it got that way, what constraints it encodes, and what it will cost to change six months from now.
At higher levels of AI adoption, this matters more—not less.
Because mistakes compound faster when generation is cheap.
If “taste” is the long-term compass, what happens when speed no longer forces you to use it?
As teams move up the levels, the biggest shift isn’t just in tools—it’s in how engineers think and work.
The hidden shift is this:
from intentional engineering → to endless iteration
Agents are getting better at generating code, wiring systems, and executing tasks. But that progress is easy to misread.
It’s tempting to overestimate what the agent contributes—and underestimate what still holds everything together.
Because the real value engineers provide doesn’t disappear.
It moves:
from writing code
→ to choosing direction
→ to setting constraints
→ to maintaining coherence over time
When generation becomes cheap, iteration starts to feel like progress—even when direction is unclear.
There’s always one more prompt. One more refinement. One more “almost right.”
So intention gets deferred—because it’s no longer required to make progress.
Decisions get postponed.
And engineers gradually shift: from designing systems → to supervising outputs. Not because they choose to, but because the system no longer forces early clarity.
This is where the role quietly changes.
Not a sudden replacement—but a slow change in posture:
from owning judgment → to managing iteration
The system keeps moving fast.
But clarity starts to erode.
So the risk isn’t immediate replacement.
It’s that, long before that happens, engineers begin to operate like the systems they supervise:
optimizing for speed, responsiveness, and continuous output—
while intention, structure, and long-term thinking fade into the background.
At higher levels of AI adoption, that’s exactly the work that matters most.
The ladder doesn’t just change what you do:
diff reviewer → supervisor → orchestrator
It changes how you think.
And that’s where things can go wrong.
The system speeds up.
The thinking thins out.
And that’s how local speed turns into system-level drag:
more output, less understanding.