At the Berkeley AI Conference, Box CTO Ben Haines said something that stopped me cold: "A really good programmer used to be very good at writing code. Nowadays, a really good programmer is very good at managing agents to write good code."
He wasn't being abstract. He described his bleeding-edge engineers arriving at work, kicking off ten to twenty agents simultaneously, then spending the rest of their day evaluating output. Cancel this branch. Revert that one. This one is good, keep going. The skill isn't writing code anymore. The skill is managing a fleet of autonomous workers who don't have feelings but absolutely have opinions about how to interpret your instructions.
I've spent 15 years managing product teams. Cross-functional pods, remote teams, design sprints with strong personalities in the room. And the single biggest insight I took away from Berkeley is this: managing AI agents uses the exact same muscle as managing people, just without the emotional overhead.
The Parallels Are Not Metaphorical
When Haines was asked about the difference between managing agents and managing humans, his answer surprised the room. He said it's "weirdly similar." Give an agent vague instructions, you get vague output. Give it sophisticated context, you get sophisticated results. Same as any direct report you've ever had.
Here's where it gets interesting. He noted that saying "please" actually changes model output. That giving the agent background information, the way you'd brief a new team member, produces measurably better work. And that if you give it "really stupid instructions, it'll give you a stupid result," because the model mirrors how you talk to it and infers what kind of answer you must be after.
That last part is critical. The models are trained on human communication. They respond to the same patterns that humans respond to. Clear objectives. Relevant context. Explicit success criteria. These aren't prompt engineering tricks. They're management fundamentals.
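Those three fundamentals translate directly into how you structure a brief. Here's a minimal sketch; the `build_brief` helper and its field names are illustrative, not any real API, and the sample context is invented:

```python
# Hypothetical sketch: structuring an agent brief the way you'd brief a new hire.
# build_brief and all field names are illustrative assumptions, not a real API.

def build_brief(context: str, objective: str, success_criteria: list[str]) -> str:
    """Assemble a prompt that front-loads context, states the objective,
    and makes success criteria explicit."""
    criteria = "\n".join(f"- {c}" for c in success_criteria)
    return (
        f"Background:\n{context}\n\n"
        f"Objective:\n{objective}\n\n"
        f"Success criteria:\n{criteria}"
    )

brief = build_brief(
    context="We sell a B2B content platform; this quarter's priority is enterprise retention.",
    objective="Draft a one-page competitive teardown of our top rival.",
    success_criteria=["Covers pricing and packaging", "Cites only public sources"],
)
print(brief)
```

The point isn't the code; it's that a brief with these three sections is the same artifact whether the recipient is a new hire or an agent.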
The Micromanager's Revenge
Here's a take I didn't expect to hear from a CTO: micromanagers might finally have their moment.
Haines pointed out that one of the biggest upsides of micromanaging agents is that they don't get mad at you. You can be as detailed, as demanding, as specific as you want, and the agent just does the work. No passive-aggressive Slack messages. No one-on-ones to repair the relationship. Just output.
I've always been the kind of PM who leans toward tight feedback loops and high-context communication. In people management, there's a fine line between that and micromanagement. With agents, that line disappears. The PM who provides exhaustive context, checks output frequently, and redirects quickly isn't a micromanager. They're an effective agent operator.
This is a real competitive advantage. If you've spent years developing the ability to give clear briefs, decompose ambiguous problems into concrete tasks, and evaluate whether output matches intent, you are already better at agent management than most engineers.
What This Means for PMs Right Now
The conference made one thing clear: agent management is not a future skill. It's a present one. Multiple speakers noted that the people who are struggling with AI agents aren't lacking technical knowledge. They're lacking management reps.
Haines was blunt: "If you can't get it to do something that you want, then probably you're not managing it properly." He compared it to the early days of Google, when certain people were dramatically better at searching for information. Those people were temporarily very valuable. Then everyone caught up. The same thing is happening with agents right now, and the window to build the skill is open.
Here's what I'd tell any PM who wants to get ahead of this:
1. Treat agent sessions like 1:1s with a new hire. Give background. Explain the context of the project, not just the task. Share what good output looks like. The more you front-load, the less you correct on the back end.
2. Build your management reps. The only way to get good at this is volume. Don't wait for the perfect use case. Start with the messy, low-stakes stuff. Draft a competitive analysis. Synthesize a batch of user interview transcripts. Rewrite a PRD in a different voice. The goal isn't perfect output. It's building your intuition for how to direct, evaluate, and redirect.
3. Document your agent playbook. Every good manager has a system for onboarding people. Build the same thing for agents. What context do they need? What format do you want output in? What are your quality criteria? I keep a project in Claude loaded with my performance reviews, my writing style, my org's priorities. That's not a prompt library. That's an onboarding doc.
4. Get comfortable with parallel management. The engineers Haines described are managing ten agents at once. PMs should be thinking the same way. Kick off a market analysis in one thread, a PRD draft in another, a competitive teardown in a third. Evaluate in parallel. This is portfolio management, and it's a skill PMs already have.
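The parallel pattern in step 4 can be sketched in a few lines. This is a hedged illustration, not a real agent framework: `run_agent` is a stand-in stub for whatever API call your agent of choice exposes.

```python
# Sketch of "parallel management": kick off several agent tasks at once,
# then collect the drafts for review. run_agent is a hypothetical stub
# standing in for a real agent/LLM API call.
import asyncio

async def run_agent(task: str) -> str:
    """Stub agent: in practice this would be an API request to a real agent."""
    await asyncio.sleep(0.01)  # simulate the agent doing the work
    return f"[draft] {task}"

async def manage_fleet(tasks: list[str]) -> dict[str, str]:
    """Launch every task concurrently, then map each task to its draft."""
    drafts = await asyncio.gather(*(run_agent(t) for t in tasks))
    return dict(zip(tasks, drafts))

results = asyncio.run(manage_fleet([
    "market analysis",
    "PRD draft",
    "competitive teardown",
]))
for task, draft in results.items():
    print(task, "->", draft)
```

The structure mirrors the workflow Haines described: launch everything, then spend your attention on evaluating what comes back, not on producing it.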
The Uncomfortable Truth
The part nobody said out loud at Berkeley, but everyone was thinking: if you can't manage agents well, your value proposition shrinks dramatically. Not because agents replace PMs. They don't. But because the PMs who can manage agents will operate at 5-10x the throughput of those who can't. And hiring managers know it.
Perplexity's VP of Growth said they hire people who "get their dopamine hits off of speed." Intercom's head of solutions said they've restructured their team three times in one year to keep up with the pace of AI. OpenAI's enterprise lead used ChatGPT study mode to learn an entire industry vertical in four months and close a major partnership with UCSF.
The speed advantage isn't theoretical. It's already showing up in hiring decisions, promotion velocity, and team structure.
If you've spent years learning how to give clear direction, evaluate output quality, and course-correct fast, you're not behind the curve on AI. You're ahead of it. The question is whether you're putting those skills to work on agents, or waiting for someone else to figure it out first.
