Speed Is the Moat: What Perplexity's Growth Team Taught Me About Building in AI
AI · Growth Strategy · Startups


Forget network effects. Forget data moats. In AI, the only defensible advantage is moving faster than everyone else.

Berkeley ProductCon, 2026
"If we are not ahead of their product roadmap, if they're copying us instead of the other way around, you can go ahead and pronounce us dead."

That was Perplexity's VP of Growth at the Berkeley AI Conference, answering the question every AI startup gets: how do you compete with OpenAI? His answer was disarmingly simple. Stay ahead. Ship faster. Be more accurate. There is no moat except speed.

I've spent 15 years watching product teams try to build defensibility. Data moats. Network effects. Switching costs. Platform lock-in. These are the classics from every strategy deck I've ever written or reviewed. And in the AI era, most of them are eroding.

The Perplexity team's framing was the most honest thing I heard all weekend. Their CEO, who spoke at Berkeley a few weeks prior, said the moat is speed. And his VP of Growth described exactly how that shows up operationally.

Hiring for Speed as a Personality Trait

When asked how they build a growth system that matches the pace of product changes, the Perplexity VP described three things. The first was hiring: "Do we hire people that, frankly, get their dopamine hits off of speed and being able to ship and being able to see that impact as quickly as possible?"

This is not a platitude. It's a hiring filter. They are literally screening for people whose neurochemistry responds to shipping velocity. I've hired dozens of PMs and I can tell you: the difference between someone who is comfortable with ambiguity and someone who is energized by it is enormous. Perplexity is selecting for the second type exclusively.

The second was focus: knowing what to go all-in on and what to let play out. He was candid that "to a certain extent, it's a little bit of guessing and you hope your intuition gets you there." That honesty matters. In AI, nobody has enough data to make perfectly informed bets. The teams that win are the ones whose intuition is good enough to pick the right 3 out of 10 bets, and fast enough to kill the other 7 before they drain resources.

The third was organizational flatness. "Every single person at every layer is empowered. There is not a hierarchy of 'should I do this.'" The expectation is that you own your metrics and drive them aggressively. No permission-seeking. No approval chains.

The Unintuitive Experiment Results

The growth team's experimentation practice revealed something PMs should pay attention to. They run "an endless amount of experimentation" on answer quality and structure. What length of answer do users want? How should the UI present different types of information?

The results are consistently unintuitive. Cooking recipes? Users want long, blog-style answers. That runs counter to every "keep it concise" instinct a PM might have. Their onboarding flow? They've tested extensively whether to explain the product or just drop users in. The winning variant so far: people figure it out on their own.

This maps to something I believe deeply about AI product development: your intuitions from traditional software are wrong more often than they're right. The interaction patterns, the information-density preferences, the onboarding assumptions: all of it needs to be re-tested. If your AI product's UX decisions are based on analogies to pre-AI products, you're probably leaving growth on the table.

Accuracy as a Long Game

Perplexity's competitive positioning was the most interesting of any company's at the conference. They're not trying to be the most feature-rich or the most affordable. They're betting on accuracy.

"We are not here to make you feel good from every single answer. We're here to give you the most accurate answer to every question."

The VP made the case that 5-10% more accurate over a sustained period is the winning strategy. Not flashier. Not more features. More correct. This is a bet on compounding trust. Every time a user gets a better answer from Perplexity than from a competitor, the switching cost goes up, not because of lock-in, but because of earned confidence.
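The compounding argument is easy to see with a toy model. The numbers here are mine, not Perplexity's: suppose product A answers correctly 90% of the time and product B 85%. Assuming answers are independent, the chance a user has never been burned by a wrong answer decays much faster for B as query volume grows, which is exactly where "earned confidence" comes from.

```python
# Toy model of compounding trust (hypothetical accuracies, not real data).
def flawless_streak_prob(accuracy, n_queries):
    """Probability that all n answers were correct, assuming independence."""
    return accuracy ** n_queries

for n in (10, 50, 100):
    a = flawless_streak_prob(0.90, n)  # product A: 90% per-answer accuracy
    b = flawless_streak_prob(0.85, n)  # product B: 85% per-answer accuracy
    print(f"after {n:>3} queries: A {a:.2%} vs B {b:.2%}")
```

A 5-point per-answer edge looks small, but over a hundred queries the gap in "never saw a bad answer" probability is enormous. Trust compounds the same way interest does.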

For PMs, this is a useful framework. In AI products, the "aha moment" isn't a feature. It's the first time a user trusts the output enough to act on it without verifying. Everything you do in product development should be working toward that moment.

The Founder-Mentality Hiring Bar

When asked what they look for in hires, the Perplexity VP said: "Who's the underdog? Who's going to take big risks? We hire a lot of founders." He elaborated: "Are they willing to join the tiny company that is going up against Google? Have they tried to spin something up of their own and failed gloriously?"

"Failed gloriously" was the exact phrase. Not "failed gracefully" or "learned from failure." Gloriously. The implication is that they want people who have swung hard, missed, and are ready to swing again.

This is a hiring signal that I think applies beyond Perplexity. AI startups are moving so fast that the traditional interview signals (polished case studies, structured frameworks, methodical thinking) can actually be negative indicators. What these companies need are people who can make a call with 40% of the information and iterate their way to the right answer.

What This Means for Your Product Career

If speed is the moat, it has implications for how you build your own career in AI product management:

Ship things. Not decks. Not strategies. Things. Perplexity's VP was clear: they rarely hire for defined roles. They hire people who can "take on a lot of stuff" and grow into whatever the company needs next. The best way to demonstrate that is a track record of building and shipping, even if what you shipped was small or imperfect.

Develop your intuition through volume. The Perplexity team's good judgment comes from running a massive number of experiments. You can do the same thing at a smaller scale. Build prototypes. Test assumptions. Run your own experiments. The PM who has tested 50 ideas and killed 40 of them has better intuition than the PM who spent six months researching the perfect one.
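If you want to practice the mechanics, evaluating an experiment doesn't require infrastructure. Here's a minimal sketch of a two-proportion z-test, the kind of check you'd run on a variant like "long-form recipe answers" versus a concise control. The counts and the helper name are hypothetical illustrations, not anyone's real data.

```python
# Minimal two-proportion z-test using only the standard library.
from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Return (lift, p_value) for variant B vs. control A."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return p_b - p_a, p_value

# Hypothetical example: control converted 480/5000, variant 545/5000.
lift, p = two_proportion_z(conv_a=480, n_a=5000, conv_b=545, n_b=5000)
print(f"lift: {lift:+.3%}, p-value: {p:.4f}")
```

Running fifty of these on your own prototypes teaches you what a real effect looks like, and, just as usefully, what noise looks like.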

Get comfortable being wrong fast. Multiple speakers at Berkeley described a world where plans are obsolete within months. Adobe's PM leader said the roadmap from January feels irrelevant by November. If your process requires certainty before action, you'll be too slow.

Build for accuracy, not features. If you're working on an AI product, resist the urge to add capabilities. Instead, make the core thing more reliable. The product that users trust is the product that wins, and trust is built through consistent accuracy, not a longer feature list.

The AI product landscape is going to consolidate. Speed determines who's still standing when it does. That's true for companies, and it's true for the people building inside them.