Avoiding Competitive Destruction from AI
One of the big topics in the venture ecosystem is “where will value accrue” in AI.
We infra-maximalists say it will accrue at the infra layer, if not now then in a few years. Others say value will mostly go to existing companies with distribution, as evidenced by MSFT’s ability to raise prices on Office due to Copilot. Still others say the winners will be upstarts focused on vertical applications.
But what if all those answers are wrong?
There’s a quote from Charlie Munger (full speech here) that highlights a real risk for all of us in the venture ecosystem - that the value generated by AI will go almost entirely to end consumers [1]:
The great lesson in microeconomics is to discriminate between when technology is going to help you and when it’s going to kill you. And most people do not get this straight in their heads. But a fellow like Buffett does…
For example, when we were in the textile business, which is a terrible commodity business, we were making low-end textiles—which are a real commodity product. And one day, the people came to Warren and said, “They’ve invented a new loom that we think will do twice as much work as our old ones.”
And Warren said, “Gee, I hope this doesn’t work because if it does, I’m going to close the mill.” And he meant it….
And he knew that the huge productivity increases that would come from a better machine introduced into the production of a commodity product would all go to the benefit of the buyers of the textiles. Nothing was going to stick to our ribs as owners.
That’s such an obvious concept—that there are all kinds of wonderful new inventions that give you nothing as owners except the opportunity to spend a lot more money in a business that’s still going to be lousy. The money still won’t come to you. All of the advantages from great improvements are going to flow through to the customers.
Conversely, if you own the only newspaper in Oshkosh and they were to invent more efficient ways of composing the whole newspaper, then when you got rid of the old technology and got new fancy computers and so forth, all of the savings would come right through to the bottom line.
Comparing software to textiles and newspapers may sound crazy, but hear me out.
Let’s say you’re a growth-stage company that has spent years developing your product, and you’ve followed all the standard advice - capture the workflow, focus on your unique insight, etc. And that has worked great until now! But now, by building around the OpenAI API, a new startup can achieve 50% parity within weeks. With the barrier to entry significantly reduced, your head start has vanished almost overnight.
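To make that concrete, here’s a minimal sketch of how thin such a wrapper can be, assuming the official OpenAI Python SDK (v1+). The contract-review use case, prompt, and function name are hypothetical stand-ins for whatever workflow the incumbent spent years capturing:

```python
# A hypothetical "50% parity" competitor: one prompt around a foundation
# model API. Assumes the OpenAI Python SDK and an OPENAI_API_KEY env var.
from openai import OpenAI

client = OpenAI()

def summarize_contract(text: str) -> str:
    """Approximates a workflow the incumbent spent years building."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any capable foundation model works here
        messages=[
            {
                "role": "system",
                "content": "You are a contracts analyst. Summarize the key "
                           "terms, obligations, and risks in plain English.",
            },
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content
```

Twenty-odd lines won’t replicate years of edge-case handling, but they don’t have to - they only have to be good enough to reset the buyer’s price expectations.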
Another example: your product has a manual component, whether around data aggregation/entry, communication, whatever. You start to leverage some new AI-based solution that you believe will improve your gross margin by 2% overnight. Sounds amazing! But as Munger points out, if you can drive efficiency that quickly through an invention sold by someone else, others can too. So unless you have a massive existing distribution that you now serve more efficiently, you don’t gain a competitive advantage.
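A toy back-of-the-envelope (with made-up numbers, not figures from the post) shows why the savings don’t stick:

```python
# Toy arithmetic behind Munger's loom story, assuming a competitive market
# where rivals adopt the same AI tool and cut prices by their savings.
price, cost = 100.0, 40.0
profit = price - cost                       # $60 of gross profit per unit

cost_with_ai = cost - 2.0                   # the tool trims $2 of cost
profit_if_only_you = price - cost_with_ai   # $62 - if only you adopt it

# But the tool is sold to everyone, so competitors drop prices by their
# $2 of savings, and you have to match them to keep your customers.
price_matched = price - 2.0
profit_matched = price_matched - cost_with_ai

print(profit, profit_if_only_you, profit_matched)  # 60.0 62.0 60.0
```

The efficiency gain is real, but under price competition it flows through to the buyer, exactly like the savings from the new loom.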
In both scenarios, like in Munger’s anecdote about textiles, your long-term pricing power and potential for sustainable differentiation have decreased, while consumers benefit from increased choice and price reductions over time.
This all leads to an existential question: If the foundation model providers are swallowing more and more of the application layer, are most of the products that we’ve held up as unique and critical (or software itself!) in fact commodities in this new world order? If they are, the competitive destruction from AI is far greater than most of us are currently imagining.
To step back from the doom and gloom, I think Munger, in his analogy about Oshkosh, also presents a few paths that reduce the potential for competitive destruction:
Own the customer relationship (distribution)
Focus on a specific market (vertical application with existing budget)
Be the one selling the machinery (infrastructure or mission-critical applications)
The Oshkosh analogy assumes that newspapers continue to exist, which parallels the debate over whether AI is a sustaining or a disruptive innovation. But setting that aside for a moment, my hypothesis is that companies need to combine at least two of the three paths above to succeed moving forward, because there is too much competition to get away with relying on only one:
Own the distribution in a specific market -> favors the incumbents, especially at the application layer, who can leverage AI to transform their products and improve internal efficiency. [2]
Mission-critical applications/infrastructure for a vertical market -> can benefit incumbents and startups alike, with startups differentiating through deeper integration and accelerating with AI-based features. [3]
Have the widest distribution for your horizontal infrastructure -> most receptive to fundamental technological innovations. [4]
But if AI changes everything, shouldn’t these conditions apply to all enterprise startups? I think so, which is why I’m internalizing the following for companies without massive pre-existing distribution:
If you’re going vertical (user, market, etc.), it’s important to:
Deeply integrate your application into every system your customers use
Solve the unsexy or unprofitable problems that other companies won’t go after
Go down-stack enough to be foundational infrastructure for that vertical [5]
Both collect and generate proprietary data
Emphasize efficiency and profitability earlier in the company’s lifecycle
If you’re going horizontal, it’s helpful to:
Solve a problem that applies to every single company
Have an element of high technical risk (“Can it be built?”)
Emphasize adoption across teams and drive collaboration [6]
Have a simple GTM pitch [7]
Leverage the same data across multiple products [8]
Dramatically reduce configuration time so that users can spend their time pushing the product to its limits
There’s no “right” way to do things, and perhaps I’m force-fitting observations into my preexisting bias for infra, but I believe that the above conditions will be critical for building a durable business moving forward. Let me know what you think!
Thanks to Alexander Krey, Cack Wilhelm, Kenn So, and Zach Cherian for their early feedback on the ideas here, and PVZ for crystallizing my points.
[1] And Nvidia, shoutout Michael.
[2] This can include fast-moving startups like Jasper.
[3] Legal is getting all the buzz, but I’m personally excited by the prospects of “real-world” applications in Construction, Energy, and Industrials.
[4] The foundation model providers have done this, and we’re starting to see some exciting work from companies like Foundry and Together at the compute aggregation layer, but it remains to be seen who else in the AI ecosystem has the potential to do so. Within infra today, some companies taking that approach include Neon, Momento, Tailscale, Cribl, and Aiven.
[5] Our portfolio company Cortex is a great example of this.
[6] There are some topical (and fascinating) lessons from the early days of Datadog in Matt Turck’s interview with Olivier Pomel.
[7] Benn Stancil has an interesting anecdote about the Snowflake pitch in his post on The end of Big Data.
[8] Crowdstrike is a perfect example here. More to come in a future post about going multi-product.