In search of the right funding structures for the AI era
In the traditional SaaS model, you pay high upfront development costs and then amortize them over as many users as possible at relatively low marginal cost. AI inverts this relationship: going to market is easier, but the economics get harder as you scale, because your most engaged users consume the most inference and drive the highest variable costs. These "tokenomics" run in the opposite direction of SaaS economics, and it's not yet clear you can capitalize tokens the way you could capitalize software development.
The obvious counterargument is that inference costs are falling fast, roughly 10x per year, and that this problem resolves itself on a timeline well within a normal venture fund's horizon. But cheaper inference doesn't actually reduce your total spend per user because as compute gets cheaper, people use dramatically more of it. Lower costs unlock new use cases, heavier usage patterns, and broader deployment across an organization, so the per-token cost falls but total inference spend per user rises. Your user cohorts remain expensive even as the unit economics of any individual call improve.
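To make that arithmetic concrete, here's a toy model with entirely hypothetical numbers: per-token price falls 10x per year, but per-user consumption grows faster as cheaper inference unlocks heavier usage, so total spend per user rises even as unit costs collapse. The starting price, token volume, and growth rates below are illustrative assumptions, not data.

```python
# Toy model of the "tokenomics" dynamic. All numbers are hypothetical:
# per-token price falls 10x per year, while per-user consumption grows
# 15x per year as cheaper inference unlocks heavier usage.

def spend_per_user(year, price0=10.0, tokens0=1_000_000,
                   price_decay=10.0, usage_growth=15.0):
    """Annual inference spend per user. price0 is $ per 1M tokens."""
    price = price0 / (price_decay ** year)      # unit cost: falling fast
    tokens = tokens0 * (usage_growth ** year)   # consumption: rising faster
    return price * tokens / 1_000_000

for year in range(4):
    print(f"year {year}: ${spend_per_user(year):,.2f} per user")
```

Under these assumptions, spend per user goes $10.00, $15.00, $22.50, $33.75: the per-call economics improve by orders of magnitude while the cohort gets 50% more expensive every year.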
In effect, you're running down an up escalator.
And this compounds with a competition problem that SaaS never faced at the same intensity: when the cost of building software approaches zero for everyone, feature differentiation compresses, and you have to defend lifetime value through workflow integration and data advantages rather than product surface area. The companies that build those moats can eventually price on value delivered rather than tokens consumed, but getting to that position requires sustained investment through a period where your unit economics are negative and unpredictable, which is a fundamentally different problem from the one SaaS companies faced, when unit economics were negative but followed a curve that quickly became well understood.
A fair objection here is that this dynamic doesn't describe all AI companies equally. Many AI businesses are building on inference as a commodity input and competing primarily on data, distribution, and workflow integration, where the marginal cost dynamics look much more like traditional SaaS. The financing problem I'm describing hits hardest at companies where user engagement directly scales inference consumption, not companies where AI is an embedded feature within a product that charges on seats or outcomes. But the companies where this dynamic is most acute are also the companies building the most transformative products, the ones where the AI is the product rather than a feature within it, which means the financing gap sits precisely where the most consequential value creation is happening.
This creates a financing question that the traditional venture and SaaS playbooks aren't well-suited for: how do you fund successive tranches of user cohorts over a long enough horizon to realize returns from productivity gains while your cost structure is still stabilizing and your users are consuming more, not less, as the technology improves?
In SaaS, you could afford to be unit-economics-negative because your SG&A stayed flat or predictable and you were amortizing across users with a repeatable GTM motion. The mature version of that playbook looks like General Catalyst's Customer Value Fund: they deploy non-dilutive capital into a predictable go-to-market motion, you acquire the users, and as profits materialize you pay it back. That model depends on the payback curve being modelable, which in AI it often isn't, not because the returns won't come but because your cost base is a moving target, shifting with every improvement in the underlying models as usage expands to fill whatever capacity the efficiency gains create.
So the question now becomes: how do you produce a return on deployment when the path to stable unit economics runs through a longer and less certain period than venture has historically been willing to fund, and where the cost structure keeps evolving as usage patterns expand in response to every efficiency gain?
I think the winners in this space will be growth equity and slightly-above-growth-equity investors alongside long-duration seed investors: capital structured to underwrite a 7-10 year path to margin stability rather than a 3-5 year path to an exit, and patient enough to sit through the period where falling per-unit costs and rising aggregate consumption work against each other, before workflow moats and value-based pricing eventually stabilize the equation.
As a result, I think the capital stack for AI companies needs to be rebuilt from scratch rather than adapted from SaaS playbooks.
Venture has always adapted its instruments to new dynamics: participating preferred for capital-intensive hardware, YC's innovative SAFE for speed at seed, Pipe's (RIP) revenue-based financing for SaaS. Why can't it adapt again? I think the answer is that those adaptations were modulations on a shared underlying assumption: that unit economics would become predictable within a fund's deployment horizon. The AI margin-convergence problem violates that assumption at a level where incremental term-sheet evolution isn't sufficient.
You're not adjusting the terms of a bet whose structure you understand; you're trying to underwrite a fundamentally different payoff curve.
The winners on the investor side will probably be the ones who design new instruments rather than the ones who stretch existing ones. The venture model doesn't just need longer timelines, it needs different return expectations, different signaling frameworks for follow-on decisions, and different LP relationships. And the debt question that may be looming in the back of your mind isn't really "should debt come in earlier" but rather "what new instrument has the patience of debt, the risk tolerance of equity, and the cost-structure awareness of an infrastructure provider," because that's what the underlying dynamics actually demand.
You need an instrument that absorbs equity-like risk during the period when unit economics are unstable, transitions toward debt-like predictability as the company matures, and gives the investor a payoff profile that's calibrated to margin stabilization rather than revenue growth, because revenue growth in AI can be a misleading signal if it's being driven by usage patterns that are compounding costs faster than pricing power develops.
The instrument essentially needs to be a bet on the spread between cost deflation and usage expansion converging into stable margins, not a bet on top-line scale.
The base layer would be preferred equity with no fixed coupon or repayment obligation, but with terms that shift dynamically based on observable margin and cost-structure triggers. Think of it as a convertible preferred where the conversion terms aren't fixed at issuance but are functions of how the company's unit economics evolve over time. We're actively working on structuring something in this direction as I write this.
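A minimal sketch of what "conversion terms as a function of unit economics" could mean in practice, with hypothetical trigger thresholds: the conversion discount deepens when margins are weak or noisy (equity-like compensation for risk) and tightens as observed gross margin stabilizes (debt-like predictability). The specific thresholds and discount levels are invented for illustration and are not the terms of any actual instrument.

```python
# Hypothetical sketch of margin-triggered conversion terms.
# All thresholds and discount levels are illustrative assumptions.

def conversion_discount(gross_margin, margin_volatility):
    """
    Map a company's trailing gross margin and its volatility to a
    conversion discount. Unstable unit economics -> deeper discount
    (more equity-like); stable healthy margins -> near-par conversion
    (more debt-like predictability).
    """
    if gross_margin >= 0.60 and margin_volatility <= 0.05:
        return 0.05   # margins stabilized: terms converge toward par
    if gross_margin >= 0.40:
        return 0.15   # improving but not yet settled
    return 0.30       # unit economics still unstable

print(conversion_discount(0.65, 0.03))  # stable, SaaS-like margins
print(conversion_discount(0.25, 0.20))  # heavy, volatile inference burn
```

The point of the sketch is that the investor's payoff is indexed to margin convergence, not revenue growth, which is exactly the signal the preceding sections argue revenue can't provide.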
There are real implementation questions here that I haven't fully resolved, though: how do you get inference providers to write compute collars at meaningful scale when they're also navigating their own cost uncertainty? How do you make the margin-based triggers robust enough that neither party can game them through accounting choices? And the legal complexity of an instrument with this many embedded options and contingent conversion paths is nontrivial, which means transaction costs could be high enough to make it impractical for smaller investments. The structure probably only works above a certain check size, maybe $10-15M minimum, where the complexity is justified by the capital at risk.
Not great.
The other open question is whether LPs would allocate to a fund structured to deploy this kind of instrument, because it doesn't fit neatly into any existing asset class bucket. It's not quite venture, it's not credit, and it's not quite structured products in the traditional sense. You'd probably need to raise a dedicated vehicle with a thesis-specific LP base, which is itself a fundraising challenge.