AI-first Service Businesses

Opportunities for new AI-first software applications

Louis Coppey
Point Nine Land

--

Since ChatGPT and LLMs emerged, a lot has been written about opportunities to build new software applications in an AI-first world. A lot has also been written about the defensibility of these new players, considering, on the one hand, how easy it has become to build an AI proof of concept with the newly available AI APIs, and, on the other hand, the competitive advantages that existing, well-established SaaS players have.

I wrote about both topics in 2017 before the LLM/genAI wave (see “Winning strategies for Applied AI companies” and “Routes to Defensibility for your AI startup”). Most of what I wrote back then more or less still applies. Access to unique data and talent, traditional SaaS barriers to entry, and focusing on niches not well addressed by larger SaaS incumbents can lead to strong competitive advantages.

That being said, we are only starting to understand the new market opportunities opened by new AI capabilities.

After reading a few posts (especially Sarah Tavel’s “Sell work, not software”) and brainstorming on new opportunities at Point Nine, one opportunity has become clearer. In contrast to 2017, foundation models might perform well enough to build full-stack, AI-first service businesses that will ultimately look like software businesses.

What does that mean? By constraining the problem or focusing on a niche, a new company can leverage foundation models and industry-specific data to rebuild a service business from the ground up that is so automated that it becomes scalable. Its P&L will ultimately look more like that of a software business than a service business.

Why is it interesting?

1/ Value needs to be built at the application layer

As Sequoia wrote last week (AI’s $200Bn question), following Nvidia’s financial results and the recent investments in AI infrastructure like GPUs, the only way to justify such investments is to create an enormous amount of consumer or business value through end-user applications. Sequoia estimates that, at the current run rate of $50bn of GPU revenues per year, $200bn of lifetime revenue must be generated at the application layer every year. Some of this money invested in infrastructure will be badly spent and lead to no value creation. That is fine and is part of the innovation process. Existing software companies with end-user data and distribution power will capture some of the value. But some value will also be created by new startups, including some with a disruptive value proposition: “selling the work” as opposed to “selling software”.

The illustration above, from the USV blog, shows the link between the infrastructure and application layers over different innovation cycles, before the recent LLM wave.

2/ Selling the work isn’t the same as selling AI-powered SaaS features

It’s increasingly clear that “selling the work” is a different opportunity than adding AI features on top of an existing SaaS application. Why?

First, selling a service and selling software are different value propositions. Intercom is selling AI features as an improvement of its Customer Support software. An AI-powered customer service business will sell CS services and guarantee a certain level of service.

Second, SaaS incumbents are focused on adding AI features that will ultimately benefit their whole user base. It is not in their short-term business interest to focus on a particular segment or use case to automate it fully. Some SaaS incumbents may also be stuck with a per-seat business model that they will struggle to move away from if they start to sell AI agents that replace end users. For both reasons, going after this opportunity might lead to a typical “Innovator’s dilemma”: an existing player would need to cannibalize its existing business to innovate.

3/ The operational playbook for building an AI-first service business differs from a typical SaaS playbook

Building on the previous point, a new player entering the market as a full stack AI-first service business can constrain the problem and hire service agents to:
i) Understand end users’ problems very well — iterating with employees will be much faster than iterating with customers,
ii) Have “humans-in-the-loop” to get to the right level of quality of service,
iii) Collect data that is highly specific to the industry and use case.

You can do that if you serve only a handful of customers on a well-defined problem. It’s much harder (not to say impossible) if you need to serve a user base with varying use cases and sizes. Let’s imagine again that you want to sell agents that automate Customer Support requests. If you focus only on automating returns for small e-commerce businesses and need to hire a few CS agents to handle the not-yet-automated requests, you’ll have a significantly easier time than if you’re Freshdesk and try to automate the CS requests of multiple industries, problems, and use cases at the same time.

4/ Service markets are many times bigger than software markets

Constraining the problem doesn’t necessarily mean you’ll build a small business. By automating jobs, a full-stack service business can capture HR budgets, not software budgets, and get much larger ACVs than SaaS incumbents. As an SMB, you’re spending 10–100x more on an accountant than on accounting software (usually 10–50€/month for Quickbooks/Xero but 1000€ to an accounting firm). This also means that full-stack AI-first service businesses can go after much smaller markets but still build huge businesses.
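The spend comparison above can be made concrete with a quick back-of-the-envelope calculation. The monthly figures come from the text; the annualization and the division are just illustration:

```python
# Back-of-the-envelope comparison of SMB spend on accounting software
# vs. an accounting firm, using the monthly figures cited in the text.

software_monthly = (10, 50)    # €/month for Quickbooks/Xero (range)
accountant_monthly = 1000      # €/month for an accounting firm

software_annual = tuple(12 * m for m in software_monthly)   # €/year
accountant_annual = 12 * accountant_monthly                 # €/year

# Multiple of service spend over software spend, at both ends of the range
multiple = tuple(accountant_annual / s for s in software_annual)
print(multiple)  # (100.0, 20.0) -> within the 10-100x range cited above
```

In other words, even at the expensive end of the software range, the accounting firm captures roughly 20x the budget, which is why "selling the work" addresses a much larger pool of spend than selling the tool.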

5/ These businesses will collect different datasets (and might build strong data moats)

We don’t have enough empirical evidence to prove it yet, but it might be that the data (and the software) that needs to be collected (and built) to get the automation level close to 100% in a particular niche is very different from what is needed to get to 80% on a broader set of use cases. Only by trying to reach this level of automation will a company collect highly specific data on edge cases.

To that extent, SaaS incumbents might be in a good position to automate 80% of the requests but struggle to get to full automation. Performance curves of AI models are asymptotic: it is much harder to get from 80% to 100% than it is to get from 0 to 80%. The above provides an answer to the “data defensibility question”. Only by constraining a problem to a particular niche and focusing on getting as close as possible to 100% automation will a software company manage to create a fully scalable service, while collecting specific data that might make it very defensible.

The best illustration of this point is Autonomous Vehicles. We saw awesome demos of AVs in constrained environments like racing circuits relatively quickly, yet it’s taking much longer than expected to get AVs on the streets. Collecting data on edge cases is much harder, but this is (most likely) what is needed to have AVs on the streets.

6/ Distribution dislocation

One interesting point that Peter Fenton makes here is that we need to look at disruption not only through a technological lens but, more importantly, through a distribution lens. What enabled the innovation of mobile apps is not mobile technology per se but the combination of mobile phones (the technology) and the App Store (the distribution mechanism that made it possible for anyone to discover and download apps).

“Selling the work” might be a fundamentally new way to sell software. You don’t sell software anymore, you sell an (automated) service, the service level of which you guarantee. Service markets (like call centers, accounting, or legal services) exist already. The disruption comes from the way you’re delivering the service, not from the value proposition per se. To that extent, there’s no PMF risk anymore, there’s a “scalability risk”. To make it even more concrete, if you offer accounting services and manage to get some customers, you haven’t proven much (especially if you’re offering them at a lower price). The real test is still in front of you: can you deliver the service in a scalable way?

Let’s get into the risks/unknowns.

What’s harder/unknown?

1/ Getting to the right automation level without compromising the quality of the service

In 2018, Zetta introduced the concept of Minimum Algorithmic Performance (here). They define it as the minimum performance of the model that’s required to justify end-user adoption. In this context, these AI-first service businesses will have to guarantee and constantly monitor their service levels to avoid disappointing their customers. This will likely mean finding a balance between i) constraining the problem enough to get to the right level of automation/scalability, ii) hiring “humans-in-the-loop” to cover edge cases where models don’t perform well, and, iii) still finding a good enough value proposition and a large enough market.

2/ Balancing scalability vs. growth

Between paying for “humans-in-the-loop” and the GPU costs of training models and running inference, the gross margin of these businesses won’t look like that of software businesses from the get-go. These businesses will also always be tempted to grow faster, but might do so at the expense of scalability. Imagine running one of these service businesses: it becomes easy to sell (you sell a service that everybody wants, potentially cheaper) but much harder to ensure that what you do is truly scalable (automated by AI vs. performed by humans). In an industry (the VC industry) where money follows growth, we believe we’ll need to find new ways of sequencing these businesses rather than simply favoring growth. Pennylane, which is now building accounting software in France, initially started as an accounting service with 20 accountants on payroll before realizing that it would never be truly scalable. It ended up divesting its accounting service entirely to focus only on software.
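The tension between growth and scalability can be sketched as a toy gross margin model. All numbers here are assumptions for illustration, not data from the post: the point is simply that margin is a blend of cheap AI-handled requests and expensive human-handled ones, so the automation rate drives whether the P&L looks like a service business or a software business.

```python
# Toy model (all cost figures assumed): blended gross margin of an
# AI-first service business as a function of its automation rate.
# Human-handled requests cost far more than AI-handled ones.

def gross_margin(automation_rate, revenue_per_request=1.0,
                 human_cost=0.80, ai_cost=0.05):
    """Blended gross margin for a given share of fully automated requests."""
    cost = automation_rate * ai_cost + (1 - automation_rate) * human_cost
    return (revenue_per_request - cost) / revenue_per_request

for rate in (0.0, 0.5, 0.8, 0.95):
    print(f"{rate:.0%} automated -> {gross_margin(rate):.1%} gross margin")
```

Under these assumed costs, margins only approach software-like levels near full automation, which is why selling fast while the work is still mostly human inflates growth without proving scalability.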

3/ Building and articulating value propositions beyond lower costs

AI systems will likely allow for a reduction in the cost of delivering services. To that extent, cost-conscious industries (like CS) might be better target markets than markets where trust in the service provider and quality of service are key success factors. What about markets like consulting or legal services, where brand and reputation matter a lot? It’s not clear to us that lower cost leveraging AI will be a winning value proposition in every industry.

Some examples

Building AI-first service businesses isn’t a completely new idea. You could argue that Autonomous Vehicle services like Cruise are AI-first riding services. Before ChatGPT emerged, Point72 also raised $600M to buy high-turnover, low-margin businesses with the aim of automating them with AI. More recently, General Catalyst announced that it was planning to buy a hospital to test new AI technologies.

On the earlier stage side, below is a short list of ideas/companies we’ve seen more recently:

This post is just a summary of thoughts we’ve collected speaking to companies and brainstorming on key success factors for this new generation of AI companies. We’re excited to keep on learning more and help this playbook mature.

Looking back, we got the timing somewhat wrong in 2017. It might very well be that the LLM wave is still not the right one to build fully automated, full-stack, AI-first service businesses. But our hunch is that there’s a chance that this is it.

If you think so too, reach out to us!
