Beyond the Model Wars: Three Strategic Paths for AI Products

Let’s be real: we’re living through the wildest time in AI history. OpenAI, Google, Microsoft, and Anthropic are basically having a billion-dollar flex contest, each trying to build the most powerful AI model. And honestly? That’s awesome. These companies are creating incredible technology that’s changing everything.

But here’s the thing — not all of us have a spare $10 billion lying around to train our own GPT-5. The good news? You absolutely don’t need it. There are three really clever ways to build AI products that people actually want without trying to out-engineer Sam Altman.

 

Path 1: Ride the Wave (Building on Top of Big Models)

This is probably the most straightforward approach — take the amazing models that already exist and build something specific and useful on top of them.

How It Works

Companies like Granola figured this out perfectly. They took existing speech-to-text and language models and built the best possible meeting notes app. NotebookLM did the same thing with document processing. Cursor did it for code, building the best AI coding assistant. Chris Pedregal, Granola's co-founder, summed it up:

“You get all the benefits from those models, and if you can apply it to the right use case, it’s really powerful.”

It’s like getting to use a Formula 1 engine without having to build the entire car from scratch. The foundation model companies get more users, you get to focus on making something people love using. Win-win.
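To make the pattern concrete, here's a minimal sketch of the "ride the wave" approach in Python. The model call is stubbed out — `call_foundation_model` is a hypothetical placeholder for whichever provider's API you'd actually use — so the example runs offline. The point is where the value lives: in the domain-specific prompt and post-processing wrapped around the model, not in the model itself.

```python
# Sketch of a "ride the wave" product: a thin, opinionated layer
# over a general-purpose model. All names here are illustrative.

def call_foundation_model(prompt: str) -> str:
    """Placeholder for a real provider API call (OpenAI, Anthropic, etc.).
    Stubbed with a canned reply so the example runs offline."""
    return "- Decided to ship v2 on Friday\n- Alice owns the rollout plan"

def summarize_meeting(transcript: str) -> dict:
    """The product layer: domain-specific prompting plus structured output.
    This wrapper, not the model, is where a Granola-style app differentiates."""
    prompt = (
        "Extract decisions and action items from this meeting transcript.\n"
        f"Transcript:\n{transcript}"
    )
    raw = call_foundation_model(prompt)
    # Post-process the model's free text into something a UI can render.
    bullets = [line.lstrip("- ").strip() for line in raw.splitlines() if line.strip()]
    return {"action_items": bullets, "source_chars": len(transcript)}

notes = summarize_meeting("Alice: let's ship v2 Friday. Bob: I'll write the plan.")
print(notes["action_items"])
```

Notice that when the provider ships a smarter model, only the stub changes — the product layer your users love stays put.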

What Actually Works

Here’s the secret sauce: frequency and stakes matter. A lot. As Pedregal notes:

“Low-frequency use cases that are non-critical are going to be eaten up by the general agents. But if it’s something that really matters — professional tooling where your performance really matters — then bespoke tools that are optimized for that use case are going to be way better.”

If someone uses your product once a month for something that doesn’t really matter? ChatGPT will probably eat your lunch eventually. But if people use it multiple times a day for something important to their job? That’s where you can win big.

Granola nailed this — people have meetings constantly, and missing important details actually hurts. So even though anyone could theoretically build a meeting notes app, Granola’s version is so much better at the specific thing people need that it doesn’t matter.

The “escalating effect” is pretty cool too. When GPT gets smarter, your product automatically gets smarter. You’re basically getting free R&D from some of the smartest people on the planet.

The Catch

The barrier to entry is pretty low. If you can use ChatGPT’s API, so can everyone else. So you win or lose based on:

  • How well you understand what users actually need

  • How good your product feels to use

  • How fast you can build and improve

  • Whether people actually stick around

 

Path 2: Go Small and Specialized (The Jujitsu Move)

While everyone’s obsessing over bigger and bigger models, there’s this whole opportunity to build smaller, specialized models that are actually better at specific things.

The Plot Twist

NVIDIA recently published research showing that small language models might actually be the future for a lot of real-world applications. Turns out they’re:

  • 10–30 times cheaper to run

  • Way faster

  • Just as good (or better) at specific tasks

  • Much easier to fine-tune and customize

Think of it like this: you probably don’t need a massive pickup truck to commute to work. A smaller, more efficient car might actually be perfect for what you’re trying to do.

Why This Works

Instead of trying to compete with ChatGPT on general intelligence, you’re building something that’s absolutely incredible at one specific thing. It’s like being a specialist doctor instead of a general practitioner — you might know less about everything else, but you’re the best in the world at your thing.

The companies winning here aren’t trying to replace foundation models. They’re working alongside them. Small models handle the routine, specialized stuff efficiently, while the big models tackle the complex reasoning when needed.
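That "small handles routine, big handles complex" split is a cascade pattern, and it's simple to sketch. Here's a hedged illustration in Python — both "models" are stubs (`small_model` and `large_model` are invented names, not real APIs) so the example runs; the routing logic is the point.

```python
# Sketch of a cascade: a cheap specialized model handles routine
# requests, escalating to a large model only when it is unsure.
# Both "models" are offline stubs; names are illustrative.

ROUTINE_INTENTS = {"reset password", "check order status", "update email"}

def small_model(query: str) -> tuple[str, float]:
    """Hypothetical specialized model: fast and cheap, confident only
    on the narrow tasks it was fine-tuned for."""
    if query.lower() in ROUTINE_INTENTS:
        return f"handled '{query}' via standard flow", 0.95
    return "unsure", 0.20

def large_model(query: str) -> str:
    """Hypothetical frontier-model call for open-ended reasoning."""
    return f"large-model answer for '{query}'"

def answer(query: str, threshold: float = 0.8) -> str:
    reply, confidence = small_model(query)
    if confidence >= threshold:
        return reply               # cheap path: most of the traffic
    return large_model(query)      # expensive path: the hard cases

print(answer("reset password"))
print(answer("why did Q3 revenue dip in EMEA?"))
```

Because most real-world traffic is routine, the expensive model only gets invoked for the long tail — which is exactly where the 10–30x cost savings come from.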

Models Working Together

There’s another layer to this that’s particularly interesting: multiple small models can work together like a specialized team. Think of it like human society — instead of having one person try to be an expert at everything, you have specialists in different areas collaborating to solve complex problems.

For example, you might have one small model that’s exceptional at understanding user intent, another that’s great at data retrieval, and a third that excels at formatting responses. Together, they can outperform a single large model on specific tasks while being more cost-effective and easier to update individually.
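The intent → retrieval → formatting example above can be sketched as a tiny pipeline. Everything here is a stand-in (the three specialist functions and the `FAQ` lookup are invented for illustration, not a real system), but it shows the structural idea: each stage is narrow, swappable, and independently retrainable.

```python
# Sketch of three specialist "models" collaborating as a pipeline:
# intent -> retrieval -> formatting. Each stage is a stub standing
# in for a small fine-tuned model; all names are illustrative.

FAQ = {"refund": "Refunds are processed within 5 business days."}

def intent_model(query: str) -> str:
    """Specialist #1: classify what the user wants."""
    return "refund" if "refund" in query.lower() else "other"

def retrieval_model(intent: str) -> str:
    """Specialist #2: fetch the relevant fact for that intent."""
    return FAQ.get(intent, "No matching article found.")

def formatting_model(fact: str) -> str:
    """Specialist #3: wrap the fact in a user-facing reply."""
    return f"Thanks for reaching out! {fact}"

def pipeline(query: str) -> str:
    return formatting_model(retrieval_model(intent_model(query)))

print(pipeline("Where is my refund?"))
```

If the formatting stage starts sounding robotic, you retrain or replace that one stage — no need to touch intent classification or retrieval. That's the "easier to update individually" advantage in practice.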

Building Your Edge

This approach can create some real competitive advantages:

  • Your training data and methods become your secret sauce

  • It’s much harder for competitors to copy

  • You can get way better performance for your specific use case

  • The economics work way better as you scale

The investment is real but manageable — you need some ML chops and good data, but you’re not trying to recreate GPT-4 from scratch.

 

Path 3: Feed the Beast (The Infrastructure Play)

This might be the smartest move of all. Instead of building apps that use AI, build the stuff that makes AI better.

The Handshake Story

Garrett Lord had been running Handshake — basically LinkedIn for college students — for 10 years. Then he realized something: AI companies were desperate for high-quality training data, and he had a network of 500,000 PhDs and 3 million grad students.

So Handshake launched a data labeling business in January. Four months later? $50 million in revenue, with projections of over $100 million in year one.

That’s not a typo. They built a business bigger than many “unicorns” in less than a year by figuring out what AI companies actually needed.

Why This Is Brilliant

The AI models we use today are only as good as the data they’re trained on. And it turns out, to make models really good at advanced stuff, you need actual experts — not just anyone off the street. As Garrett explains:

“The models have gotten so good that generalists are no longer needed. What they really need is experts across every area.”

So while everyone else is building apps, Handshake is providing the fuel that makes all those apps possible. They’re not competing with OpenAI — they’re helping OpenAI build better models.

The Broader Opportunity

This isn’t just about data labeling. There are tons of ways to support the AI ecosystem:

  • Specialized training data for different industries

  • Tools for testing and evaluating models

  • Infrastructure for model deployment

  • Services for fine-tuning and optimization

The cool thing is that your success directly makes AI better for everyone. You’re not just building a business — you’re contributing to the whole ecosystem.

 

Picking Your Path

Each approach has different trade-offs:

Path 1 (Building on Top) is fastest to start but potentially easiest for competitors to copy. Best if you’re great at product design and really understand a specific user problem.

Path 2 (Small Models) takes more technical work but can create stronger competitive advantages. Perfect if you have ML expertise and access to good specialized data.

Path 3 (Infrastructure) might be hardest to get into but has the strongest network effects. Ideal if you already have relevant assets or relationships.

 

Here’s the Thing

We’re not in an era where everyone needs to build their own ChatGPT. We’re in an era where smart teams can build amazing products by working cleverly with the AI ecosystem that already exists.

The big tech companies are building the roads and highways. Your job is to figure out what awesome destinations you can create that people actually want to visit.

The question isn’t whether you should try to out-build Google. The question is: which of these paths makes the most sense for what you want to create?

 

Sources and Further Reading

Research Papers: Small Language Models are the Future of Agentic AI

Interviews and Podcasts: Chris Pedregal + Sam Stephenson: Making Meetings More Effective with Granola, Inside the expert network training every frontier AI model | Garrett Lord

Key Companies Referenced:

  • Granola — AI-powered meeting notes application (granola.so)

  • Handshake — Career platform and AI training data business (joinhandshake.com)

  • NVIDIA — Graphics and AI computing company conducting SLM research

 