“AI is a bubble.” Something I hear all the time. Yes, it is. But that’s also a good thing. The internet was a bubble, and the bubble bursting helped separate the pets.coms of the world from the amazon.coms. Now, as a small business owner, you’re faced with an onslaught of marketing about AI-powered this and disruptive that, along with a never-ending list of warnings about falling behind. Those warnings aren’t all hype… but it can be hard to tell the hype apart from the things that will actually help you succeed.
You don’t have time to wait for the bubble to burst to sort the good from the bad.
You’ve got things to do now.
I’ve been thinking about this problem a lot.
I was on a call last week with the head of an organization evaluating one of my products. He asked good questions. Not combative ones: the kind that tell you someone is actually trying to understand what they’re buying instead of getting swept up in a pitch.
Then he asked the one that cuts through everything: “What does this do that I can’t already do with ChatGPT?”
I love that question. Most buyers don’t realize they’re allowed to ask it. (As an aside, whoever you’re evaluating had better love that question too. If they don’t, consider it a major red flag.)
The honest answer
A lot of AI products on the market right now don’t do much more than ChatGPT. That’s just true. If you’re disciplined enough, you can simulate a surprising number of “AI-powered” tools with a general-purpose model, some carefully curated files, and the patience to manage your own context window.
You can get pretty far that way. I told him so.
But there’s a progression that happens when you try, and understanding it is valuable not only for him but for you too. Let’s map it against your own experience, and you can tell me if I’m hitting the mark…
Stage one: you figure out the prompts, the file structure, the workflow. You get good results. It takes some trial and error, but you land on something that works.
Stage two: you try to make it repeatable. You want to produce the same quality of output every time, not just when you happen to remember the right sequence of steps. This is harder than it sounds because the tool is intentionally open-ended. There are no rails. The burden of consistency falls entirely on you.
Stage three: you try to teach someone else to do it. Now you’re not just managing your own discipline: you’re trying to transfer a process to another person inside a tool that was designed to be maximally flexible. This is where most people hit a wall.
That’s the gap where opinionated products start to make sense.
What “opinionated” actually means
A well-designed AI product doesn’t exist because ChatGPT isn’t smart enough. It exists because the product encodes a process. I tell people all the time that AI helps accelerate implementation. It should not be used as a replacement for taste and judgment. A well-built application injects human taste and judgment in a rigorous way; it creates a system, as opposed to a raw brain that happens to be able to make tool calls.
It puts the workflow on rails so the user doesn’t have to reconstruct it from scratch every time. Instead of remembering which prompts to chain together, which context to include, how to structure the output, and how to evaluate whether the result is actually good: the system handles that orchestration.
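To make “putting the workflow on rails” concrete, here’s a minimal sketch in Python. Everything in it is hypothetical and for illustration only: the model call is a stub, and the step names (`summarize_step`, `draft_step`, `check_step`) are invented, not any real product’s API. The point is that the sequence, the context, and the output check live in the system, not in the user’s memory:

```python
# A hypothetical "opinionated" workflow: the orchestration -- which steps run,
# in what order, with what context, and how the output is checked -- is
# encoded once, so every user gets the same process every time.

def run_model(prompt: str) -> str:
    """Stand-in for a real model API call (stubbed out for illustration)."""
    return f"[model output for: {prompt[:40]}...]"

def summarize_step(source_text: str) -> str:
    # Step 1: a fixed prompt template the user never has to remember.
    return run_model(f"Summarize the key points of:\n{source_text}")

def draft_step(summary: str, audience: str) -> str:
    # Step 2: the right context (the summary) is threaded in automatically.
    return run_model(f"Write a draft for {audience} based on:\n{summary}")

def check_step(draft: str) -> str:
    # Step 3: a sanity check the user would otherwise have to do by hand.
    if not draft:
        raise ValueError("empty draft -- rerun the workflow")
    return draft

def run_workflow(source_text: str, audience: str) -> str:
    """The rails: same steps, in the same order, every single time."""
    return check_step(draft_step(summarize_step(source_text), audience))
```

With raw ChatGPT, each of those three steps is something you have to remember to do, in order, with the right context pasted in. Here, the discipline is in the code.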
The goal isn’t to make the AI smarter. AI is already incredibly smart. The goal is to make the outcome repeatable. Good products give that “brain” a better harness (tools it can call that are specific to the tasks you need completed) and a clean path to your specific goals. Think of it as buying a specialized tool to work on a car. Sure, you could get that door panel off with a flathead screwdriver, but if you’ve ever actually tried, then after two hours of tinkering you likely see the value of spending the $20 at AutoZone for the tool that makes it trivial.
Same idea, but digital.
But that only matters if the process itself is worth encoding. Or if the tool actually saves time and stress.
Often, you can absolutely make do or even thrive without it. And that is not just good. It is great.
The market right now
I’ve been building AI products for a while now, and I’ve watched the market fill up with three categories of offerings.
The first category is products that exist because current models have limitations. They patch over gaps in memory, reasoning, retrieval, or UX. These can work for a while, but every new model release shrinks the gap they were built around. When the capability catches up, the product disappears.
The second category is thin wrappers. The product is basically a UI on top of an API call, maybe with a prompt template baked in. If you could achieve the same result in ChatGPT with ten minutes of setup, the product has no reason to exist long-term. The platform will absorb it.
The third category is the one that tends to last: products that encode real process, accumulate useful context over time, and get better when the models get better instead of becoming irrelevant. The AI is a component inside a larger system. The system is what you’re actually buying.
Sam Altman of OpenAI has argued at length that this is the type of product that will last. I think most people assumed he was being arrogant when he said the models will just keep getting better, but in reality, I think this was him at his most earnest. More vendors would do well to heed that advice.
The problem is that from the outside, these all look the same. Clean landing page, “AI-powered” in the headline, a demo that looks impressive for about sixty seconds. You can’t tell what’s underneath until you ask.
How to tell the difference
I told my potential customer something I don’t think enough vendors say:
“If you can do what you need with ChatGPT, you should just use ChatGPT.”
I meant it. I specifically told him not to buy my product if he felt the free tools covered his use case. That wasn’t a reverse-psychology sales tactic, and there wasn’t a drop of defensiveness in it. It came from a genuine desire to help this prospect make the best buying choice for him.
Because look: if all you need is RAG over your documents, NotebookLM is free and legitimately good at that. If you need a research report, Deep Research from OpenAI is hard to beat for a general-purpose tool. If you need help writing emails or summarizing meetings, the native tools inside the platforms you already pay for are probably more than fine. We’re living in a world that would have been considered magic just three years ago. We’ve barely had time to adjust to what’s possible with the baseline tools at our disposal, let alone fully integrate something new.
Don’t buy a specialized product unless it gives you something that’s genuinely hard to reproduce on your own. Be skeptical. As someone who stands to make money off your lack of skepticism, I’ll say it again.
Be skeptical.
Here’s a simple test. Before you buy any AI product, ask yourself one question:
“What would it take for me to build this workflow inside ChatGPT?”
If the answer is “not much,” save your money.
If the answer involves real time, domain knowledge, and the kind of discipline you’d have to maintain every single time, then the product might be worth paying for.
The test I use on my own work
I apply this same test to everything I build. If a new model release would make one of my products unnecessary rather than better, I’ve built the wrong thing. That idea from Sam Altman I mentioned earlier lives rent-free in my mind.
And that’s the dividing line most people miss. The products that survive aren’t the ones that compensate for what AI can’t do yet. They’re the ones that organize what AI can do into something repeatable and connected to real work.
The AI getting smarter should be good news for the product, not an existential threat.
One more thing
There’s a version of this conversation that’s uncomfortable but worth having. The AI market right now has a lot of people selling confidence they haven’t earned. They jumped on the train because the technology is exciting and the demand is real, but they didn’t bring expertise or a system that produces measurable results. They brought a landing page.
Some of them are well-intentioned. Some of them are not. Either way, when a buyer gets burned by one of these engagements, they often become skeptical of the entire category. That makes things harder for everyone doing serious work.
The best defense against that isn’t calling people out. It’s giving buyers better questions to ask. If more people walked into AI vendor conversations with “What does this do that I can’t already do with ChatGPT?” as their opening question, the market would sort itself out a lot faster.
That’s really all I wanted to say. Ask better questions. Expect honest answers. And if a vendor can’t explain what their product does beyond “it uses AI,” keep walking.