the wrapper isn't the revolution
When people say they're "using AI," what they usually mean is they're typing prompts into ChatGPT. Maybe they're talking to Claude. Maybe they've set up a custom knowledge store inside someone else's application and feel pretty good about it.
Here's the rub: those applications are wrappers around the revolution. They are not the revolution itself.
The revolutionary technology is the large language model—the engine underneath. And the skills you need to leverage that engine are not the same skills you need to navigate someone else's UI. Not even close.
kleenex and tissues
Anthropic and OpenAI are winning so hard right now because they've become the face and brand of AI. People conflate Kleenex with tissues. The brand is so prevalent that they never see past it to the actual product it represents.
It's like saying Chrome is the internet. Or Netscape is the internet. Or Dreamweaver is HTML. These are tools that give you access to the underlying engine—but they are not the engine.
And this matters because when you confuse the wrapper for the technology, you limit yourself to what the wrapper allows you to do. You become a user of someone else's interface instead of a builder on top of the most powerful technology shift of our generation.
Your task is not to become a research scientist. Your opportunity is not to learn how to tweak hyperparameters or set up reinforcement learning pipelines. That path is too long, too laborious, and frankly—it's a little too late for most of us.
The real leverage is understanding how to build your own wrappers. How to plug into these models in the right places via APIs. How to orchestrate the pieces yourself.
from lego blocks to fluids
Here's what most people miss about what these models actually are.
LLMs are not chat boxes. They are non-deterministic tools that process data based on its meaning—not based on a strict structural protocol like JSON or HTTP or any hard-coded format. This is the paradigm shift. We've moved from building with rigid blocks to working with fluids.
Think about that for a second. Every technology we've built software on top of has been deterministic. You send a request, you get a predictable response. You define a schema, and the data conforms to it. The building blocks were Lego—precise, interlocking, structural.
Now you have something that behaves more like water. It flows. It adapts. It interprets. Your job is to build the right container—the right piping—for that fluid to flow through. The right prompts, the right context, the right constraints.
And yes, you're still going to need Lego blocks. You still need the best practices of software engineering. APIs, databases, authentication, deployment—none of that goes away. But the medium has fundamentally changed. You're no longer just assembling blocks. You're shaping flows.
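Here's what "shaping flows" can look like in practice: the model call itself is non-deterministic, so you surround it with deterministic piping, a prompt template on the way in, validation and retries on the way out. A minimal sketch in Python, where `call_model` is a hypothetical stand-in for any real LLM API (stubbed here so the shape of the container is clear):

```python
import json

def call_model(prompt: str) -> str:
    # Hypothetical stand-in for a real LLM API call.
    # Real models are non-deterministic; this stub fakes a plausible reply.
    return '{"sentiment": "positive", "confidence": 0.9}'

def build_prompt(review: str) -> str:
    # The pipe going in: a template that constrains what flows out.
    return (
        'Classify the sentiment of this review. Reply only with JSON '
        f'with keys "sentiment" and "confidence":\n\n{review}'
    )

def classify(review: str, retries: int = 3) -> dict:
    # The pipe coming out: validate the fluid against a rigid container,
    # and retry when the model's output doesn't fit.
    for _ in range(retries):
        raw = call_model(build_prompt(review))
        try:
            result = json.loads(raw)
        except json.JSONDecodeError:
            continue  # malformed output: pour again
        if result.get("sentiment") in {"positive", "negative", "neutral"}:
            return result
    raise ValueError("model never produced valid output")

print(classify("Loved it, would buy again."))
```

The Lego is still there (the JSON schema, the retry loop); the new part is that the thing inside the loop interprets meaning instead of following a protocol.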
build the pipes
So why the rant? Because this is a call to action.
There's a window of time right now—a special, finite window—where the people who understand how these models work and how to build on top of them will create enormous value. The winners are going to be the ones who build better applications, who understand which models to use when, who optimize for cost-effectiveness and efficiency.
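"Which models to use when" can itself be a few lines of plumbing. A sketch of cost-aware routing, where the model names, capability sets, and per-token prices are all illustrative, not real catalog data:

```python
# Hypothetical model catalog: names, capabilities, and prices are
# illustrative placeholders, not any provider's real offering.
MODELS = {
    "small": {"cost_per_1k_tokens": 0.0002,
              "good_for": {"classify", "extract"}},
    "large": {"cost_per_1k_tokens": 0.0100,
              "good_for": {"classify", "extract", "reason", "write"}},
}

def pick_model(task: str) -> str:
    # Route each task to the cheapest model capable of handling it.
    capable = [name for name, m in MODELS.items() if task in m["good_for"]]
    return min(capable, key=lambda n: MODELS[n]["cost_per_1k_tokens"])

print(pick_model("classify"))  # cheapest capable model: "small"
print(pick_model("reason"))    # only the big model qualifies: "large"
```

Nothing here is AI research. It's ordinary engineering judgment, priced per thousand tokens.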
Look at Granola. It's an incredibly useful tool. It's not rocket science. All it is, really, is a notepad that detects when you're in a meeting, transcribes the audio, summarizes the conversation, and organizes everything into folders. Simple. They have some pre-filled instructions for how summaries should be generated. They let you update or change things, ask follow-up questions.
None of this was possible two years ago. All of it is possible now because of the models underneath. The application itself is straightforward—the power comes from what it's wired into.
There are going to be so many opportunities like this. Applications that are dramatically more useful than anything before, built by simply wiring up parts of a workflow to an LLM API call. Not by inventing new AI. By plumbing existing AI into the right places.
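The "plumbing existing AI into the right places" pattern is worth seeing as code. A Granola-shaped pipeline reduces to one LLM call surrounded by ordinary software; `summarize` below is a hypothetical stub standing in for that one API call, and everything else is plain deterministic glue:

```python
from datetime import date

def summarize(transcript: str) -> str:
    # Hypothetical stand-in for the single LLM API call in the pipeline:
    # "summarize this transcript using these instructions."
    return "Summary: " + transcript[:40] + "..."

def file_note(summary: str, folder: str, notes: dict) -> None:
    # Plain software engineering: organize output into folders.
    notes.setdefault(folder, []).append((str(date.today()), summary))

def meeting_pipeline(transcript: str, folder: str, notes: dict) -> None:
    # The whole product shape: transcribe -> summarize -> file.
    # Only summarize() touches a model; the rest is plumbing.
    file_note(summarize(transcript), folder, notes)

notes: dict = {}
meeting_pipeline("Q3 planning: we agreed to ship the beta in October.",
                 "planning", notes)
```

Two years ago the `summarize` step was the hard part. Now it's a function call, and the value lives in choosing where to put it.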
10x your reach, not your speed
The biggest trap I see is people who think the play is efficiency. "I can do the same amount of work in one-tenth the time because I copy and paste my job into ChatGPT."
That's thinking too small.
The real winners won't 10x their speed. They'll 10x their reach. They'll build products and tools that capture so much more of the market—because what used to require a team of ten can now be built by two people with the right model integrations.
There's just fundamentally so much more we can do now. The Pareto principle is working overtime in our favor: a fraction of the old effort now captures most of the value. The leverage is unreal.
If I told you in the 1970s how the internet would revolutionize everything—how paper documents would become irrelevant, how people would hail taxis from their phones—you wouldn't have believed me. Honestly, I wouldn't have been able to predict it either. These LLMs are that kind of hard to conceptualize. The applications will permeate everything: research, software, search, creative work, operations.
New billion-dollar companies will be built on this. Not by the people who mastered someone else's chat interface—but by the people who understood the engine and built their own machines around it.
What an incredible time to be alive.
