Are You Ready to be Red Pilled About AI Models (hint: they own everything)
Sarah Friar, CFO of OpenAI, said the following in her blog post: “As these systems move from novelty to habit, usage becomes deeper and more persistent.” It almost sounds like they are getting us addicted.
She continues: “As intelligence moves into scientific research, drug discovery, energy systems, and financial modeling, new economic models will emerge. Licensing, IP-based agreements, and outcome-based pricing will share in the value created. That is how the internet evolved. Intelligence will follow the same path.” That sounds a lot like they intend to consume your prompts and data, and then take a “share in the value created” from it. Any corporate attorney who reads this would start screaming obscenities and throwing things, immediately followed by a call to IT to block the OpenAI IP address.

Here’s a current-day scenario that will blow your mind. Imagine you’re a property developer scouting the optimal location for a strip mall in Houston, TX. You input proprietary data into an AI model: site-specific prices, financial projections, development cost estimates, local demographics, traffic patterns, and competitive analyses. The AI processes this, connects the dots, and delivers a comprehensive recommendation for the ideal location based on your proprietary data and the insight embodied in the prompt. Now, you’re no fool: you opted out of sharing your personal data with the model…didn’t you? Unfortunately for you, the AI company reserves the right to train their model on its response! But that response encapsulates all of your data, expertise, and insights, your 25 years in property development. Along comes “Brand New Property Guys, Inc.” with no experience in property development, but they know AI. They prompt the AI model that trained on your response, and it gives them a functionally identical plan. They jump into action and purchase the ideal location. Fortunately, they’re nice guys, so they offer to sell it to you at a 25% premium.
When you prompt AI, you provide your data, experience-driven insights, plans, designs, everything. Its response encapsulates all of this and more. The terms of use for these models give you the ability to opt out of training on your prompts and data, but the ability to opt out of training on their responses is limited or unavailable.
I expect it to get worse, and here’s why. AI companies are spending massive amounts to build ever smarter models as they chase Artificial General Intelligence (AGI). Content, insight, and experience, what you might call your value-add or intellectual property, make them smarter. And somehow they need to earn a return on that money. As the OpenAI CFO said, they will be looking to do value-based pricing, and we provide the value. They learn from the global we, and then want a piece of what you earn using it. I believe that explains this trend: OpenAI and Meta’s Llama started as open source projects…until they went closed to maximize ROI.
Now that you’ve swallowed the AI red pill and I’ve shown you how deep the rabbit hole goes, you’re frantic to replace this AI addiction…I’ve got the answer: open source AI models. There are open source models you can run inside your firewall, assuming you can afford it. Qwen and DeepSeek are open source, but they’re from China, so I would trust them about as much as I trust gas station sushi. When Meta went closed source with their Llama model, NVIDIA carried it forward under the Nemotron brand. They’ve adopted a maximum open source ethos: making the code, weights, and much of the training content open source. There are other open source models doing pioneering work, like Kimi K 2.5 (also Chinese) with its agent swarm approach to parallelizing work, a complement to DeepSeek’s Mixture of Experts (MoE). This is the way.
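If you’re curious what “running it inside your firewall” actually looks like, here’s a minimal sketch using the Hugging Face transformers library. The model ID is a hypothetical placeholder, not a recommendation; swap in whatever open-weights model (Nemotron, Phi, etc.) you’ve downloaded onto your own hardware.

```python
# Minimal, illustrative sketch: run an open-weights model entirely on your
# own hardware with the Hugging Face transformers library. The model ID below
# is a hypothetical placeholder; substitute whatever open model you have
# pulled down behind your firewall (Nemotron, Phi, etc.).
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "your-org/your-open-weights-model"  # hypothetical placeholder

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

# Both the proprietary prompt and the model's response stay on machines you
# control; there is no third party in a position to train on either one.
prompt = "Rank these three candidate strip-mall sites given the attached cost projections."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=300)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The point isn’t the specific library, it’s that the prompt, the response, and anything the model learns from them never leave your building.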
An open model running inside your firewall is a great solution if you’re a big company with the people and hardware. You can run it internally, protect your IP, and get regular updates for free! But what about the rest of us broke bros? AI is coming to your phone. Soon, open source small language models (SLMs) like Microsoft’s Phi-4, which are distilled from LLMs (trained on the larger models’ outputs) and almost as good, will be standard on your mobile phone. Built-in inference chips will make them fast and energy efficient. They will route advanced requests to public models, but only as needed. You own and control the data…subject to the iOS or Android terms of service, of course.
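To make the “route to public models only as needed” idea concrete, here’s a hedged sketch of how on-device routing might work. The function names, confidence score, and threshold are my own illustrative assumptions, not any vendor’s actual API.

```python
# Hedged sketch of on-device routing: try the local SLM first, escalate to a
# public model only when the small model is not confident. Every name and
# number here is an illustrative assumption, not a real vendor API.
from dataclasses import dataclass

@dataclass
class Answer:
    text: str
    confidence: float  # 0.0 to 1.0, self-reported by the local model

def local_slm(prompt: str) -> Answer:
    # Stand-in for an on-device small language model. We fake a confidence
    # score: short, routine prompts are "easy" for the SLM.
    confidence = 0.9 if len(prompt) < 200 else 0.4
    return Answer(text=f"[local SLM answer to: {prompt[:40]}]", confidence=confidence)

def cloud_llm(prompt: str) -> str:
    # Stand-in for a public large-model API call, the only path where the
    # prompt ever leaves the device.
    return f"[cloud LLM answer to: {prompt[:40]}]"

def route(prompt: str, threshold: float = 0.8) -> str:
    answer = local_slm(prompt)
    if answer.confidence >= threshold:
        return answer.text    # handled entirely on the phone
    return cloud_llm(prompt)  # escalated only as needed

if __name__ == "__main__":
    print(route("What's on my calendar tomorrow?"))                              # stays local
    print(route("Draft a full market study for a Houston strip mall site. " * 10))  # escalated
```

The design choice that matters is the default: the local model answers unless it can’t, so the escalation to a public model (and whatever its terms of use allow) becomes the exception rather than the rule.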
The large public AI models will get better by training on your content and insights. And they may start splitting the revenue from their (our) insights. In the meantime, there will be a rush to open source models that are under your control. This trend will be led by large companies, but your phone will soon follow. Technology and capitalism save the day.
I’ve referenced 1999’s The Matrix in this post. Do you remember that the bad guys were called “Agents”? Spooky. But if you want to read about good agents, here’s an article similar to this one on agentic AI security issues…