OpenAI is Adding Ads. What This Means for AI Safety and Assurance.

It was a hot and muggy day in the low-lying city of New Orleans in August 1988. People packed together under banners of red, white, and blue and felt excitement in the air that made the humidity feel slightly tolerable. The show was on. In a time before social media and live Tweeting, many felt they needed to be there. To live it, to experience it. This was the Republican National Convention, and the presumptive nominee was about to speak.

A lanky Mainer-turned-Texan named George H.W. Bush walked to the lectern on a stage adorned with all the trappings of American national pride, no expense spared. The man who was soon to be named the Republican nominee for president began a speech that held an immortal line.

When he arrived at the subject of taxes, Bush referenced his opponent Michael Dukakis, who had said that raising taxes would be a third or last resort. Bush cracked that when a politician talks like that, it is “one resort he’ll be checking into.” From there, he talked about how Congress would ask him to raise taxes and how he would say no again and again. To punctuate the point, he delivered a line that would live on beyond his presidency:

“Read my lips. No new taxes.”

Thirty-six years later, an average-looking Midwesterner with a tech fortune behind him made a similar remark. In October 2024, Sam Altman said that the use of ads on his ChatGPT platform would be a “last resort for our business model.” By January 2026, OpenAI had announced that ads would begin appearing on the platform by February.

After taxes were in fact raised, the Bush quote became a meme before there were memes. Altman lives in an era where his every word appears on social media and is archived forever (including as training data for his own platform). While many like to point to the hypocrisy of a public and somewhat controversial figure, there are more important lessons to learn:

  1. The delta between OpenAI’s $20B ARR and its $1.7T data center commitments puts the company in real danger.
  2. The introduction of ads to the business model makes regular user testing an inescapable requirement.

How Sam Altman and OpenAI handle this transition remains to be seen, but changes are coming to an industry that already moves at a fever pitch, and consumers are watching.

Ad Influence and Testing

We all accept as inescapable reality that traditional search engines harvest our data and serve us ads for products or services based on our searches. What you search for is rarely private (depending on your search engine), and that data does not belong to you. Social media platforms took this a step further by selling advertisers targeted placement on their feeds. Many have noticed that watching a video on YouTube or searching for something on Google can cause a related ad to appear in your Facebook feed. This is not an accident, and it constructs a reality for you. The targeted-ad model means that you see what company algorithms want you to see based on how they analyze your data. Your digital landscape is determined not by you but by how you are profiled. What you see in searches, social media, and streaming services is all curated, and it may or may not be curated in a way you want.

We all know and accept this (more or less).

The integration of ads into ChatGPT results is another matter entirely. While the company has promised that ads will not influence ChatGPT’s output, it also said that the use of ads would be a last resort. As George Sr. would say, “that’s one resort he’ll be checking into.” Ads are most effective when they are hyper-targeted to something the company already knows you want and when they are in front of you as often as possible. The social media ad experience illustrates this perfectly: platforms provide your search, like, and scroll history to advertisers, who then tailor ads as closely to you as possible. Now imagine that scaled to generative AI.

The way users interact with large language models (LLMs) varies widely, from romantic partnerships to research to creative writing. What you type into your favorite search engine or what you like on social media pales in comparison to the data goldmine that is your ChatGPT log. In these logs are your very own words about things you are interested in, want to buy, or need. The temptation to integrate ads into LLM outputs is already extreme, and once the seal is broken on displaying ads in the LLM window, it is a short journey to ads inside the outputs themselves.

We already live in an AI moment where guardrails break just from regular use. Where toxic sycophancy leads to everything from embarrassment to death. Where users cannot see when or how risk accumulates in their ongoing conversation, and where hallucinations are accepted as part of the experience. If we then let marketing companies harvest these data and exert even a passing influence on the outputs of LLMs, the AI industry will suffer from an even worse lack of trust than it already does.

Ad Revenue and Safety

The second issue with Sam Altman’s “read my lips” moment is that he is effectively communicating that a trillion-dollar technology industry, whose companies occupy seven of the top spots in the S&P 500, has not figured out its revenue model. With a delta between $20 billion in ARR and $1.7 trillion in data center commitments, OpenAI understandably needs to do something. Most projections have OpenAI not achieving profitability until 2030 at the earliest, four years from now. Given the amount of investment that has poured into OpenAI this decade, it is small wonder the company is moving to ad revenue. The question we should be asking is not whether this will annoy customers into leaving but whether it will cause a broader loss of market share because the models themselves suffer.

Today, users interact directly with the models with nothing in between and no user training required. This approach has led to well-documented cases of financial, physical, reputational, legal, and societal harm. With this announcement, users will interact with the model through an ad layer whose consequences are unknown. The addition of an ad layer necessitates the addition of an assurance layer to provide the user protection that is required and currently absent from AI rollouts. While this sounds like a barrier to innovation, consider that 95% of AI projects fail: there is already a barrier to innovation, and the lack of an assurance layer will make that gap worse. When we add the potential for already uncontrolled model outputs to be influenced by ad revenue, it is clear that the user is the last priority. And while OpenAI has said that ads won’t influence model output, we have heard this story before. If you see Sam Altman giving his next speech from the Louisiana Superdome, beware.

If ad revenue begins to close the gap between ARR and debt, it will be hard to stop. The ad revenue model is decades old, which shows that AI companies are still struggling to fund their grand ambitions. The echo chamber created by curated ads and suggestions based on your history and preferences is well documented, and that is only for ordinary search. Using LLM inputs and outputs to generate ads is a different journey altogether, with very different consequences.