> Maybe even more importantly: Anthropic wants to control what people do with AI—they block companies they don't like from using their coding product (including us), they want to write the rules themselves for what people can and can't use AI for, and now they also want to tell other companies what their business models can be.
OpenAI's terms of service (https://openai.com/policies/row-terms-of-use/):

> What you cannot do. You may not use our Services for any illegal, harmful, or abusive activity. For example, you may not:
> 1. Use our Services in a way that infringes, misappropriates or violates anyone’s rights.
> 2. Modify, copy, lease, sell or distribute any of our Services.
> 3. Attempt to or assist anyone to reverse engineer, decompile or discover the source code or underlying components of our Services, including our models, algorithms, or systems (except to the extent this restriction is prohibited by applicable law).
> 4. Automatically or programmatically extract data or Output (defined below).
> 5. Represent that Output was human-generated when it was not.
> 6. Interfere with or disrupt our Services, including circumvent any rate limits or restrictions or bypass any protective measures or safety mitigations we put on our Services.
> 7. Use Output to develop models that compete with OpenAI.
Learning: the Marketing and Legal teams are not in sync with each other.
Sam's goal is to create viral products. This has always been his goal. While his interest in and passion for AI are almost certainly genuine, he famously hedged his bets before committing to OpenAI, right up to the point of being ambiguously fired from his previous job. ChatGPT was one of the greatest viral products of all time. His subsequent attempts to create virality haven't been working in any real, durable way.
Enter Claude Code. It went viral, not in the awe-inspiring world-changing way that ChatGPT did, but in a way that allowed it to decisively overwhelm OpenAI's advantage in some very valuable niches. It wasn't born out of a search for a viral product, but rather out of an intellectual obsession with safe, steerable AI systems -- an obsession that could've just as easily ended up producing no products at all. It's the kind of obsession that can't be replicated if the only motive is virality.
So yeah, this is going to bother him a lot. He's defending ads while his competitor's product is having its "iPhone moment." You know this is part of the raw emotional driver because he ends his defense of ads by talking about Codex app downloads and saying he "believe[s] Codex is going to win," implying that there is significant doubt he feels the need to correct.
Oh dear. The ad doesn’t directly reference OpenAI at all, and for those of us out of the loop on all of this, it wouldn’t have read as such. I had no idea OpenAI was considering this.
I think Sam would have been far better off letting this lie. I’m not at all sure what the ads coming to ChatGPT will look like, but responding so personally and aggressively to what appears to be silliness feels like they hit a bit close to home?
Yeah, I just don't think the average person is going to even get what they are talking about, and they may not really care about ads in exchange for a free-to-use ChatGPT. Most ChatGPT users are just using it as a search and summary tool. And they were seeing ads before via Google, so what difference does it make?
While the Anthropic ads might be a hit amongst nerds, they won't compel normies to use Claude over ChatGPT, and in fact they may just be damaging to AI adoption overall, as they present bad-taste examples of how corruptible it all is.
Counterpoint being that Slack, for example, for all its faults, does not have ads in its chats.
If Anthropic is positioned as "thing for professionals to do professional work" then I think you just avoid this issue entirely. Fee for service. OpenAI trying to be the thing everyone is using won't work in that model, though.
Great idea! A tweet providing thoughtful commentary on a competitor's ads will surely set the record straight; people always respect CEOs who are willing to publicly talk about touchy topics. Would you like me to draft one for you?
I have my LLMs tweaked so that they rarely if ever blindly agree with me. I guess that might not be how a CEO operates. But I really do prefer OPFOR LLMs I can argue with to help me sort my brain out.
Sam is an entrepreneur trying to play an industrialist's game. He will lose. OpenAI will be sold to Microsoft on the profit side, and control of the open-source board will effectively be awarded to Elon. OpenAI had their chance to oust him, but his C-suite judo was too strong.
Of course the first version of the ads will be very clearly separated, just like how Google search started out.
While I found the golden dating app ad the funniest and creepiest, the credit loan ad was the most viewed for a reason: it's the most profitable ad of the scenarios, and it will come in a few years.
I write all this as a mostly happy OpenAI subscriber (my only wish is to still be able to keep the legacy o3 model as my sticky default for my Pro subscription everywhere).
I don’t care for Sam Altman, but so far I have been impressed with Codex, and at least for our use right now it's performing much better than Opus 4.5. Of course this will probably change again, but the larger context window with Codex is a godsend for us.
It feels like working with a professional. It just keeps churning until the work is done, and it's actually pretty damn compact with token usage. Definitely the lowest output tokens relative to value among the frontier models.
I honestly think OpenAI needs a new leader; they are so lost. They have no vision, and it seems like they are just always chasing and not leading anymore.
Anthropic has a clear vision: they want to build the best agentic models so they can build products for humans to use, like Claude Cowork and Claude Code.
Google has a clear vision: they want to build the smartest model possible to give to consumers to use and to implement into their products.
OpenAI doesn't have any vision. GPT 5.2 alone has five different versions that are all essentially the same. They are slow, take forever to do anything, and aren't smart in any area. They released Sora and then just forgot about it, like everyone did. They released Atlas and forgot about it, like everyone did. They released GPT Image, which honestly is probably the main reason they still have the users they have.
They honestly might be in trouble. Oracle is fucking themselves over with the amount of debt they're taking on to build out infrastructure for them, and Microsoft and Nvidia are backing away from putting more eggs in the basket.
It's actually no surprise they are trying to find every way to make money, because if they don't, they might be the first to fall.
Anthropic might not be winning the consumers, but it's almost like you don't want to win consumers yet, not until you figure out how to make enough money to support them, as neither Anthropic nor OpenAI has a business producing $100B in net income every year.