I don't follow Anthropic closely enough to know what claims its CEO has made, but it is factual that Altman is a pathological liar. You can observe this for yourself by reading and listening to the things he says and then comparing them to reality. We have years of evidence to look back on. The chasm between Altman's reality and everyone else's is so large and so well-known that it was one of the chief factors cited by the board when he was fired.
(I would then argue that he was re-hired specifically because others involved with OpenAI understood that it is literally his job to lie and that OpenAI would not be where it is today as a corporate behemoth rather than a research non-profit without a world-class liar marketing it, but that is merely conjecture.)
I mean... kinda everything about Mythos, for example? Anthropic has a good product, but they also pretty consistently say stuff that's stupid-ass shit if you're being generous, and blatant lies if you aren't.
> WITHOUT RAG
> "I don't have reliable information about a colony called Ares Base. As of my training cutoff, no such Mars colony has been established..."
Oh, we must have lived in a parallel universe then, if this is a "without RAG" textbook example.
They clearly do include "some random student": the data can be shared with others from the eligible research group, who are almost always university students with zero clue about itsec.
I'm curious – in which context? I've worked on NIH-funded grants in academic medical centers, throughout the research lifecycle, and I've seen both how stringently data management plans are vetted and how annual IRB certification drills the basics into even the oldest tech-phobic investigators.
That being said, I may be as pessimistic as you are: I don't think people right now grasp how current deidentification standards may no longer be enough, and how easy, automated deanonymization changes everything. Unfortunately, cuts to federal science agencies mean that I doubt any well-informed guidance will come soon.
More seriously, in a multi-agent setup the per-token cost matters less: a bit of Claude, a bit of Codex, a bit of Gemini-CLI, ... No single model carries the full bill, and having three different training sets catches more "green tests, wrong code" than any single xhigh pass would. Even at 10x per token, one well-placed Opus in the reviewer seat beats one full Opus session on everything.
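Roughly, the math behind that claim (with made-up numbers: the "reviewer handles ~10% of the tokens" split is my assumption, not a measured figure):

```python
# Illustrative cost split for a multi-agent setup (all numbers assumed).
cheap_price = 1.0     # relative per-token cost of the cheaper models
opus_price = 10.0     # the "10x per token" premium model
total_tokens = 1_000_000

# Everything on the premium model:
all_opus = total_tokens * opus_price

# Cheap models draft; the premium model only reviews (~10% of tokens, assumed):
mixed = total_tokens * 0.9 * cheap_price + total_tokens * 0.1 * opus_price

print(mixed / all_opus)  # 0.19 -> roughly 5x cheaper than all-Opus
```

The premium pass lands where errors are most expensive, while the bulk tokens stay cheap.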
I suspect a lot of people are like me. They got into this at the $20/month level individually to check things out. I'm not pushing against any limits, so I haven't moved up, but the moment I bump into one, I'll pull the trigger by default. Until then, I'm the sleeping dog, and you should let me lie.
Well, Anthropic decided to kick me. Now, I'm investing the time to figure out how to use the "open" and "Chinese" models assuming that Anthropic is about to screw me. Once I switch, Anthropic is going to have to demonstrate significant improvements over what I'm now using to get me to even consider them again.
I know of 4 companies that are already starting to stomp down on the AI whales running up "$1000 per day". There was the AI spend of the entire rest of the company, and then there were about a half-dozen people whose individual usage dwarfed it.
So, we've established a hard upper ceiling for what AI can extract per user at roughly $100K and more realistically at $10K per year. Basically, if using the AI costs the same as a human salary, it's going to get pushback. I mean, the whole point was to get rid of those pesky human salaries, after all.
So, there are about 2 million-ish software jobs in the US? It's more than 1 million but a far cry from 10 million. That pencils out to $20 billion per year in the US, total. Which means that even if an AI company literally won every US software programmer, it would be worth at most $200 billion in a buyout (10x revenue).
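Spelling out that back-of-the-envelope math, using the rough figures assumed above (not real market data):

```python
# Back-of-the-envelope ceiling for "AI captures all US software work".
# All inputs are the comment's assumptions, not measured figures.
us_software_jobs = 2_000_000        # rough US software-job count
spend_per_user_per_year = 10_000    # "realistic" ceiling before salary-level pushback
revenue_multiple = 10               # generous buyout multiple

annual_revenue = us_software_jobs * spend_per_user_per_year
max_valuation = annual_revenue * revenue_multiple

print(f"annual revenue: ${annual_revenue / 1e9:.0f}B")  # $20B
print(f"max valuation:  ${max_valuation / 1e9:.0f}B")   # $200B
```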
Now how much investment have the AI companies taken? Yeah, roughly that. And investors are going to want quite a bit more than that back.
Even if they had zero delivery costs, the AI companies are cooked long term. The moment your number bumps into "All the X in the US/World", you've got a problem.
Short term? Greater fool theory applies. And there appear to be a lot of them.
And all this is before we start getting into people exploring the open models. Most people were like me; we started on something like Claude and just stayed put because it was straightforward. Now that we've been kicked, we'll start looking at the other options.
There is an export button on Jira. https://youtu.be/-wGRKzYmA7o?t=92 was what I used. For the workspace docs there is also an export button that can export all the documentation for the project (the export is in HTML). I then used a simple script built with an LLM to convert all of it into Markdown.
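The conversion step can be pretty naive. Here's a minimal sketch of that kind of script using only Python's stdlib (hypothetical: the actual script wasn't shared, and a real one would probably lean on a library like html2text instead):

```python
from html.parser import HTMLParser

class HtmlExportToMarkdown(HTMLParser):
    """Naive HTML -> Markdown converter for an HTML doc export (sketch only)."""
    def __init__(self):
        super().__init__()
        self.out = []
        self._prefix = ""

    def handle_starttag(self, tag, attrs):
        if tag in ("h1", "h2", "h3"):
            self._prefix = "#" * int(tag[1]) + " "   # heading level from tag name
        elif tag == "li":
            self._prefix = "- "                       # bullet item

    def handle_endtag(self, tag):
        if tag in ("h1", "h2", "h3", "p", "li"):
            self.out.append("")                       # blank line between blocks

    def handle_data(self, data):
        text = data.strip()
        if text:
            self.out.append(self._prefix + text)
            self._prefix = ""

def convert(html: str) -> str:
    parser = HtmlExportToMarkdown()
    parser.feed(html)
    return "\n".join(parser.out).strip() + "\n"
```

Run over each exported file, write the result to a `.md` next to it, and you're done; tables, links, and nested lists would need more handling than this sketch does.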
I know of a company that's stuck on the datacenter edition, because they aren't allowed by some customers to store their data in the cloud. I can't imagine how much they must pay for that.
Until they finish evaluating competitors, and eventually migrate to .... something, they are completely stuck. Jira is at the heart of all of their workflows and they cannot and will not move to cloud. This was an Atlassian partner, but they got screwed over on that part as well.