> Yeah, pressing Ctrl-W accidentally is a pain sometimes ... but Ctrl-Shift-T in Firefox is a godsend.
Fun fact: despite having absolutely no menu entry for it, and I believe no matching command in the Ctrl+Shift+P palette either, VS Code supports Ctrl+Shift+T to re-open a closed tab. Discovered out of pure muscle memory.
And they hired a LinkedIn business idiot to run the new organization - so the aim is for an infinite-growth tech startup in terms of governance, despite its technical legal status as a non-profit. It shows in the language they use in the announcement, too ("improved financial viability in the long run").
OpenAI shows exactly how well that works and what that kind of governance does to a company and to its support of science and the commons.
> "This is a new form of social science. It is qualitative research at a massive scale, and we’re in the early stages of learning how to do it. Surveys and usage analysis tell us what people are doing with AI, but the open-ended interview format helps us get at why. "
Also AI-written, but I suppose that's expected. The big AI companies seem to want all their blog posts and communications to have the AI tells, so you know they didn't actually bother writing them.
I'd love to be able to actually articulate what makes AI writing read like AI writing. A few of the common tells come to mind (contrast construction, hyperbole, overuse / wrongly used em dashes, etc). The above quote doesn't have any of that, and yet it certainly feels AI. The first sentence (both what it says and where it's placed) suggests AI to me. But I couldn't quite tell you why.
Before AI this style of prose was called "thank you for coming to my TED talk", with a little bit of "LinkedIn broetry". Confident assertions and pat explanations about truths that will make you a better person upon internalization; a pop psychologist convincing you of an unintuitive and surprising new idea about how the universe works that catches you off guard but then turns your perception on its head and revolutionizes the way you see the world. Contemporary marketing speak of a particular "coolly subverting your expectations and injecting the truth straight into your veins" flavor.
It is a style that AI (intentionally?) emulates for sure, though the "regression to the mean" and general vagueness seem to be what really separates the classic TED talk/puffy blog from AI. Humans like specific examples and anecdotes; AI fails at making those.
I think the main tell is that it says basically nothing; it reads like it was written by a human paid per word. Humans prefer easy-to-read articles that don't hide the point behind such fluff, so there is no reason to do it except just to spam words.
That's essentially it. But not only that: we learned to distinguish things written by humans for humans from things written by humans (paid by the word) for SEO. LLMs tend to produce text that would be great for SEO, so it stands out as not written for humans.
Wikipedia has an excellent article about exactly this [1], in their editor information section. There's a section called "Undue emphasis on significance, legacy, and broader trends" that provides some examples:
>Words to watch: stands/serves as, is a testament/reminder, a vital/significant/crucial/pivotal/key role/moment, underscores/highlights its importance/significance, reflects broader, symbolizing its ongoing/enduring/lasting, contributing to the, setting the stage for, marking/shaping the, represents/marks a shift, key turning point, evolving landscape, focal point, indelible mark, deeply rooted, ...
Once I read this, it started sticking out to me all the time.
I like the take on "undue emphasis on significance." To me, that's such an obvious tell. That's actually an old pre-LLM tell, we just used to call it "pretension." Once we get into long lists of specific words, it feels like we're getting into rules. You can't use this or that word cuz LLMs do. That's crazy problematic. It has to be about the way the emphasis and the overuse of certain words in a single piece reflects inauthenticity. But, eff if I'm gonna stop using "significance" cuz some LLM does.
I cannot stand that I'm expected to adjust my use of em dashes because LLMs use them (incorrectly, typically). It brings up all these feelings from my younger punk / indie days, when normies would get into a band we were into, and then we were expected to not like that band anymore. Since then I've tried to abide by what I call the Farting Billionaire Principle: people shouldn't have to change their ways every time a billionaire farts.
> The big AI companies seem to want to make all their blog posts and communications have the AI tells so you know they didn't actually bother writing them
Investors want to see you use your own product; if a company doesn't consider its own product good enough to write its own announcements with, investors would worry about its future.
And AI is still a product primarily aimed at investors and not consumers.
> Or maybe construction of the physical DCs is behind schedule, so today's Blackwells are sitting around unused, waiting for power and networking tomorrow. Then they're in a bit of trouble.
Other reporting says this is very much the case. Stargate barely has some of the land cleared, but the buildings were supposed to be finished and have GPUs installed over the course of 2026.
There's also the indicator of Nvidia handing out billion-dollar deals to other companies so that they could commit to buying even more Blackwells and keep production going. The chips from those new deals don't have anywhere to go; everyone already spent their cash on the chips that shipped and are still being installed today (apparently some are even sitting in warehouses).
Really? Because what TeX did was make it possible to write "proper" books or formal texts via a computer - that was the whole point
A 16th-century formal book like this would be the gold standard to replicate if you want to make "serious" texts. And yes, in scientific literature, the "serious" text is a narrow target, far narrower than you might expect given the possible variation in a handmade artisanal work. Mostly because when everything is "custom", standardization and regular structure are the exception.
> and 99 of them send him to the battlefield by himself, saying "good luck buddy, let us know how it works out?"
>The army was going to be reduced by a factor of 100, and two tiny armies were going to face off while the majority of men of fighting age were going to sit at home and paint landscape paintings? Really?
Well, for a time Greek city-states did fight pretty much like this. Small armies of hoplites were raised outside harvest season, went out, fought almost show-battles with very few casualties, and tribute changed hands based on the results. Everyone went home for the harvest.
I believe there are even instances where a battle wasn't fought at all in favour of two appointed champions dueling (the origin of the popular fiction trope).
It didn't last, but for a time the Greek city-states had a kind of equilibrium with relatively few resources (or people) spent on war.
> Well, for a time Greek city-states did fight pretty much like this. Small armies of hoplites were raised outside harvest season, went out, fought almost show-battles with very few casualties, and tribute changed hands based on the results. Everyone went home for the harvest.
This is a view held by a small group, but it is in no way the accepted view among historians. See the link for a blog post by a military historian discussing the orthodox and heterodox schools of thought on this.
It's useful if your integration work takes some time - easy to run into with open source.
Imagine you have multiple contributors with multiple new features, and you want to do a big release with all of them. You sit down one weekend and merge in your own feature branch, then tell everyone else to do the same; but it's a hobby project, the other contributors aren't consistently available, maybe they need two weekends to integrate and test when merging their work with everyone else's, and they don't have time during the weekdays.
So, the dev branch sits there for 2-3 weeks gradually acquiring features (and people testing integration too, hopefully, with any fixes that emerge from that). But then you discover a bug in the currently live version, either from people using it or even from the integration work, and you want that fix live during the week (specific example: there's a rare but consistent CTD in a game mod, you do not want to leave that in for several weeks). Well, if you have a branch reflecting the live status you can put your hotfix there, do a release, and merge the hotfix into dev right away.
Speaking of game mods, that also gives you a situation where you have a hard dependency on another project: if it does a release in between your mod's releases, you might need to drop a compatibility hotfix ASAP, and you want a reflection of the live code where you can do that, so that you always have a branch that works with the latest version of the game. If your main branch carries multiple people's in-progress work that differs from what's actually released, you're going to get a mess.
And sure, you could do just feature branches, merging them one by one into each other and then into main, so you never have code-under-integration in a centralized place, but... why not just designate a branch as the place to do integration work?
You could also merge features one by one into the main branch, but again, imagine the mod case: if the main code needs update X for compatibility with a game update, why do that update on every feature branch and expect every contributor to do that work? Much better to merge a feature in when it's done, and if you're waiting on other features, centralize the work of keeping in step with main (and the dependency) in one place. That's especially relevant if your feature contributors are volunteers who probably wouldn't have time to keep up with changes if it takes a few weeks before they can merge their code.
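The flow described above can be sketched with plain git in a throwaway repo. Branch names ("main" for the live release, "dev" for integration) and file contents are just illustrative assumptions, not anyone's actual project:

```shell
set -e
repo=$(mktemp -d)           # throwaway repo so nothing real is touched
cd "$repo"
git init -q --initial-branch=main
git config user.email dev@example.com
git config user.name Dev

echo "v1" > app.txt
git add app.txt && git commit -qm "release 1.0"   # main reflects what's live

# Long-running integration branch: contributors merge features here
# over several weeks without touching the released code.
git checkout -qb dev
echo "feature A" >> app.txt
git commit -qam "integrate feature A"

# Meanwhile a crash is found in the live version: fix it on main
# directly and release immediately, without shipping unfinished dev work.
git checkout -q main
echo "workaround for CTD" > fix.txt
git add fix.txt && git commit -qm "hotfix: rare CTD"

# Bring the hotfix into dev right away so the next big release keeps it.
git checkout -q dev
git merge -q --no-edit main
```

After the merge, `dev` has both the in-progress feature and the hotfix, while `main` stayed releasable the whole time.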
If anything, considering this plus limited satellite lifetimes, it almost looks like a ploy to deal with the current warehouses full of GPUs, and with the questions about overbuild raised by just the currently installed GPUs (which are a fraction of the total Nvidia has promised to deliver within a year or two).
Just shoot it into space, where it's all inaccessible and will burn up within 5 years, forcing a continuous replacement scheme and steady contracts with Nvidia and the like to deliver the next generation at the exact same scale, forever.