geremiiah's comments

IMHO, social media itself is not the issue. The issue is rather, why are teenagers glued to their screens? The answer is because they aren't doing something else that is social and physical. So if you ban their access to TikTok or whatever, they are still stuck at home, bored and glued to their screen. Other online entertainment will capture their focus. Before you know it you'll end up trying to ban the whole internet.

I think many people on HN grew up glued to an internet that wasn't trying to intellectually molest them for capitalist and political gains.

No, at the time I was growing up, the music industry was being blamed for corrupting the youth via explicit lyrics and music videos. And there was a whole big discussion around making movie ratings more and more detailed. It turned out that the movie-rating hue and cry came mostly from one conservatively funded think tank.

This social media ban looks very reminiscent and I think it is all about creating a surveillance state, controlling the population to only see images and video in a centrally approved way.


The only part of Meta I care about is the PyTorch team. Are those people also being affected by this?

A bunch of them already left.

People who are lucky in life never question their faith, because why would they? That's why Christians are happier. I grew up Christian, but I was not lucky in life. Christianity did fuck all to help me. Actually, I find more peace in my lack of faith now. But everyone is different.

TPUs are systolic arrays, right? So does that mean that Google is using a heterogeneous cluster comprising both GPUs and TPUs, for workloads that don't map well, or at all, onto TPUs?
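For readers who haven't met the term: a systolic array computes a matrix multiply by streaming operands through a fixed grid of multiply-accumulate (MAC) cells. A minimal Python simulation of the idea (illustrative only; this is not the actual TPU microarchitecture, just the textbook output-stationary scheme):

```python
# Output-stationary systolic array sketch: each accumulator acc[i][j] stays
# fixed in its MAC cell while rows of A stream in from the left and columns
# of B stream in from the top, skewed by one cycle per row/column.

def systolic_matmul(A, B):
    m, k = len(A), len(A[0])
    n = len(B[0])
    acc = [[0] * n for _ in range(m)]  # one accumulator per MAC cell
    # Run enough cycles for the last skewed operand to reach the last cell.
    for t in range(m + n + k - 2):
        for i in range(m):
            for j in range(n):
                # At cycle t, cell (i, j) sees A[i][s] and B[s][j] where
                # s = t - i - j, because inputs are skewed by i and j cycles.
                s = t - i - j
                if 0 <= s < k:
                    acc[i][j] += A[i][s] * B[s][j]
    return acc

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(systolic_matmul(A, B))  # matches the ordinary matmul: [[19, 22], [43, 50]]
```

Dense matmuls map perfectly onto this rigid grid of MACs; workloads with irregular control flow or sparse access patterns are the ones that tend not to map well, which is what the question above is getting at.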

I can't speak to what every team at Google does, but there are machines with Nvidia GPUs in Borg. However Google charges orgs internally for cpu/memory/gpu/tpu usage and TPUs are *way* more efficient in terms of FLOPS/$ than Nvidia GPUs, so there is a *huge* incentive for teams to use TPUs if they can, especially for teams operating large products.

What sort of workloads are you thinking of?

> This tech is 100% aligned with the goals of the 0.001% that own and control it

If AI is smart enough to replace the 99.999% it's also smart enough to replace the 0.001%.


That fact doesn’t prevent the 0.001% from continuing to control it.

Point is, if an AGI becomes powerful and capable enough of replacing 99.999% of humanity, the likes of Sam Altman and Elon Musk won't be able to control it.

An electrician with access to a circuit breaker will be able to control it.

This is called the AI Stop Button Problem. Computerphile has a great video on this (featuring Robert Miles) which explains why this is not a reliable solution to AI getting out of control. When the AI is smarter than all of humanity combined, the only real solution is for the AI to not get out of control in the first place.

If people are going to produce unrealistic sci-fi videos they should at least try to make them entertaining and not just lame.

The AI would have redundancy, both in terms of its power source and also because it can literally replicate itself and have multiple instances running all over the world. Also, an army of drones that you'd have to dodge just to go anywhere near any critical infrastructure.

It's hilarious that you think you know what "AI" would have when it doesn't even exist.

It's only a little bit comforting that computers still live in meatspace when you consider something like an AI-controlled Metal Gear roaming around though.

2001: A Space Odyssey presents a different scenario.

It does exactly present that scenario, as Dave Bowman gains access to the circuit breaker (well, to the memory banks).

Most people don't believe AGI is reachable, or that any of these tech CEOs even want to reach it. So we are discussing the more likely reality.

The 0.001% has a controlling stake in AI, so they're in the clear.

The 99.999% needs to assert their controlling stake in the technology. I don't know what this looks like. Maybe ubiquitous unionizing, coupled with a fully public and openly-trained LLM.


There are already several fully open source LLMs. You can start participating in those projects today.

https://www.bentoml.com/blog/navigating-the-world-of-open-so...


Are most of these distilled from other models? I'm talking about publicly owned and fully open foundation models, which will require significant government-level investment into GPU farms and training.

No chance of it happening in the US due to lobbying pressure, but maybe in a more civilized country... (unless a distributed SETI@home-type architecture becomes viable)


I don't understand your comment. The USA is the most civilized country in the world. And some of the LLMs linked above are fully open source.

The monkeys claimed ownership of the world's resources according to monkey law. I guess we are now subservient to the monkeys.

Yes, but that isn't the question as long as those wealthy people control most of the system: companies aren't going to lose executives, they'll shed the jobs which they don't respect. Someone wealthy does not need to accept a bad deal to avoid sleeping on the street. It's everyone who isn't insulated who has to actually compete for work.

Besides the argument above, that an AGI powerful enough to replace 99.999% of humanity won't be controllable, there's also the economic argument: corporations, executives, all that means nothing if 99.999% are unemployed. Our economy is based on consumerism which will obviously cease to happen in a scenario where 99.999% of humanity is unemployed. The economic system would be so upended that ownership and such notions would become immaterial.

If we meet in the post-apocalyptic wasteland, but I have an android slave with a gun and you have nothing but a rusty spoon, it's going to be pretty clear who the android belongs to, and who it serves. The android also makes it likely that I will have a bunch of other nice stuff that you don't. Food and water, for instance.

This scenario is not meant to be taken literally.


I would worry that it won’t go quickly to 99.999% but instead would grind down different groups of people slowly enough that they’d be able to entrench their power: being a cop will be a growth job, people would be given state-sanctioned automation-resistant work like picking crops as a condition of receiving social benefits, the Republicans would more seriously dust off the previously-fringe proposals to restrict voting to property owners again, etc.

Setting people against each other is a time honored way for a small elite to control a large population.


I have given this serious thought over the years. I even have an unfinished novel exactly around that topic.

Energy. The key is controlling their access to energy.


IMO this is a common trap. Certainly there's no boundary of cognitive capability that separates capitalist elites from those below them in terms of an AI's ability to outperform them.

But that doesn't really matter when we talk about "replacement" because these people don't "do" they simply "own".

They're not concerned about being outpaced at some skill they perform in exchange for money...they just need the productive output of their capital invested in servers/models/etc to go up.


It's not about capability. It's about who "holds the key". And sure, many currently with deep pockets and pushing for AI will miscalculate and get pushed by the wayside. I think many people who are not in the 0.001% are miscalculating right now on HN.

What's important is that ultimately some small subset owns this, and it doesn't matter how smart they are, only that they own the thing and that it cannot be employed against them (because they hold the key).


No because the technology will be used against you.

LLMs are dangerous in other ways (LLM psychosis and false confidence has probably already caused negligent deaths). However, I don't think we are close to a terminator scenario.

At the same time, if we ever do create an AGI, and eventually an ASI, I think it would only be a matter of time before the machines take over entirely, and they would probably be the ones which will continue the legacy of our species. Is that bad? Idk.


>Is that bad? Idk.

There's no such thing as bad. It is necessary, though.


If you need an LLM to understand a paper you should not be a reviewer for said paper.


LLMs were used to produce the review, not understand the paper.


It's more like OpenXLA or the PyTorch compiler: it codegens Kokkos C++ kernels from MLIR-defined input programs, which can, for example, be produced from PyTorch. Kokkos is common in scientific computing workloads, so emitting readable kernels is a feature in itself. Beyond that, there's a lot of engineering that can go into such a compiler to specifically optimize sparse workloads.
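To make the "readable kernels" point concrete, here's a toy sketch of template-based codegen in the Kokkos style. Everything here is hypothetical illustration (the function name, the template, the elementwise-only scope); the real project goes through an MLIR pipeline rather than string templates:

```python
# Toy "readable kernel" codegen: emit a Kokkos-style C++ parallel_for from a
# tiny elementwise-op description. Hypothetical sketch, not the project's API.

KERNEL_TEMPLATE = """\
Kokkos::parallel_for("{name}", n, KOKKOS_LAMBDA(const int i) {{
  out(i) = {expr};
}});"""

def emit_elementwise_kernel(name, expr):
    # expr is a C++ expression over device views, e.g. "a(i) + b(i)".
    return KERNEL_TEMPLATE.format(name=name, expr=expr)

print(emit_elementwise_kernel("axpy", "2.0 * a(i) + b(i)"))
```

The appeal for scientific-computing users is that the generated source looks like the Kokkos code they would have written by hand, so it can be read, audited, and tuned further, unlike opaque compiler IR or fused PTX.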

What I am missing is a comparison with JAX/OpenXLA and PyTorch with torch.compile().

Also instead of rebuilding a whole compiler framework they could have contributed to Torch Inductor or OpenXLA, unless they had some design decisions that were incompatible. But it's quite common for academic projects to try to reinvent the wheel. It's also not necessarily a bad thing. It's a pedagogical exercise.


I think the exact opposite: if someone was able to build a framework that doesn't overly constrain the problem, and doesn't require weeks of screwing around with the build, integration of half-baked components, and insane amounts of boilerplate, that would be a fantastic contribution in and of itself, even if it didn't advance the state of tensor compilation in any other way.


>AI man camps

Anyone who studied Engineering or Computer science already knows what this is like, lol.


Interesting topic, but why am I reading an LLM generated summary?


> "If you’ve been following my recent posts on Metaduck, you know I spend my days building infrastructure for AI agents and wrangling LLMs into production"

Because LLM users use LLMs for everything.

