Hacker News | Chance-Device's comments

I think people just shouldn’t be burned alive.

I haven’t paid any attention to the mission, and there’s something about the framing of this article that I don’t like, as if it’s talking about a soap opera or reality TV or something. It just rubs me up the wrong way.

I agree. Even though I thought this mission was interesting, to me the article massively overstates everything. NASA and the crew are SO amazingly competent, the world in recent years is SO totally devoid of competency, everyone has been thirsting for the sense of AWE that we are ALL feeling (or should be feeling now, let me list the reasons!), etc.

To me, this was irritating. True competency and things that inspire real awe encapsulate “res ipsa loquitur” — they speak for themselves. Having some internet influencer try to hype me into getting awed, and implying that “we all” are feeling a certain way as she channels our collective zeitgeist is tiresome.

And personally, IMO although the mission was nice, it wasn’t groundbreaking technically or particularly awe-inspiring.

Ironically, I left feeling a tiny bit disappointed: if everyone is truly thinking this mission is the height of awesomeness or competency, we have a low-ish bar.

I bet that when the old-timers with their starched white shirts, pocket protectors, and horn-rimmed glasses who ran the '60s missions got together to watch 2026 Artemis, they privately had a good laugh about how little the state of the art has progressed.


For what it’s worth Dan, you’re probably the best moderator I’ve ever encountered, and without you HN likely wouldn’t be worth visiting. As it is it’s one of the best places for online discourse. That’s directly because of you and your efforts.

It’s not easy to be a cop, and that’s basically what you are around here, but thank you for doing it.


Just take a second to consider this: if HN, probably one of the less reactionary places on the internet and one of the most capitalist-friendly, is this angry at this point, before the mass job losses even start, what in the name of God do you think the general public is going to be like once those losses have been going on for years?

If nothing else there’s a serious self-preservation incentive for AI CEOs to sort something out that doesn’t get them lynched, because it’s not looking good.


Maybe HN is particularly upset because they feel targeted, given that overpaid tech executives have been giddily making the claim that programming jobs will disappear any minute now. What makes it even worse is that it's very obvious that said tech executives haven't programmed in over 10 years, if ever, and don't know anything about the technology they are selling. They are putting jobs at risk purely for the sake of personal enrichment.

This is probably combined with a general sense of AI fatigue. The population as a whole is getting tired of "AI slop" and companies trying to shoehorn "AI" into everything. Personally I'm also tired of every startup needing to be an AI startup. As if there was nothing else worth building or investing in. It's sucking the air out of the room.


Nobody has one. If labor stops having value the economy will stop working and society will break down far in advance of building the infrastructure necessary for the promised AI abundance.

I like the idea of being “post-scarcity” as much as the next guy, but I don’t understand how we get there. It’s a project in itself; it doesn’t just happen by magic, and nobody is actively trying to make it happen or has any logistical idea of what it involves.

We’ll also lose a huge number of jobs as soon as true AGI comes on stream, by which I mean the kind of AI that no longer acts like somebody who has read all the world’s books but can’t figure out that you always need to drive to the carwash.

We’ll lose these jobs and there will be no super abundance at that point, and not even government support.

There is the option of passing laws requiring companies to retain human employees. That to me is about the only viable stopgap measure.


It is not impossible to think that many people will just be served a UBI and won't expect much more in life. After all, if we have AI + family + housing + food (assuming government robots would take care of providing us free food in some form), I bet millions of people would be content with it.

PS: I include AI as an important one in the future because it will be a direct way to get educated and replace college, for example, without having to pay (or paying very little).


Seeing as austerity governments campaigned on reducing social benefits and achieved considerable success over the past few decades, I don't see how your solution of granting people even more social benefits will ever happen. Unless law and order is about to break down, there is no reason for the rich to leave all of that money "on the table".

It's more likely that people are just hung out to dry instead of getting UBI.

You’ve addressed a different question, which is how satisfied with life will people be post scarcity. That’s a fine conversation to have, but it’s not the one I was having. My point is: how do we get there?

It made me kind of angry when I saw Dario repeatedly claiming that AI would be taking all the programming jobs any minute now. His company supposedly is working for a better future, but he's giddily talking about something that could cause millions of people to lose their homes if it were true.

Our governments have a habit of being reactive rather than proactive. People have floated the idea of UBI, but if UBI happens, it will probably mean it's the only way to avert a crisis, and the amount that people will get might only be enough to rent a bedroom and eat processed food.

I think in the medium term, the reaction is overblown. Even though LLMs can make software engineers more productive, you still have a competitive advantage in having more software engineers. Medium to long term though, the goal is obviously to replace human jobs.

I'm not a communist, but Karl Marx understood that the labor force gets its bargaining power because they are necessary to produce value. What do people imagine happens when the human labor force becomes essentially completely replaceable? They imagine the government will be forced to take care of the population to prevent an uprising, but they forget that the police and the army can be replaced by machines too.


You can look up what tends to happen when human labor isn't needed anymore by reading about the resource curse - that one is also about not needing human labor. Only the least corrupt countries seem to be able to resist it. None of these countries have a very large population, so chances are that you don't live in one of them.

A one-bedroom and processed food sounds frickin' amazing, sign me up.

It's not surprising, Dario is an absolute ghoul. Exactly the same as Altman, peas in a pod.

We use a lot of euphemisms and have a number of myths around political violence. The fact of the matter, so far as I can see, seems to be that political violence is extremely effective, however also extremely destabilising if used at scale.

Force just works a lot of the time, assuming you can win, and often even if you can’t, as even imposing a cost on your opponent often gets you a better deal. There’s a reason we keep having wars.

Also realise that the government monopoly on force is ultimately the only reason that anybody follows laws. That following laws is good for us is beside the point - force must be threatened and used in order to maintain control.

So, force, a euphemism for violence, is ultimately the way anything gets done, and we all have an incentive to lie about this just for the sake of stability.

I don’t know if this answers your question, but it’s what comes to mind on the subject for me.


I think what you’re describing is a more general race to the bottom where everyone loses, including the AI companies.

This won’t happen because the AI companies will collude to prevent it from happening, meaning they’ll drop out of that race leaving the rest of us to claim victory.

Generous of them, really.


No, I'm not describing a race to the bottom. I'm saying that it's in Google's best interest to ensure Anthropic and OAI do not continue to operate as going concerns and generate enough cash flow to finance reinvestment - by providing a very competitive offering.

The price of tokens is one competitive instrument for them to achieve that, but not the only one - they offer a whole lot more to enterprises that OAI and Anthropic don't.

By doing so Anthropic and OAI's valuations go crashing into the ground along with future prospects of raising funding externally.


It’s not about the mechanism: responsibility is a social construct, it works the way people say that it works. If we all agree that a human can agree to bear the responsibility for AI outputs, and face any consequences resulting from those outputs, then that’s the whole shebang.

Sure we could change the law. It would be a stupid change to allow individuals, organizations, and companies to completely shield themselves from the consequences of risky behaviors (more than we already do) simply by assigning all liability to a fall guy.

What law exactly are you suggesting needs to be changed? How is this any different from what already happens right now, today?

Right now it's very easy not to infringe on copyrighted code if you write the code yourself. In the vast majority of cases, if you infringed, it's because you did something wrong that you could have prevented (in the case where you didn't do anything wrong, independent creation is an affirmative defense against copyright infringement).

That is not the case when using AI generated code. There is no way to use it without the chance of introducing infringing code.

Because of that if you tell a user they can use AI generated code, and they introduce infringing code, that was a foreseeable outcome of your action. In the case where you are the owner of a company, or the head of an organization that benefits from contributors using AI code, your company or organization could be liable.


So it's a bit as if the Linux organization told its contributors, "You can bring in infringing code, but you must agree you are liable for any infringement"?

But if a lawsuit was later brought, who would be sued? The individual author or the organization? In other words, can an organization reduce its liability if it tells its employees, "You can break the law as long as you agree you are solely responsible for such illegal actions"?

It would seem to me that the employer would be liable if they "encourage" this way of working?


It’s a foreseeable outcome that humans might introduce copyrighted code into the kernel.

I think you’re looking for problems that don’t really exist here, you seem committed to an anti AI stance where none is justified.


A human has to willingly violate the law for that to happen, though. There is no way for a human to use AI-generated code that doesn't have a chance of producing copyrighted code. That's just expected.

If you don't think this is a problem, take a look at the terms of the enterprise agreements from OpenAI and Anthropic. Companies recognize this is an issue, and so they were forced to add an indemnification clause, explicitly saying they'll pay for any damages resulting from infringement lawsuits.


> Right now it's very easy not to infringe on copyrighted code if you write the code yourself.

Humans routinely produce code similar to or identical to existing copyrighted code without direct copying.


They don’t produce enough similar code to infringe frequently. And if they did, independent creation is an affirmative defense to copyright infringement that likely doesn’t apply to LLMs, since they have the demonstrated capability to produce code directly from their training set.

You have shifted from "very easy not to infringe" to "don't infringe frequently", which concedes the original point that humans can and do produce infringing code without intent.

On independent creation: you are conflating the tool with the user. The defense applies to whether the developer had access to the copyrighted work, not whether their tools did. A developer using an LLM did not access the training set directly, they used a synthesis tool. By your logic, any developer who has read GPL code on GitHub should lose independent creation defense because they have "demonstrated capability to produce code directly from" their memory.

LLM memorization/regurgitation is a documented failure mode, not normal operation (nor typical case). Training set contamination happens, but it is rare and considered a bug. Humans also occasionally reproduce code from memory: we do not deny them independent creation defense wholesale because of that capability!

In any case, the legal question is not settled, but the argument that LLM-assisted code categorically cannot qualify for independent creation defense creates a double standard that human-written code does not face.


> You have shifted from "very easy not to infringe" to "don't infringe frequently", which concedes the original point that humans can and do produce infringing code without intent.

Practically speaking humans do not produce code that would be found in court to be infringing without intent.

It is theoretically possible, but it is not something that a reasonable person would foresee as a potential consequence.

That’s the difference.

> LLM memorization/regurgitation is a documented failure mode, not normal operation (nor typical case).

Exactly. It is a documented failure mode that you as a user have no capacity to mitigate, or even to know is happening.

Double standards are perfectly fine. LLMs are not conscious beings that deserve protection under the law.

>not settled.

What appears to likely be settled is that human authorship is required, so there’s no way that an LLM could qualify for independent creation.


And that's not an infringement. Actual copying is the infringement, not having the same code. The most likely way to have the same code is by copying, but it's not the only way.

In this case, the "fall guy" is the person who actually introduced the code in question into the codebase.

They wouldn't be some patsy that is around just to take blame, but the actual responsible party for the issue.


Imagine you're a factory owner and you need a chemical delivered from across the country, but the chemical is dangerous, and if the tanker truck drives faster than 50 miles per hour it has a 0.001% chance per mile of exploding.

You hire an independent contractor and tell him that he can drive 60 miles per hour if he wants to but if it explodes he accepts responsibility.

He does and it explodes killing 10 people. If the family of those 10 people has evidence you created the conditions to cause the explosion in order to benefit your company, you're probably going to lose in civil court.
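Taken at face value, the per-mile risk in this analogy compounds over the trip. A quick back-of-the-envelope sketch (the 3,000-mile distance is a hypothetical stand-in for "across the country"):

```python
# Cumulative chance of at least one explosion on the trip,
# assuming an independent 0.001% (1e-5) risk per mile while speeding.
p_per_mile = 0.00001      # 0.001% per mile, from the analogy
miles = 3000              # hypothetical cross-country distance

# Probability that every mile goes fine, then its complement.
p_safe = (1 - p_per_mile) ** miles
p_explosion = 1 - p_safe

print(f"Chance of an explosion over {miles} miles: {p_explosion:.2%}")
```

Even a tiny per-mile risk compounds to roughly a 3% chance of disaster over a cross-country run, which is why "you created the conditions" has teeth here: the outcome was plainly foreseeable.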

Linus benefits from the increased velocity of people using AI. He doesn't get to put all the liability on the people contributing.


Cool analogy! Which has nothing to do with the topic in hand.

That is a nonsensical analogy on multiple levels, and doesn't even support your own argument.

Nice rebuttal.

Why would I put much effort into responding to a post like yours, which makes no sense and just shows that you don't understand what you're talking about?

Why would you put any effort into it at all?

Responsibility is an objective fact, not just some arbitrary social convention. What we can agree or disagree about is where it rests, but that's a matter of inference, and an inference can be more or less correct. We might assign certain people certain responsibilities before the fact, but that's to charge them with the care of some good, not to blame them for things before they were charged with their care.

Who do they think writes Linux? The European Commission? They’re on the US tech stack whether they want to be or not, and nobody in Europe has the will or resources to pull a China and make their own alternative. More’s the pity.

Linux was created by a European. And there are many European distros. Even Canonical is European.

But that's beside the point. The point is that no company owns Linux, so you're not tied to big tech even if they are the biggest contributors to the kernel.


Moreover, for the folks in the back row...

We may see Canonical or other commercial Linux vendors come forward with a government- or enterprise-flavored solution for all this. But the important thing to keep in mind is that they're not selling Linux per se. As the GPL prohibits this, these companies sell support for their Linux distro instead. That revenue goes into improving Linux and maintaining their distro (e.g. Ubuntu). But even with all that money changing hands, they do not own Linux, the Linux kernel, or any other shred of GPL-licensed stuff.


Two of the three major commercial Linux vendors are European, the author and BDFL of the kernel is European, and a ton of contributors to many projects are European (Qt and KDE come to mind). Yes, IBM/Red Hat has a lot of influence, but they're not the only ones developing Linux.

I don’t think Sam Altman has claimed to be a tech genius, and I don’t think he needs to be one for the role he’s in, CEO and engineer are not the same thing and require different skill sets. If people want to attack him, there are probably better vectors than this one.

The real question is - is he actually a good CEO? Has he done any better for the company than someone else would have? I think that’s the real unanswered question and stands quite apart from any ethical or character critiques.


The definition of "good CEO" is too mercurial for that question to be answerable. A CEO can afford to make incredibly boneheaded decisions that others call genius, like Elizabeth Holmes or Sam Bankman-Fried did. Rushing to evaluate executive performance based on murky results or impressive transaction volume is how investors end up losing billions.

The title is indeed an obvious puff-piece, but it does reflect the general hatred towards America's professional liars. Maybe Sam has what it takes to be a great CEO, but posterity is already recording him as a dishonest sociopath and federal sycophant.


Are you basically saying that all CEOs are equally useless so it’s a meaningless question? I sympathise with the cynicism but I don’t think that’s entirely true. I do think there’s some way of evaluating performance here, even if I don’t know exactly what it is.
