Hacker News — archagon's comments

Compilers are an abstraction. AI coding is not an abstraction by any reasonable definition.

You're only thinking that because we're mostly still at the imperative, REPL stage.

We're telling them what to do in a loop. Instead we should be declaring what we want to be true.


You’re describing a hypothetical that doesn’t exist. Even if we assume it will exist someday we can’t reasonably compare it to what exists today.

It exists today, please message me if you’d like to try it

I mean, no, why would it be? There is so, so much to talk about in programming other than AI. Meanwhile, the current HN front page feels like 90% LLM spam: the complete antithesis of what I used to come here for.

I personally can’t wait for no-ai communities to proliferate.


Even taking your estimate as an upper bound, it would be asinine for the community here to censor AI-targeted discussions in the way I think you'd like to. The same goes for a programming community that censors discussions about LLM programming.

You are basically asking for a brain drain in a field that—like it or not—is going to be crucial in the future in spite of its obvious warts and poor implementation in the present. If that's what you want, be my guest and encourage it; but who's authorized to unilaterally make that decision in a given forum?

In the present case, the moderators for r/programming are. But they're making a mistake: instead of thinking about how to discuss the technology effectively and steering the community in that direction, they're marginalizing the technology that's redefining the practice simply because people talk about it too much.

But that's a full-time job. Which is why I think HN may turn out alright in the long run or a similar community will replace it if it fails to temper the change in the industry.

What this decision signals to me is that r/programming has been inert for some time. I'm sure plenty of programmers, irrespective of their position on AI, will take the community's rejoicing in its resignation to the technology's influence as their cue to finally exit.


You're well on the path to AI-fueled psychosis if you genuinely believe this.

I genuinely believe this. Even if you're inventing a new algorithm it is better to describe the algorithm in English and have AI do the implementation.

At least it's more productive than AI Derangement Syndrome.

So many insecure AI boosters in the comments slandering and mocking the author. And yet the upvotes clearly indicate that the sentiment in the article resonates widely with the community.

Well, there’s not much of a point leaving a comment saying “yes, this, exactly this,” so I’ll leave one here on behalf of my fellow lurkers.

The more AI gets shoved down my throat, the less I’m inclined to use it for anything, and the more I’m inspired to write my own writing, make my own art, and create my own code — with great creative joy and burning anger. Enjoy your 1000x productivity gains and your inevitable burnout as you downskill to a glorified inference loop.


This user keeps creating new accounts, getting downvoted, and replacing the content of their comments with a period. Why???

The most charitable explanation is that they are concerned about their own privacy or identifiability, but ultimately it's a Dick Move™ to other participants.

It's the kind of late-edit thing that spurs me to include quotes in replies.


Traditionally, large corporations have taken very conservative legal stances with regard to integrating e.g. A/GPL code, even when there's almost no risk.

If my license explicitly says "any LLM output trained on this code is legally tainted," I feel like BigAICorp would be foolish to ignore it. Maybe I couldn't sue them today, but are they confident this will remain the case 5, 10, 20 years from now? Everywhere in the world?


GitHub has posted that it will now train on everyone's data (even private repositories) unless you opt out (until it changes its mind on that). Anthropic has been training on your data on certain tiers already. Meta bittorrented books to train its models.

Surely if your license says "LLM output trained on this code is legally tainted", it is going to dissuade them.


No, it won’t dissuade them. But when we finally get the chance to legally beat the shit out of these companies, I want to reserve my place in line.

Alternatively, they can learn to trust me on this and simply exclude/evict my code from the training corpus.


> I've been looking for a copy-left "source available" license that allows me to distribute code openly but has a clause that says "if you would like to use these sources to train an LLM, please contact me and we'll work something out". I haven't yet found that.

Personally, I want a viral (GPL-style) license that explicitly prohibits use of code for LLM training/tuning purposes — with the asterisk that while current law might view LLM training as fair use, this may not be the case forever, and blatant disregard of the terms of the license should make it easier for me to sue offenders in the future.

Alternatively, this could be expressed as: the output of any LLM trained on this code must retain this license.


Hah! Is your cat bed ergonomically adjusted?

From my vantage point, it looks like they’re getting on unicycles as clown music starts to play. And they’re the ones yelling at me.

Empathy hijacking. If the chatbots framed their responses as “beep boop, I’m a robot, here’s an estimated answer to your query” then we likely wouldn’t have this problem.

I’ve noticed out of the box that current Claude is far less inclined to do that empathizing stuff than current ChatGPT.

Chat wants to be a weird mix of buddy, toady, and Buzzfeed editor (emoji, “one weird thing”).

Claude, while not perfect, is more "work colleague," and far preferable.

