Hacker News: Llamamoe's comments

> I asked myself, "well what specific laws would I write to combat addictive design?".

Only allowing algorithmic feeds/recommendations on dedicated subpages to which the user has to navigate, and which are not allowed to integrate viewing the content would be an excellent start IMO.


To me it isn't about addictive design; it's about infinite scrolling jerking and straining my eyes (and thanks to that strain, it brings me back to reality and I immediately disconnect from the content, thus avoiding whatever addiction could have sucked me in).

That actually makes me think that any page containing addictive design elements should, similar to cigarette warnings, carry a blinking, Geocities-style header or footer reading "WARNING: the Ophthalmologist General and Narcologist General warn about the dangers of addictive elements on this page".


Ideally, trying to pass anything AI-generated as human-made content would be illegal, not just news, but it's a good start.


That could do more harm than good.

Like how California's law about cancer warnings is useless because it makes it look like everything is known to the state of California to cause cancer, which in turn makes people ignore and tune out the warnings because they no longer deliver any signal over the noise. This in turn harms people when they think, "How bad can tobacco be? Even my Aloe Vera plant has a warning label".

Keep it to generated news articles, and people might pay more attention to them.

Don't let the AI lobby insist on labelling anything that's touched an LLM, because if the label gets slapped on anything that's even passed through a spell-checker or saved in Notepad (somehow this is contaminated, lol), then it'll become a useless warning.


> That could do more harm than good.

The downside to having labels on AI-written political comments, stellar reviews of bad products, speeches by a politician, or supposed photos of wonderful holiday destinations in ads targeted at old people is what, exactly?

Are you really arguing that putting a label on AI-generated content could somehow do more harm than leaving it (approximately) indistinguishable from the real thing?

I'm not arguing that we need to label anything that used gen AI in any capacity, but past the point of e.g. minor edits, yeah, it should be labeled.


None of those AI written political comments will have the label added because it's unprovable, and those propaganda shops are based well outside of the necessary jurisdiction anyway. It will just be a burden on legitimate actors and a way for the government to harass legitimate media outlets that it doesn't like with expensive "AI usage investigations."


I bought a piece of wooden furniture some time ago. It came with a label saying that the state of California knows it to be a carcinogen. I live in Belgium. It was weird.


The Proposition 65 warnings apply to carcinogenic materials used on furniture surfaces which can be released into the air or accumulate in dust. None of these substances is a conditio sine qua non; there are alternatives. https://www.p65warnings.ca.gov/fact-sheets/furniture-product...

The same warnings and labels are used in the EU, for example for formaldehyde, whose use will be severely limited starting in August 2026. https://easecert.com/blogs/insights/formaldehyde-emission-li...

It may look weird, but personally I prefer a warning to being exposed to toxic substances without my knowledge.


Just an observation, but this California meme seems like the go-to talking point for the anti-AI-regulation crowd lately.


It's not even a good argument. Studies have demonstrated it reduces toxic chemicals in the body, and also deters companies from using the toxic chemicals in their products.


That's a weird comparison, hadn't heard that one yet.

I'm very much in favour of regulating (and heavily taxing) AI. But I very much dislike silly warning labels that miss the point. Owning wooden furniture is not carcinogenic. Inhaling tons of wood dust (e.g. from sanding wood in a poorly ventilated room) could be carcinogenic. But putting such warning labels on furniture is just ridiculous scaremongering.


> Like how California's bylaw about cancer warnings are useless

Californians have measurably lower concentrations of toxic chemicals in their bodies than non-Californians, so yes, very useless!


> Don't let the AI lobby insist on anything that's touched an LLM getting labelled, because if it gets slapped on anything that's even passed through a spell-checker or saved in Notepad

People have been writing articles without the help of an LLM for decades.

You don't need an LLM for grammar and spell checking, arguably an LLM is less efficient and currently worse at it anyway.

The biggest help an LLM can provide is with research, but that is only because search engines have been artificially enshittified these days. Even there the usefulness is very limited because of hallucinations, so you might be better off without one.

There is no proof that LLMs can significantly improve the workflow of a professional journalist when it comes to creating high quality content.

So no, don't believe the hype. There will still be enough journalists not using LLMs at all.


It is worse, even less than useless. In the California case, there is very little to gain by lying and not putting a sticker on items that should have one. With AI-generated content, as the models get to the point where we can't tell anymore whether something is fake, there are plenty of reasons to pass off a fake as real, and conditioning people to expect an AI warning will make them more likely to fall for content that ignores this law and doesn't label itself.


Imagine selling a product with the tagline: "Unlike Pepsi, ours doesn't cause cancer."


"Good prices, no rats! That's the Fairsley Difference™!"


What does that mean though? Photos taken using mobile camera apps are processed using AI. Many Photoshop tools now use AI.


Obviously it should not apply to anything using machine learning based algorithms in any way, just content made using generative AI, with exceptions for minor applications and/or a separate label for smaller edits.


How do we know what’s AI-generated vs. sloppy human work? Of course in some situations it is obvious (e.g., video), but text? Audio?


And of course you can even ask AI to add some "human sloppiness" as part of the prompt (spelling mistakes, run-on sentences, or whatever).


Publishing is more than just authoring. You have research, drafts, edits, source verification, voice, formatting, multiple edits for different platforms and mediums. Each one of those steps could be done by AI. It's not a single-shot process.


Where do we put the line between AI-generated and AI-assisted (aka Photoshop and other tools)?


> Ideally, trying to pass anything AI-generated as human-made content would be illegal, not just news, but it's a good start.

Does Photoshop fall under this category?


Spell check, autocomplete, grammar editing, A-B tests for bylines and photo use, related stories, viewers also read, tag generation

I guess you'd have to disclose every single item on your news site that does anything like this. Any byte that touches a stochastic process is tainted forever.


Colloquially “AI” means LLMs and generative art. If you’re trying to make an argument by absurdity and you don’t want it to fall flat, maybe keep it relevant and don’t attack the straw man you just fabricated?


None of those things are "AI" (LLMs). We had those things before, we'll have them after.


Fully agreed.


Please no. I don’t want that kind of future. It’s going to be California cancer warnings all over again.

I don’t like AI slop but this kind of legislation does nothing. Look at the low quality garbage that already exists, do we really need another step in the flow to catch if it’s AI?

You can't legislate these problems away.


Ideally, we would just ban AI content altogether.


I don't think there's any way for that to happen, and IF we could create a solid legislative framework, AI could definitely (at some point in the future) contribute more good than bad to society.


AI-written articles tend to be far more regurgitative, lower in value, and easier to ghostwrite with intent to manipulate the narrative.

Economic value or not, AI-generated content should be labeled, and trying to pass it off as human-written should be illegal, regardless of how accustomed to AI content people do or don't become.


My theory is that AI writes the way it does because it was trained on a lot of modern (organic) journalism.

So many words to say so little, just so they can put ads between every paragraph.


That describes low-quality articles in general. Have you never seen how hundreds of news sites will regurgitate the same story from one another? This was happening long before AI. High-quality AI-written articles will still be high value.


Did you go on Grokipedia at release? I still sometimes lose myself reading stuff on Wikipedia; I guarantee you that can't happen on Grok, there's so much noise between the facts that it's hard to enjoy.


Yes, I did go immediately on release. I was finally able to correct articles that had been inaccurate on Wikipedia for years.


So you noticed how poor the prose was? Really unbearable to read.


I found it fine to read and it handled controversial subjects much better than Wikipedia.


I don't care about that; that wasn't the point, and no one truly cares about that. I wanted to know whether the feeling of meandering writing that can't get to the point was only mine, or whether other people who "wiki walk" a lot (basically spend hours clicking on links and reading random pages) felt the same on Grokipedia. I didn't manage to do it because the writing was too "bad" for me (and I was once taken by a wiki walk on Wookieepedia, so my tolerance is high). I just wanted to know if it was shared. Did you wiki walk on Grokipedia, or do you just use it for "controversial subjects"?


I don't know what a wiki walk is. I don't often use Grokipedia since I can just prompt an LLM directly, which may in turn extract information from Grokipedia.


This is by far the biggest failing of capitalism. The entire world contributes to economic growth, but almost all of this value is captured by a tiny group of the rich elite.


> Such situations usually correct themselves violently.

Historically, they did because everyone's capacity for violence was equal.

What about now, when the best the average person can do is a firearm against a coordinated, organized military with armoured vehicles, advanced weaponry, drones, and, sooner than later, full access to mass surveillance?

Also, how will a revolution happen once somebody trains an ML model end-to-end to identify desire for revolution from your search and chat history and your communication with like-minded others?

Assuming the intent isn't prevented altogether through algorithmic feeds showing only content that makes it seem less attractive.


> Historically, they did because everyone's capacity for violence was equal.

This is so wildly ahistorical I'm immediately suspicious of your intentions.

The US alone offers plenty of counterexamples from the last ~50 years.


> However, these threats are outweighed by the benefits that AI can eventually bring. Medical advances, power generation, manufacturing capability. Our systems for running society have a lot of problems, economically, politically, epistemologically. These can also be improved with AI assistance.

Benefits come to those who have the means to access it, and wealth is a measure of the ability to direct and influence human effort and society.

How exactly do you propose that AI will serve the wellbeing of the worker/middle classes after they've been made obsolete by it?

Goodwill of the corporations working on them? Of their shareholders, well-known to always put welfare first and profit second? Government action increasingly directed by their lobbying?

> What we need is to embrace AI and find a way to make sure that the transition and benefits of AI are distributed instead of concentrated.

Sure. How? We've not done it with any other technological advances so far, and I don't see how shifting the power balance further away from the worker/middle class will help matters.

There's a reason why the era of techno-optimism has already faded as quickly as it's begun.


Of course, and distribution and ownership of benefits is the real issue here, but I think I’ve addressed that.

Let me be clearer: I said “companies must commit to” where the stronger phrasing is “companies are forced to by legislation”. But to begin with this might be voluntarily done by some number of companies.

Also, in this vision of society the AI companies (OpenAI, Anthropic, Google, etc.) are taxed heavily. The taxation is redistributed, and there is UBI for some fraction of the population, maybe the majority. Others still work in companies mandated to keep employees, as I outlined above.

Importantly, we as a society specifically aim to bring about these benefits of AI by using the redistributed funds in part to invest in them.

Part of this is the free market, part is planned government investment. If one fails, maybe the other succeeds. Either way, we try to spread the benefits and importantly to ensure the benefits are actually there in the first place.


If you raise the bar for being allowed to speak about a very real concern that high, nobody will be left to spread and debate the idea in the first place.


geohot's not a regular joe; he's founded multiple companies and is a leader in our community. This is like a general asking "why are you letting the enemy win?" while sitting comfortably in his study managing his cigar collection.


I still don't think it precludes him from having this opinion. Could he be doing more? Sure, but having found success in this system doesn't make his criticisms of it invalid.


I agree with OP, who to my mind hasn't said the so-called critique is invalid or that he's not allowed to have an opinion. Isn't the comment along the lines of "well, what have you actually tried to do? You have resources, standing, kairos, etc." Seems one of the more perceptive critiques on here.


I personally wish he would spend more time jailbreaking the PS5 than writing blog posts about how SWEs at big tech should quit their jobs.


I'd just like to add five things to the conversation:

1) Instead of LLMs, imagine large models trained end-to-end on ALL online content and the impact that has on public opinion and discourse. What about when everything is an algorithmic feed controlled by such a model under the control of the elite? You might be resistant (but probably aren't), but in aggregate this will be effective mind control over society.

2) Money directs human effort. Every quantum of bargaining power the worker/middle class loses by being less needed is a reduction in our ability to have a say in whom society should serve and how.

3) Don't forget regulatory capture is a thing. Not just a thing but happening as we speak. Are you still optimistic?

4) Tech is already addicting and ads are already everywhere even without technology that has a theory of mind.

5) Do not forget that humans are social creatures, power over others is not just an accidental byproduct of wealth. Once you're unnecessary for labor, what's left? Fulfilling sexual/emotional/social whims of the wealthy elite? Hunger games? Being a pet in a billionaire's human zoo city so he can brag about his contributions to humanity?


Inevitably? Maybe not, but the situation isn't gonna get better by saying "oh I'm sure the tech industry will do a 180 and stop making everything worse"

> seems to have no issue with Google hosting their email.

There's this meme where person A says "we should improve society somewhat", and B replies "yet you participate in society! curious". Very similar argument.


You enjoy individual benefits and completely disregard the fact that electronics addiction and loneliness get worse year by year. You've been able to Google anything and chat with anyone back in 2010, all we've achieved since is making the average person spend 4-5h mindlessly doomscrolling on their phone and watching YouTube instead of having meaningful social interaction.

Also, we've got an entire generation growing up on ads, algorithmic brainrot, and now ai slop.

You're also forgetting algorithmic price fixing, algorithmic pricing, the billions in R&D into making internet platforms and services more addicting and effective at siphoning out your money, etc.

