Hacker News

I think what we should focus on is the volume of misinformation in general, not the provenance of it.

The NYT may produce misinformation but it aims not to, and its staff of human writers are limited in the quantity that they can produce. They also publish corrections.

GPT enables anyone who can pay to generate a virtually unlimited volume of misinformation, launder it into 'articles' with fake bylines and saturate the internet with garbage.

I think we need to focus on the damage done.



Well, that's true for any large language model. As long as they exist, there will be a deluge of bot-written text producible for any purpose. At this point there is no putting the genie back in the bottle.

In that case the bigger danger is open-source LLMs. OpenAI at least monitors the use of its endpoints for obvious harm.


> The NYT may produce misinformation but it aims not to, and its staff of human writers are limited in the quantity that they can produce. They also publish corrections.

Except when it affects their bottom line, of course. They publicly misrepresented how meta tags work during the lawsuits against Google to extract more money (as most newspapers did). And I have no doubt they will extensively misrepresent how LLMs really work as well.




