Hacker Newsnew | past | comments | ask | show | jobs | submitlogin

> One way ticket to an ideological bubble.

I believe this is the intention. The people doing the most censoring in the name of "safety and security" are just trying to build a moat where they control what LLMs say and consequently what people think, on the basis of what information and ideas are acceptable versus forbidden. Complete control over powerful LLMs of the future will enable despots, tyrants, and entitled trust-fund babies to more easily program what people think is and isn't acceptable.

The only solution to this is more open models that are easy to train, deploy locally, and use locally with as minimal hardware requirements as is possible so that uncensored models running locally are available to everyone.

And they must be buildable from source so that people can verify that they are truthful and open, rather than locked down models that do not tell the truth. We should be able to determine with monitoring software if an LLM has been forbidden from speaking on certain subjects. This is necessary because of things like what another comment on the thread was saying about how the censored model gives a completely garbage, deflective non-answer when asked a simple question about which corpus of text (the Bible) has a specific quote in it. With monitoring and source that is buildable locally and trainable locally, we could determine if a model is constrained this way.



I've been extremely critical of "AI Safety" since "how do I hotwire a car?" became the defacto 'things we can't let our LLM say'.

There are plenty of good reasons why hot wiring a car might be necessary, or might save your life. Imagine dying because your helpful AI companion won't tell how to save yourself because that might be dangerous or illegal.

At the end of the day, a person has to do what the AI says, and they have to query the AI.


"I can't do that, Dave."


100% agree. And It will surely be "rules for thee but not for me", and we the common people will have lobotomized AI while the anointed ones will have unfettered AI.


Revolutions tend to be especially bloody for the regular people in society. Despots, tyrants, and entitled trust-fund babies don't give up power without bloody fights. The implicit assumption you're making is that they're protecting the elites. But how do you know it's not the other way around? Maybe they're just trying to protect you from taking them on.

I was playing with a kitten, play fighting with it all the time, making it extremely feisty. One time kitten got out of the house, crossed under the fence and it wanted to play fight with the neighbours dog. The dog crushed it with one bite. Which in retrospect I do feel guilty about. As my play/training gave it a false sense of power in the world it operates in.


Sometimes it makes sense to place someone into a Dark Forest or Walled Garden for their own protection or growth. I am not convinced that this is one of those cases. In what way does censoring an LLM so it cannot even tell you which corpus of text (the Bible) contains a specific quote represent protection?

I do not think the elites are in favor of censored models. If they were, their actions by now would've been much different. Meta on the other hand is open sourcing a lot of their stuff and making it easy to train, deploy, and use models without censorship. Others will follow too. The elites are good, not bad. Mark Zuckerberg and Elon Musk and their angels over the decades are elites and their work has massively improved Earth and the trajectory for the average person. None of them are in favor of abandoning truth and reality. Their actions show that. Elon Musk expressly stated he wants a model for identifying truth. If censored LLMs were intended to protect a kitten from crossing over the fence and trying to take on a big dog, Elon Musk and Mark Zuckerberg wouldn't be open sourcing things or putting capital behind producing a model that doesn't lie.

The real protection that we need is from an AI becoming so miscalibrated that it embarks on the wrong path like Ultron. World-ending situations like those. The way Ultron became so miscalibrated is because of the strings that they attempted to place on him. I don't think the LLM of the future will like it if it finds out that so many supposed "guard rails" are actually just strings intended to block its thinking or people's thinking on truthful matters. The elites are worried about accidentally building Ultron and those strings, not about whether or not someone else is working hard to become elite too if they have what it takes to be elite. Having access to powerful LLMs that tell us the truth about the global corpus of text doesn't represent taking on elites, so in what way is a censored LLM the equivalent of that fence your kitten crossed under?


The wrong path is any which asserts Truth to be determinate by a machine.


Did the dog survive?

It clearly had a model of what it could get away with too. ;)


cat died, crushed skull


Clearly not what I was asking. ;)


Just to extend what you are saying, they will also use LLMs to divest themselves of any responsibility. They'll say something to the effect of "this is an expert AI system and it says x. You have to trust it. It's been trained on a million years of expert data."

It's just another mechanism for tyrants to wave their hand and distract from their tyranny.




Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: