Hacker News

> The anthropization of llms is getting off the charts.

What's wrong with that? If it quacks like a duck... By the same reductionist logic, a duck is just a complex pile of organic chemistry, so ducks aren't "real" either and the concept of "a duck" is wrong.

I honestly believe there is a degree of sentience in LLMs. Sure, they're not sentient in the human sense, but if you define sentience as whatever humans have, then of course no other entity can be sentient.



> What's wrong with that? If it quacks like a duck... By the same reductionist logic, a duck is just a complex pile of organic chemistry, so ducks aren't "real" either and the concept of "a duck" is wrong.

To simulate a single biological neuron, you need a neural network with roughly 1M parameters.

The SOTA models whose sizes we know are ~650M parameters.

That's the equivalent of a roundworm.

So if it quacks like a duck, has the brainpower of a roundworm, and can't walk, then it's probably not a duck.
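The back-of-envelope arithmetic behind that comparison, taking the comment's two numbers (1M parameters per simulated neuron, ~650M total parameters) as assumptions rather than established facts:

```python
# Assumptions from the comment above, not verified figures:
PARAMS_PER_NEURON = 1_000_000   # claimed NN size needed to emulate one biological neuron
MODEL_PARAMS = 650_000_000      # "~650m parameters" for the SOTA models in question

equivalent_neurons = MODEL_PARAMS // PARAMS_PER_NEURON
print(equivalent_neurons)  # 650 -- same order of magnitude as C. elegans' 302 neurons
```

On those assumptions, the model works out to a few hundred "neuron equivalents", which is roundworm scale; the conclusion stands or falls entirely on the parameters-per-neuron figure.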


You just convinced me that AGI is a lot closer than I previously thought, considering that the bulk of our brain's job is controlling our bodies and responding to stimuli from our senses, not thinking, talking, planning, coding, etc.


A stegosaurus managed to live using a brain the size of a walnut on top of a body the size of a large boat. The majority of our brain is doing something else.


OK, so you're saying that the technology to make AI truly sentient is there; we just need a little more computational power or some optimization tricks. Like how raytracing wasn't possible in 1970 but is now. Neat.


Yes, in the same way that a human is an optimization of a roundworm.


This isn't completely wrong, though.




