
Is that a more convoluted way of saying that a next-token predictor can't exhibit complex behavior, i.e. the stochastic parrot argument? Or that one modality can't be a good enough proxy for another? If so, you probably need to pay more attention to the interpretability research.

But really, most people should start with strong definitions. Consciousness, intelligence, and other adjacent terms have never been defined rigorously enough, even if plenty of philosophers think otherwise, so these discussions always end up dancing around ill-defined terms.



Neurobiology is built up from the base units of consciousness outward, not from intuited interpretation. E.g., prediction has nothing inherently to do with consciousness; that's a framework imposed on brains post hoc.

https://pubmed.ncbi.nlm.nih.gov/38579270/

And

https://mitpress.mit.edu/9780262552820/the-spontaneous-brain...

Both easily refute prediction or prediction error as fundamental.

The path to intelligence or consciousness isn't mimicry of our interpretations.

In terms of strong definitions, start at the base coders: oscillations, dynamics, topologies, sharp-wave ripples, and I would say roughly sixty more rigorously defined material units and processes. This reverse intuition is going nowhere; it's pseudoscientific nonsense for filling social media timelines.


I started writing the counterargument, but I think you have a skewed idea of what both interpretability in ML and neurobiology are, especially given how you speak in absolutes about things nobody fully understands.


Fundamentally incorrect across the board. We study ML for any signs of parallel function, even at a tinkering level. Nope.

Look at Unlocking the Brain (both volumes), Rhythms of the Brain, and The Brain from Inside Out, and these are just the tip.



