This book lines up with a lot of what I've been thinking: the centrality of prediction, how intelligence needs distributed social structure, language as compression, why isolated systems can't crack general intelligence.
But there are real splits on substrate dependence and what actually drives the system. Can you get intelligence from pure prediction, or does it need the pressure of real consequences? And deeper: can it emerge from computational principles alone, or does it require specific environmental embeddedness?
My sense is that execution cost drives everything. You have to pay back what you spend, which forces learning and competent action. In biological or social systems you're also supporting the next generation of agents, so intelligence becomes efficient search because there's economic pressure all the way down. The social bootstrapping isn't decorative, it's structural.
By that logic, wouldn't the electric kettle heating water for the coffee be intelligent? If it didn't measure temperature once switched on, it wouldn't know when to stop, and the man would have thrown it away, or at least stopped paying for the kettle's electricity.
I think we need a meta layer: the ability to reason over one's own goals (this does not contradict the environment creating hard constraints). The man has it. The machine may have it (notably, a paperclip maximizer will not count under this criterion). The crow does not.
Yes, if only a tiny amount. The example I use when explaining this to children is a toilet cistern. It's probably the closed-loop control system with which they have the most firsthand experience, so they understand it best. Also toilet funny haha.
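A minimal sketch in Python of the closed-loop idea the cistern and the kettle share (the names and numbers are mine, purely illustrative): sense a variable, compare it to a setpoint, act, repeat.

    # Toy bang-bang controller, same shape as the cistern or the kettle:
    # sense a variable, compare it to a setpoint, act, repeat.
    def heater_should_run(temp_c, setpoint=100.0):
        """Keep heating until the sensed value reaches the setpoint."""
        return temp_c < setpoint

    temp = 20.0
    while heater_should_run(temp):
        temp += 2.5  # crude plant model: each tick of heating adds some heat
    print(f"switched off at {temp:.1f} C")

The whole "behaviour" is one comparison per tick, which is about as tiny as a tiny amount of intelligence gets.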
You could say that, yes, that kettle is intelligent, or smart, as in smart watch. But the intelligence in question clearly derives from the human who designed that kettle. Which is why we describe it as artificial.
Similarly, a machine could emulate meta-cognition, but it would in effect only be a reflection and embodiment of certain meta-cognitive processes originally instantiated in the mind which created that machine.
Don't "real" consequences apply for setting weights? There's an actual monetary cost to train these models, and they have to actually perform to keep getting trained. Sure it's VC spend right now and not like, biological reproduction driving the incentives ultimately, but it's not outside the same structure.
Yes, but the (semi-)autonomous entity you're referring to now is the whole company, including all who work there and design the LLM system and negotiate contracts and all that. The will to persist and expand of all those humans together results in the will to expand of the company, which then evolves those systems. But the systems themselves don't contribute to that collective will.
Depending on the time horizon, the predictions change. So we get layers: what is going to happen in the next hour / tomorrow / next year / next 10 years / next 100, etc. (and layers of compression, of which language is just one). That naturally produces contradictions, which create bounds on "intelligence".
It really is a stupid system. No one rational wants to hear that, just like no one religious wants to hear the contradictions in their stories, or no one who plays chess wants to hear it's a stupid game. The only thing that can be said about chimp intelligence is that it has developed a hatred of contradictions/unpredictability and lack of control unseen in trees, frogs, ants, and microbes.
Stories become central to surviving such underlying machinery.
Part of the story we tell is: no, no, we don't all have to be Kant or Einstein, because we just absorb what they uncovered. So apparently the group or social structures matter. Which is another layer of pure hallucination. All social structures, if they increase the prediction horizon, also generate/expose themselves to more prediction errors and contradictions, not fewer.
So again, coherence at the group level is produced through story: religion will save us, the law will save us, Trump will save us, the Jedi will save us, AI will save us, etc. We then build walls and armies to protect ourselves from each other's stories. Microbes don't do this. They do the opposite, and have produced the Krebs cycle, photosynthesis, CRISPR, etc. No intelligence. No organization.
Our intelligence is just a bubbling cauldron, at the individual and the social level, through which info passes and mutates. Info that survives is info that can survive that machinery. And as info explodes, the coherence-stabilization process is overrun. Stories have to be written faster than stories can be written.
So Donald Trump is president. A product of "intelligence" and social "intelligence". Meanwhile, more microbes exist than stars in the universe. No Trump or ICE or Church or data center is required to keep them alive.
If we are going to tell a story about intelligence, look to Pixar or WWE. Don't ask anyone at MIT what they think about it.
The MIT vs. WWE contrast feels like a false dichotomy. MIT represents systematic, externalized intelligence (structured, formal, reductive, predictive). WWE or Pixar represent narrative and emotional intelligence. We do need both.
Also, evolution is the original information-processing engine, and humans still run on it just like microbes. The difference is just the clock speed. Our intelligence, though chaotic and unstable, operates on radically faster time and complexity scales. It's an accelerator that runs in days and months instead of generations. The instability isn't a flaw: it's the turbulence of much faster adaptation.
I think that's a bit of a false take. The earlier point didn't pivot on a specific definition of EQ (the pop-psychology take), but on the contrast between systematic intelligence (like MIT) and the storytelling ability (WWE) needed to create a coherent story. Whatever you want to call it, we clearly need both.
It's hard not to see consciousness (whatever that actually is) lurking under all of this you just explained. If it's emergent, the substrate wars might just be a detail; if it's not, maybe silicon never gets a soul.
I also posted a related piece on HN yesterday:
> What the Dumpster Teaches: https://news.ycombinator.com/item?id=45698854