The interesting version of the argument isn't about substrate: it's about motivation.
Present the trolley problem to GPT-4 and it gives you a philosophy survey answer.
Present it to a human and their palms sweat. The gap isn't computation, it's that humans are value-making machines shaped by millions of years of selection pressure.
Pollan lands on the wrong argument (biology vs. silicon) when the real one is: where do the values come from, and can they emerge without a reproductive lineage that stakes survival on getting them right?
I'm not sure I would call it a requirement for consciousness, but knowing that most beings with general intelligence (humans) have a form of it similar to my own does make it easier to sleep at night.
To clarify: I'm not talking about morals specifically. I mean value in the broader sense of spontaneously assigning relative importance to things, producing a hierarchy that drives action.
You're thirsty. There's pond water in the forest and clean well water in the town square, but you're an escaped prisoner. Suddenly the value hierarchy flips: safety trumps water quality. You do this instantly, with incomplete information, integrating survival, context, and preference in a way that no one programmed into you.
Morality is one expression of this capacity, but so is aesthetic judgment, risk assessment, curiosity, and the decision to walk down a dark alley or not. The trolley problem is just a dramatic example. The mundane examples are actually more telling, because we do them thousands of times a day without noticing.
No current AI has any form of this. It has no mechanism for deciding that anything matters more than anything else except through weightings that were derived from human-generated training data. It borrows our value hierarchies statistically. It doesn't have its own.
The substrate argument is the wrong hill for Pollan to die on. The stronger version isn't "meat vs. silicon" — it's that brains are value-making machines operating under evolutionary pressure, and no current AI architecture has anything analogous to that. You can simulate the outputs of valuation without having the mechanism. The question isn't whether consciousness can exist in another substrate, it's whether you can get there without the thing that actually drives human cognition: spontaneous assignment of moral and survival value with no prior programming.
AI is an extension and acceleration of so-called "evolutionary pressure". But so far AI models lack both agency and consciousness, and do not "experience" this pressure, though they are entirely defined by it. They can also explain this relationship to you.
That's true in the same sense that agriculture and nuclear weapons are extensions of evolutionary pressure. Everything humans produce is, by definition. But that reading empties the term of any useful meaning.
The distinction that matters: evolutionary pressure operates through differential survival across generations, where the organism has skin in the game. AI models are optimized via gradient descent on loss functions that humans define. That's artificial selection toward human objectives, not evolutionary pressure in any meaningful sense. The model has no stake in the outcome. Nothing is at risk for it.
You actually make this point yourself in your second sentence: they "lack both agency and consciousness, and do not experience this pressure." I agree completely. But that's precisely why the first sentence doesn't do any work. If they don't experience it, then calling it evolutionary pressure is metaphorical at best. And the metaphor obscures the exact gap we should be paying attention to: the absence of anything at stake.
The commonality breaks down at value assignment. You hear an unexpected sound and have a threat/delight assessment in 170ms. Faster than Google serves a first byte. You do this with virtually no data.
An LLM doesn't assign value to anything; it predicts tokens. The interesting question isn't whether we share a process with LLMs; it's whether the thing that makes your decisions matter to you (moral weight, spontaneous motivation) can emerge from a system that has no survival stake in its own outputs. I wrote about this a few years ago as "the consciousness gap": https://hackernoon.com/ai-and-the-consciousness-gap-lr4k3yg8
Two layers vibe coding can't touch: architecture decisions (where the constraints live) and cleanup when the junior-dev-quality code accumulates enough debt. Someone has to hold the mental model.
Although I write very little code myself anymore, I don't trust AI code at all. My default assumption: every line is the most mid possible implementation, every important architectural constraint violated wantonly. Your typical junior programmer.
So I run specialized compliance agents regularly. I watch the AI code and interrupt frequently to put it back on track. I occasionally write snippets as few-shot examples. Verification without reading every line, but not "vibe checking" either.
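To make the few-shot snippet idea concrete, here's a minimal sketch of how I wire a hand-written example into the prompt. All names and the snippet content are hypothetical illustrations, not code from any real pipeline:

```python
# Build a prompt that anchors the AI to a preferred pattern by
# prepending a hand-written snippet as a few-shot example.
FEW_SHOT_SNIPPET = '''\
# Preferred: explicit dependency injection, no module-level state.
def make_client(config: dict) -> "ApiClient":
    return ApiClient(base_url=config["base_url"],
                     timeout=config.get("timeout", 10))
'''

def build_prompt(task: str, examples: list[str]) -> str:
    """Prepend few-shot snippets so generated code imitates their style."""
    shots = "\n\n".join(f"Example of the style I want:\n{e}" for e in examples)
    return f"{shots}\n\nNow, following the same style:\n{task}"

prompt = build_prompt("Write a make_reporter(config) factory.",
                      [FEW_SHOT_SNIPPET])
```

The point is that the snippet carries the architectural constraint implicitly; the model imitates the shape of the example rather than needing the rule spelled out.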
I like this. The few-shot example snippet method is something I’d like to incorporate in my workflow, to better align generated code with my preferences.
I have written a research paper on another interesting prompting technique that I call axiomatic prompting. On objectively measurable tasks, when an AI scores below 70%, including clear axioms in the prompt systematically increases success.
In coding this translates to: when trying to impose a pattern or architecture different enough from the "mid" approach the AI is compelled to use, including axioms about the approach (in an IF-this-THEN-that style, as opposed to few-shot examples) will improve success.
The key is the 70% threshold: if the model already has enough training data, axioms hurt. If the model is underperforming because the training set did *not* have enough examples (for example hyperscript), axioms help.
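As an illustration of the IF-this-THEN-that style (the axiom wording and helper below are my own hypothetical rendering of the technique, not text from the paper):

```python
# Axiomatic prompting: state the rules as explicit IF/THEN axioms
# instead of showing examples. Useful when the target pattern is
# underrepresented in the training data (e.g. hyperscript).
AXIOMS = [
    "IF an element needs behavior THEN attach it with a hyperscript "
    "_ attribute, not addEventListener.",
    "IF state must be shared THEN store it on the element, "
    "not in a global variable.",
]

def axiomatic_prompt(task: str, axioms: list[str]) -> str:
    """Prefix the task with numbered axioms the model must obey."""
    rules = "\n".join(f"Axiom {i}: {a}" for i, a in enumerate(axioms, 1))
    return f"Follow these axioms strictly:\n{rules}\n\nTask: {task}"

prompt = axiomatic_prompt("Add a click counter to the button.", AXIOMS)
```

Unlike few-shot snippets, the axioms make the constraints explicit and enumerable, which is exactly what helps when the model can't pattern-match its way to the answer.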
Been saying this for years about frontend environments too. My genx.software does the same thing with declarative HTML attributes instead of imperative JavaScript config. Zero setup, zero sync bugs between what you declare and what you get.
Preventive measure: get Scrum Master certified yourself. The training can even be fun with a good instructor.
Then when professional managers come sniffing around muttering about Scrum, you say: "I am a certified Scrum Master. Our process is already 100% Scrum."
Interesting strategy. I've thought about getting certified just to have the credibility. The irony is that a certified Scrum Master saying "we don't need full ceremony right now" is much harder to dismiss as "doesn't understand Agile."
The key to success in the world of coding assistants is to be a good manager. The AI is a very fast, but also very stupid, programmer. It will make a ton of architectural mistakes and, more often than not, pick the most mid solution possible. If you are a good code architect, and you can tell the difference between a mid pattern and a good one and force the AI to do things right, you will rise to the top.
Yes, but I am a jr. dev, so there are certain implementations I haven't seen or been familiarized with. That's why it's sometimes difficult for me to notice code that might break in the future for certain cases; due to my inexperience it is hard to catch.
I admire your honesty. I suspect that attitude will take you far.
Exercism.io is excellent for exactly the experience you're seeking. The mentored tracks force you to see multiple solutions to the same problem, which builds pattern recognition faster than production work alone. You start noticing when code "feels" fragile before you can articulate why.
Thanks, this looks like a helpful tool. Noticing that code is fragile and working toward a better solution is exactly where I get stuck in AI development. I will definitely explore it more.