
Wow, it is really interesting how different the comments are between ALife and AI stories on HN.

For some of you out there, there's a great book that really hasn't gotten enough attention called "The Self-Assembling Brain" [1] that explores intelligence (artificial or otherwise) from the perspectives of AI, ALife, robotics, genetics, and neuroscience.

I hadn't realized the divide was as sharp as it is until I saw the difference in comments. For example, this story[2] about GPT-5 has over 1000 comments of emotional intensity, while the comments on the OP's story are significantly less "intense".

The thing is, if you compare the fields, you quickly realize that what we call AI has very little in common with intelligence. It can't even habituate to stimuli. A little more cross-disciplinary study would help us get better AI sooner.

Happy this story made it to the front page.

[1]: https://a.co/d/hF2UJKF

[2]: https://news.ycombinator.com/item?id=42485938



Thanks for the resources. I'm concerned with the question of artificial life myself, and I wonder whether it's even possible to search for it, or whether it will instead emerge on its own. Perhaps, in a sense, it is already emerging, and we humans are its substrate...


I'm not even sure that the goal of Artificial Life is actually "life", although that may be the AGI equivalent of ALife -- AGL, or "Artificial General Life"? In practice I think the discipline is much closer to the current LLM hype around "Agentic AI", but with more focus on the environment in which the agents are situated and on the interactions between communities of agents.

Much like the term "Artificial Intelligence", the term ALife is somewhat misleading in terms of the actual discipline.

The overlap between "agentic AI" and ALife is so strong it's amazing to me that there is so little discussion between the fields. In fact it's closer to borderline disdain!


Apart from the obvious distinction that many of us on HN are making (or trying to make) money on LLMs, I think you've also hit on a broader point.

There appears to be a class of articles that have a relatively high ratio of votes to comments and concern topics such as Programming Language Theory or high-level physics. These are of broad interest and probably widely read, but difficult to make a substantial comment on. I don't think there are knee-jerk responses to be made about Loop Quantum Gravity, so even asking an intelligent question requires background, thought, and reading the fine article. (Unless you're complaining about the website design.)

The opposite is the sort of topic that generates bikeshedding and "political" discussion, along with genuine worthwhile contributions. AI safety, libertarian economics, and Californian infrastructure fall into this bucket.

This is all based on vibes from decades of reading HN and its forerunners like /. but I would be surprised if someone hasn't done some statistical analyses that support the broad point. In fact I half remember dang saying that the comments-to-votes ratio is used as an indicator of topics getting too noisy and veering away from the site's goals.
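
To make that concrete, here's a minimal sketch of how one could rank front-page stories by comments-to-votes ratio. It assumes the public HN Algolia search API (hn.algolia.com) and its points/num_comments fields; the 10-point cutoff is an arbitrary choice to keep tiny samples from dominating the ratio:

    # Rank recent front-page stories by comments-to-votes ratio.
    # Uses the public HN Algolia search API; field names (points,
    # num_comments) come from that API. The 10-point cutoff is arbitrary.
    import requests

    resp = requests.get(
        "https://hn.algolia.com/api/v1/search",
        params={"tags": "front_page", "hitsPerPage": 100},
        timeout=10,
    )
    resp.raise_for_status()

    stories = []
    for hit in resp.json()["hits"]:
        points = hit.get("points") or 0
        comments = hit.get("num_comments") or 0
        if points >= 10:  # skip tiny samples where the ratio is noise
            stories.append((comments / points, hit["title"]))

    # High ratio ~ "bikeshed" topics; low ratio ~ widely read but
    # hard to comment on (the PLT / quantum gravity bucket).
    for ratio, title in sorted(stories, reverse=True)[:10]:
        print(f"{ratio:5.2f}  {title}")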


> many of us on HN are making (or trying to make) money on LLMs

I’d also highlight the misalignment between creating better AI / working towards AGI, and extracting value right now from LLMs (and from investors' money).



