I'm no expert, but I've been looking into the prospects and mechanisms of automated reasoning using LLMs recently, and there's a lot of work along those lines in the research literature that is pretty interesting, if not exactly enlightening. It seems clear to me that LLMs are not yet capable of understanding simple implication, much less full-blown causality. It's also unclear how limited LLMs' cognitive gains will be given so incomplete an understanding of the mechanisms behind the world's multitude of intents/goals, actions, and responses. The concept of cause and effect is learned by every animal (to some degree), and in humans long before language. It forms the basis for all rational thought. Without understanding it natively, what is rationality? I foresee longstanding difficulties for LLMs evolving into truly rational beings until that comprehension is fully realized. And I see no sign of that happening, despite the promises made for o1 and other RL-based reasoners.
Yeah, one of the tricky things about causality is that it's not unique. If you didn't record the history of the event, then you only have a probabilistic notion of it, since many different things, in different permutations, could lead to the result you observed. This has led some people to believe in multiple universes, when really it's no more mysterious than there being multiple ways to sum numbers to ten.
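To make the analogy concrete, here's a minimal Python sketch (purely illustrative, not from either comment): it enumerates the distinct combinations of positive integers that sum to ten, the same way many distinct causal histories can collapse into one observed outcome.

    from itertools import combinations_with_replacement

    TARGET = 10

    # Every multiset of positive integers (capped at 4 parts just to keep
    # the output short) whose elements sum to TARGET.
    ways = [
        combo
        for size in range(1, 5)
        for combo in combinations_with_replacement(range(1, TARGET + 1), size)
        if sum(combo) == TARGET
    ]

    for combo in ways:
        print(" + ".join(map(str, combo)), "=", TARGET)

    # Many distinct "histories" collapse to the same observed total, so the
    # total alone leaves the history underdetermined.
    print(f"{len(ways)} distinct ways to reach {TARGET}")

Knowing only the sum (the observed effect) doesn't tell you which combination (the history) produced it, which is the probabilistic notion of causes the comment above is pointing at.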