Religion is itself a signalling mechanism. Everybody knows there is bad stuff in there. It is like fireflies synchronising their flashes, or like magnetization that has locked in the spin states, so that changing direction becomes very difficult.
Max Tegmark, a cosmologist and MIT professor, is known for his "provocative ideas" and has a self-imposed rule regarding his work: "Every time I've written ten mainstream papers, I allow myself to indulge in writing one wacky one". This approach allows him to pursue unconventional, "crazy" theories without jeopardizing his reputation as a serious scientist.
The "God of the gaps" theory is a theological and philosophical viewpoint where gaps in scientific knowledge are cited as evidence for the existence and direct intervention of a divine creator. It asserts that phenomena currently unexplained by science—such as the origin of life or consciousness—are caused by God.
We are inverting "God of the gaps" into "LLM of the gaps", where gaps in LLM capabilities are treated as inherently negative and limiting.
But the scepticism does not actually come from the gaps in capability; it arises from an understanding of how the technology works and an honest acknowledgement of how far it could go.
Spirituality helps. Listen to Ram Dass, Alan Watts, Adyashanti, etc. They help make sense of the macro and micro picture of life. Think of it as narrative ointment for your thinking mind and a narrative center of gravity. Also check your vitamin D levels; get tests done. My vitamin D levels were so low that even farting felt like an effort. I am not joking.
> With agents, does it become practical to have large codebases that can be read like a narrative, whose prose is kept in sync with changes to the code by tireless machines?
I think this is true. Your point supports it. If either the explanation/intention or the code changes, the other can be brought into sync. Beautiful post. I always hated that research papers don't read like novels, e.g. "OK, we tried this, which was unsuccessful, but then we found an adjacent approach and it helped."
Computer Scientist Explains One Concept in 5 Levels of Difficulty | WIRED
Computer scientist Amit Sahai, PhD, is asked to explain the concept of zero-knowledge proofs to 5 different people: a child, a teen, a college student, a grad student, and an expert. Using a variety of techniques, Amit breaks down what zero-knowledge proofs are and why they're so exciting in the world of cryptography.
I have curated my YouTube recommendations over the years. It knows my likes and dislikes very well. It knows a lot about me.
The same moat exists in interactions with Claude. Claude remembers so many of my preferences. It knows that I work in Python and pandas and starts writing code for that combination. It knows what type of person I am and what kind of toys I want my nephews and nieces to play with. These "facts" about the person are the moat now. Stack Overflow was a repository of "facts" about what worked and what didn't. Those facts, the user chat sessions, are now Anthropic's moat.
You are missing the correlations that Claude can derive across all these sessions, across all users. In Google Analytics, when I visit a page and navigate around until I find what I was looking for (or don't), that session data tells website owners how to optimize. Even in Google search results, when I click on the 6th link and not the first, it sends a signal about how to rearrange the results next time, or even personalize them. That same paradigm will apply here. This is network effects, personalization, and ranking coming together beautifully. Once Anthropic builds that moat, it will be irreplaceable. If you doubt that, ask all users to jump from WhatsApp to Telegram or Signal and see how difficult it is. When Anthropic gives you the best answer without asking too much, the experience is 100x better.
The underlying technology is a thin layer of queryable knowledge/"memories" between you and the LLM, which in turn gets added to the context of your message to the LLM. Likely RAG. It can be as simple as an agents.md that you give it permission to modify as needed. I really don't think they are correlating your "memories" with other people's conversations. There is no way for the LLM to know what is or isn't appropriate to share between sessions, at the moment. That functionality may exist in the future, but if you just export your preferences, it still works.
The moat - at this point in time - is really not as deep and wide as you are making it out to be. What you are imagining doesn’t exist yet. Indexing prior conversations is trivially easy at this point, you can do it locally using an api client right this moment.
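The "thin layer of memories added to the context" idea above can be sketched in a few lines. This is a hypothetical illustration, not Anthropic's actual implementation: the file name `memories.json` and all function names are made up, and a real system would use retrieval/embedding rather than dumping every fact into the prompt.

```python
import json
from pathlib import Path

# Hypothetical local store of user "facts", analogous to an agents.md
# the assistant is allowed to edit.
MEMORY_FILE = Path("memories.json")

def load_memories() -> list:
    """Load previously saved facts about the user, if any."""
    if MEMORY_FILE.exists():
        return json.loads(MEMORY_FILE.read_text())
    return []

def save_memory(fact: str) -> None:
    """Append a new preference/fact to the local store."""
    memories = load_memories()
    memories.append(fact)
    MEMORY_FILE.write_text(json.dumps(memories, indent=2))

def build_prompt(user_message: str) -> str:
    """Inject stored memories into the context sent to the LLM.

    A production system would likely retrieve only the relevant
    memories (RAG); here we naively prepend all of them.
    """
    preamble = "\n".join(f"- {m}" for m in load_memories())
    return f"Known user preferences:\n{preamble}\n\nUser: {user_message}"
```

Because the store is just a local file, "exporting your preferences" as described above amounts to copying `memories.json` to another client, which is exactly why the moat may be shallower than it looks.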
Besides all that, you will be shocked at how quickly a new service can reconstruct your preferences. I started a new YouTube account, and it was basically the same feed within a few days.
In any case, my feeling is that we should have learned at this point not to keep our data in someone else’s walled garden.
> Besides all that, you will be shocked at how quickly a new service can reconstruct your preferences. I started a new YouTube account, and it was basically the same feed within a few days.
That's because your location data, Wi-Fi name, etc. hone in on the fact that this is the same person as before. You are actually supporting my point rather than denying it.
Compare that behaviour with Warren Buffett or Charlie Munger. They wanted more money only to pursue their other interests, and they succeeded in earning more money than imaginable.