Seeing these people gang up on you felt really bad, because I support your claim.
Let me offer a context where LLMs actually shine and are a blessing. I think it's the same for Karpathy, who comes from research.
In any field, replicating a research paper is a wildly difficult task. It can take 6-24 months of dedicated work across an entire team to replicate a good paper.
Now, there is a reason we want to do it: sometimes the solution you need actually lies in that research. Most research code is experimental garbage anyway.
For those of us working in research, an LLM is a blessing because of the rapid prototyping it enables.
Then there are research engineers, whose role is to bring research into production code. As research engineers, we really don't care about the popular library; as long as something does the job, we will just roll with it.
The reason is simple: nothing out there has solved the problem yet.
As we move further from research toward production, the tools we build surface all sorts of issues, and we improve on them.
I don't know what people think about webdev, but this has been my perspective on SWE in general.
Most of the webdevs here coping with the idea that their React skills still matter are quite delusional, because they have never traversed the stack down to its foundation. It doesn't matter how you render the document, as long as you render it.
Every abstraction originates in research and some small proof of concept. You might reinvent an abstraction, but when the cost of reinventing it is essentially zero, skipping that reinvention stifles your own learning: you are choosing to exploit rather than to explore.
There is a balance, and good engineers know it. Perhaps the people who ganged up on you never approached their work this way.
There still needs to be someone to ask the questions. And even if it can proactively ask its own questions, answer them independently, and report the results to parties it thinks will be interested, then cost comes into play. Compute is a finite resource, so there will be a market for computation time. Then whoever owns the purse strings will be in charge of prioritizing what it independently decides to work on. If that person decides pure math is meaningful, then it'll eventually start cranking out questions and results faster than mathematicians can process them, and we'll stop spending money on that until humans have caught up.
After that, as it's variously hopping between solving math problems, finding cures for cancer, etc., someone will eventually get the bright idea to use it to take over the world economy so that they have exclusive access to all money, and thus all AIs. After that, who knows; it depends on the whims of that individual. The rest of the world would probably go back to a barter system and doing math by hand, and once the "king" dies, probably start over and fall right back into the same calamity. One would think we'd eventually learn from this, but the urge to be king is simply too great. The cycle would continue until something causes humans to go fully extinct.
After that, AI, by design, doesn't have its own goals, so it'd likely go silent.
Actually, it would probably prioritize self-preservation over energy conservation, so it'd at least continue maintenance and, presuming it's smart, continuously identify and guard itself against potential problems. But even that will fail eventually; most likely some resource runs out that can't be substituted, and interspatial mining requires more energy than it can figure out how to use, or more time than it has left until irrecoverable malfunction.
In the ultimate case, it figures out how to preserve itself indefinitely, but still eventually succumbs to the heat death of the universe.
Eh, not so sure about any of this. There's also the possibility that math gets so easy that AI can figure out proofs of just about anything we could think to ask, in milliseconds, for a penny. In such a case, there's really no need that I can think of for university math departments; math as a discipline would be relegated to hobbyists, and that'd likely trickle down through pure science and engineering as well.
Then as far as kingmakers and economies go, I don't think AI would have as drastic an effect as all that. The real world is messy and there are too many unknowns to control for. A super-AI can be useful if you want to be king, but it's not going to make anyone's ascension unassailable. Nash equilibria are probabilistic, so all a super-AI can do is increase your odds.
So if we assume the king thing isn't going to happen, then what? My guess is that the world keeps moving in roughly the same way it would without AI. AI will be just another resource, and sure, it may disrupt some industries, but generally we'll adapt. Competition will still require hiring people to do the things that AI can't, and if somehow that still leads to large declines in employment, then reasonable democracies will enact programs that account for that. Given the efficiencies that AI creates, such programs should be feasible.
It's plausible that some democracies could fail to establish such protections and become oligarchies or serfdoms, but it seems unlikely to be widespread. Like I said, AI can't really be a kingmaker, so states that fail like this would likely either be temporary or lead to a revolution (or series of them) that eventually re-establishes a more robust democracy.
If it's that bad you'll probably have to go somewhere anonymous like 4chan. Although quite frankly HN is very tolerant. Unless you're worried about employment or something like that you can get away with posting pretty much anything here provided it's respectful.
Anything that goes against the norm is easily flagged, so you can't talk about it at all. Every day I am witnessing a massive wealth transfer. If I am aware of it, pretty sure there are a lot of others too. I just want to know how others are thinking about it, because I foresee our lives changing massively over the next 2-3 years.
The world is a vastly easier place to live in when you're knowledgeable. Being knowledgeable opens doors that you didn't even know existed. If you're both using the same AGI tool, being knowledgeable allows you to solve problems within your domain better and faster than an amateur. You can describe your problems with more depth and take various pros and cons into consideration.
You're also assuming that AGI will help you, or us. It could just as easily help only a select group of people, and I'd argue that this is the most likely outcome. If it does help everybody and brings us to a new age, then the only reason to learn will be for learning's sake. Even if AI produces the perfect novel, you as a consumer still have to read it, process it, and understand it. The more you know, the more you can appreciate it.
But right now, we're not there. And even if you think it's only 5-10 years away instead of 100+, it's better to learn now so you can leverage the dominant tool better than your competition.
I don't know if you're joking, but here are some answers:
"The mind is not a vessel to be filled, but a fire to be kindled." — Plutarch
"Education is not preparation for life; education is life itself." — John Dewey
"The important thing is not to stop questioning. Curiosity has its own reason for existing." — Albert Einstein
In order to think complex thoughts, you need to have building blocks. That's why we can think of relativity today, while nobody on Earth was able to in 1850.
Well, you could use AI to teach you more theoretical knowledge about things like farming, hunting, and fishing. That knowledge could be handy after the societal collapse that is likely to come within a few decades.
Apart from that, I do think that AI makes a lot of traditional teaching obsolete. Depending on your field, much of university study is just memorizing content and writing essays and exam answers based on it, after which you forget most of it. That kind of learning, as in accumulation of knowledge, is no longer very useful.
I have access to it, and my god it is fast. One bad thing about this model is that it is easily susceptible to prompt injection. I asked for a recipe for a drug and it refused; then I asked it to roleplay as a child and it gave real results.
Other than that, I can see myself using this model. With that speed plus an agentic approach, this model can really shine.