It would be useful to predict things like earthquakes and tornadoes. Gambling on what politicians and celebrities will do is not science; it's degenerate court gossip.
Game theory can be applied to these problems, and if you can correctly model rewards, costs, and motives, you can predict the decisions of politicians more often than not.
Even granting that game theory can be applied successfully here, it does not really help with one-off events. Knowing the odds of a coin flip, for example, gives you no real help in predicting what the next flip will be.
This also ignores that the game theory of partisan games breaks down if any participant knows what the others will do, which is one of the field's more famous results.
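The point can be made concrete with the textbook zero-sum game of matching pennies (a standard illustration, not something from the comment above): against an opponent playing the 50/50 equilibrium, no strategy earns you anything on average, but a player who knows the opponent's move in advance wins every round.

```python
import random

# Matching pennies: the Matcher wins (+1) if both coins match, loses (-1) otherwise.
def play(matcher_move, mismatcher_move):
    return 1 if matcher_move == mismatcher_move else -1  # payoff to the Matcher

random.seed(0)
rounds = 10_000

# Case 1: the Mismatcher plays the mixed-strategy equilibrium (uniform random).
# Whatever the Matcher does (here, always "H"), the expected payoff is ~0.
payoff_blind = sum(play("H", random.choice("HT")) for _ in range(rounds)) / rounds

# Case 2: the Matcher somehow learns the Mismatcher's move before choosing.
# The equilibrium guarantee evaporates: the informed player wins every round.
payoff_informed = sum(
    play(m, m) for m in (random.choice("HT") for _ in range(rounds))
) / rounds

print(payoff_blind)     # close to 0.0
print(payoff_informed)  # exactly 1.0
```

The simulation shows why leaking intentions (e.g. via a prediction market) changes the game itself: the equilibrium payoff only holds while moves stay private.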
To that end, if you want to predict what someone will do, your best guide is usually their track record of doing that thing.
> you can predict decisions of politicians more often than not
What makes you believe this? The performance of economics and sociology experts using game theory to make predictions has been worse than a coin flip up to this point. It has also done enormous damage.
I don't think Nash envisioned politicians and their aides being able to profit from betting on their own decisions. Do you launch an attack on Eastasia? Well, the market right now says there's a 40% chance, so I guess it's a good idea to grab your crypto keys and make some bets before you call the Joint Chiefs.
As you can see from the downvotes and comments, they still don't get it.
LLMs make developers more efficient. That much is obvious to anyone who isn't blinded by fear.
People will respond, "But you still need developers!" True. You don't need nearly as many, though. In fact, with an LLM in their hands, the poor performers are more of a liability than ever. They'll be let go first.
But even the "smart" developers will be subsumed, as vastly more efficient companies outcompete the ones where they work.
Companies with slop-tolerant architectures will take over every industry. They'll have humans working there. But not many.
They do not. I review a ton of code, and while the quantity is going up, the quality is getting worse. LLMs only make developers more efficient if those developers skip the due diligence required to verify the output; they all say they don't skip it, and almost all of them do.
> I doubt hobbyists would describe their hobby as purgatory.
Programmers have become accustomed to a lot of cultural and financial respect for their work. That's about to disappear. How do you think radio actors felt when they were displaced by movies? Or silent film actors when they were displaced by talkies?
> I doubt the laborer would describe their toil as "craft".
Intellectual labor is labor. I'm a laborer in programming and I definitely consider it a craft. I think a lot of people here at HN do.
And they were and are of course right to feel those feelings, but it doesn't change the fact that the world is changing. Rarely do large changes benefit everyone in the world.
> And they were and are of course right to feel those feelings, but it doesn't change the fact that the world is changing. Rarely do large changes benefit everyone in the world.
I'm not sure who you are arguing against. No one here said that the world isn't changing. But it seems to me that the people who are disadvantaged by AI, which is potentially everyone who doesn't own a data center, should take steps to ensure their continued survival, instead of merely becoming serfs to the ruling oligarchs.
I've been working in security for more than 20 years and have seen the deleterious effects of security through obscurity first-hand. Why does "adversarial engineering" rely on obscurity?