tech_ken's comments | Hacker News

Another bingo square for that 'AI is gambling' post (https://news.ycombinator.com/item?id=47428541)

Yes, in my (somewhat tinfoil) opinion the point is to have an emotional impact on the workforce overall (or at least, that's one of the points). Tech workers had a really good 20 years in the US, and kind of forgot that they were ultimately still wage workers. I think the culture circa 2018 took for granted a basic level of respect and cooperation from upper executives, and workers were beginning to exercise their power to achieve political goals, which was annoying to the tech ownership class. I think one of the major strategic turns of the last 4ish years is the use of precarity and high turnover to corrode worker solidarity in fields that used to be ironclad, respectable white-collar work. By simultaneously narrowing the hiring window ('junior devs are replaceable with AI') and expanding the opportunities to be culled ('we are axing this division to cover our moonshot outlays'), capital cultivates a desperate and compliant workforce. Bottom-up culture is woke; in the 2020s the folks in power want top-down directives that are followed unquestioningly. It's a similar approach to how the executive branch was brought to heel by DOGE.

Or the current crop of companies has just ossified and is waiting for a disrupter to kill it. You can’t get that big and be around for that long without the original culture dying, it seems. This isn’t the first wave of companies this sort of thing has happened to, is it?

Can't speak for everyone here, but I (as a US citizen) am way less bothered by sports gambling specifically than I am by generalized Kalshi-type gambling being abused by powerful insiders in the federal government (or other institutions). Like yeah I don't think it's great that we've enabled yet another route for young men to completely ruin their lives, but civil liberties, personal responsibility, etc. etc. etc.

What really scares me is how transparently the current US executive branch has basically been running a Black Sox scam for the last year or so. This is not something I think is really happening with, e.g., Ladbrokes. It seems more like an even more insidious form of the insider trading that is already disgustingly prevalent across the whole US political system, except now it's even less traceable and even easier to exploit for things like military actions.

edit: Like is this kind of stuff already prevalent in places where gambling is legal? https://readwrite.com/threats-israeli-reporter-polymarket/


> You're always going to have some sort of insider leaks, and quite frankly I don't care if they make money off of it in a betting app

Absolutely garbage take, to be quite honest with you. War profiteering is one of the most heinous crimes imaginable, and the last thing we need in this world is more opportunities for it. Regardless of whether the punters getting screwed consented to gambling, the problem is the perverse incentives it creates at the highest echelons of power. Abusing access to military intel for profit is foul behavior that will only degrade the quality of our governance and foreign policy, not to mention the literal lives that will be lost as a result.


I don't think paradigm shifts have to be 'better' in some march-toward-progress sense, they can be lateral or even regressive in that way and still lead to longer-horizon improvements.

I also think what's practically applicable changes constantly. Perhaps we're truly at the End of Science, but empirically we've been wrong every other time we've said that. My money says there's more race left to run.


On that note, Terence Tao gave a good interview to Dwarkesh Patel talking about Kepler. He pointed out that the previous geocentric models were actually more accurate than Kepler's at the time, in part because they'd had so much complexity piled on to solve minor errors. Kepler's theory was more elegant, but at the time it wasn't necessarily a better model.

I think important paradigm shifts can often look like this - there's not necessarily a reason to expect them to be instantly optimal. Deep Learning vs 'good old-fashioned AI' is another example of this dichotomy; it took a long time for deep learning to establish itself.


I like this a lot. The Innovator's Dilemma for science.

The new, simpler tool always competes with highly adapted, complex tools to reach a region of value generation.

It starts where its greater simplicity, despite fewer complementary adaptations, is a great advantage.

Then it slowly accumulates its own version of the practical complements that let it excel overall.


> I don't think paradigm shifts have to be 'better'

But they do. Paradigm shifts happen because the new paradigm explains the unexplained and, importantly, also covers the old model. If a new paradigm leaves prior data unexplained, it will never be adopted.

> Perhaps we're truly at the End of Science

Who said that? Just because the core of our current models seems pretty rock steady doesn't mean there's no more science. It simply means that we can mostly expect refining rather than radical discovery.

There will be sub-paradigm shifts, but there likely won't be major "relativity" moments from here on out.


> Paradigm shifts happen because the new paradigm explains the unexplained and, importantly, also covers the old model

Empirically it seems that paradigm shifts are driven more by deaths and retirements than by improved fit to the data. Moreover, the way you reconcile old data with the new model can be contestable; it's not like everyone all at once says "oh, this new model is clearly a strict superset of the previous one, time to adopt it". With all that said, I think one could argue that this stuff is basically noise and that the process still 'trends toward progress' (and I'd agree). But I would say that the scale of the noise can be quite large relative to what a human might experience in their life. I was sort of imagining social disruption (like a dark-age-type regression) as the 'backwards paradigm shift'.

> but there likely won't be major "relativity" moments from here on out

I cannot understand how anyone treats this as something that can be objectively concluded; by definition these kinds of radical paradigm shifts are basically unforeseeable up until they happen. I called it the "End of Science" to draw a parallel to "End of History"-type thinking, because both (IMO) take the view that "there will be no more revolutions, only incremental adjustments on an unshakeable core into infinity", which I personally feel is a 'vibes-based' assessment of things. It's not even that I disagree with it so much as I feel the statement is basically (and will always be) a pure guess, one which many people have made and been wrong about in the past.


> I cannot understand how anyone treats this as something that can be objectively concluded

Mostly because the room for the unexplained in physics is really small. It's possible that we end up finding some sort of big revelation about quantum physics that completely changes how we view relativity. But even in that case, we are more likely to find that relativity is just a simplification of a more complex model with better explanatory power. Very much like how Newtonian physics still works really well from quite small scales up to anything most humans will deal with on Earth; it's only when you start talking about uncommon experiences in extreme environments that relativity becomes a requirement to make the math work.

> there will be no more revolutions, only incremental adjustments on an unshakeable core into infinity

I guess I'm just more comfortable with that position. A lot of the revolutions in science circled around detecting and measuring things previously immeasurable and unseeable. The study of EMF exploded when it did because that's also when our ability to generate and measure electricity became more than just a party trick.

We are at a point where what remains is unknown unknowns, with no theoretical way to observe them. The physics models at the fringes are mostly centered around things we can't measure.

There just aren't many interactions we can't currently predict; the only one I know of is radioactive decay.

And a lot of this shows in modern society. In physics, the last major paradigm shift was relativity, a model that is now more than 100 years old. Everything we have currently is just incremental improvement on that model.

I don't think this is because we just aren't as smart today as we once were. Quite the opposite: there are far more people on the planet, so there are almost certainly a lot more "Einsteins" trying to find a new paradigm, and they've simply failed over the decades because it's seemingly increasingly unlikely that there is something to find.


> Empirically it seems that paradigm shifts are driven more by deaths and retirements than by improved fit to the data.

Indeed, Kuhn's own work acknowledged this.


> It simply means that we can mostly expect refining rather than radical discovery

The practical issue is whether there will be enough funding for just "refining", as opposed to "paradigm shifts", which I understand as new and "exciting" discoveries. I'm not a scientist, of course; this is just my layman's understanding.


My hot take is that mathematical and scientific 'soundness' is ultimately more of an aesthetic preference than an objective quality of reality. Good science makes sense to humans, and 'what makes sense' is ultimately what fits satisfyingly in your brain. There's nothing inherently wrong with an enormous epicycle model of reality from the perspective of the God of Math; so long as your formal system is consistent and expressive enough to represent everything, then meh, it's a model. But the model that humans want to elevate to canonical status has far stricter requirements, and ultimately it's the one which the majority of sufficiently credentialed tastemakers decide is 'best'. Parsimony works well in physics, where you have closed-form expressions for all your stuff, but the biology cases are so much messier because it turns out that sometimes reality isn't parsimonious. All this to say that good science is a matter of taste, and while AI can gist the broad strokes of taste, I've yet to see it take on the role of genuine tastemaker.
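To make the epicycle point concrete: "enough epicycles" is literally a complex Fourier series, and an FFT will happily fit one to any closed orbit you sample. A minimal sketch (the orbit here is made up purely for illustration):

    import numpy as np

    N = 256
    t = np.linspace(0, 2 * np.pi, N, endpoint=False)

    # A deliberately non-circular closed "orbit" in the complex plane.
    orbit = np.cos(t) + 0.3 * np.cos(3 * t) + 1j * 0.6 * np.sin(t)

    # Each FFT coefficient is one epicycle: a complex radius/phase
    # rotating at an integer speed.
    coeffs = np.fft.fft(orbit) / N
    freqs = np.fft.fftfreq(N, d=1 / N)  # integer cycle speeds

    # Stack the epicycles back up and check the fit.
    recon = sum(c * np.exp(1j * f * t) for c, f in zip(coeffs, freqs))
    print(np.abs(orbit - recon).max())  # ~1e-15: an ugly model, but a model

256 epicycles is a hideous theory of planetary motion, but the God of Math doesn't object; only human taste does.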


If biology, or some other subject area, is inherently, irredeemably hard to explain, and always will be, then I don't care about it much, because it doesn't mean very much. I care about explanations, not "reality" in the sense of every arbitrary muddle of knotted nerve fibers and confused flour beetles. If all the world's messy, inexplicable things were to gang up and cause us trouble such that we have to pay them attention, we can still ultimately deal with them in the ways that matter by using clarity and the things we can explain well.

>...nothing inherently wrong with an enormous epicycle model of reality...

That would be pretty hopeless for launching satellites and the like.


What use does the God of Math have for satellites and the like?

Well maybe not much for the God of math but Newtonian mechanics is more practical for life, beyond just matters of taste.

I think you would need to work very hard to prove that the topology you are describing is well-formed enough for this analogy to make sense. For one: "cognitive difficulty" is not really a crisply defined quantity such that expressing it as a function of some input vector makes obvious sense (to me anyway). What's the cognitive difficulty of deciding what to have for dinner? What's the cognitive difficulty of making my 5-year plan? What's the cognitive difficulty of imagining a nice gift to get my wife for her birthday? There are so many things humans do which are heavily 'contingent' (in the sense of being sensitive to local culture, history, personal experience, etc.) that the idea of being able to assign everything a single, decidable scalar to represent 'difficulty' seems like an extremely tall order to me. And that's setting aside whether the ambient vector space of 'human capabilities' is even really a sensible construct (a proposition I also doubt quite heavily).

All this to say that describing what's happening as a 'rising tide' seems misleading to me. Techno-sociological development is super messy already, let's not make it more complex by pinning ourselves to inaccurate and potentially misleading analogies. The introduction of the car did not 'push humans higher onto a set of capability peaks', it implied a total reorganization of behavior and technologies (highways, commuting, and suburban sprawl); using the terms of your analogy humans built new landmasses on top of the water.


> I never quite understood why AI and LLMs are marketed the way they are, or why the powers that be behind its massive push seem so keen on selling it as a wholesale replacement for human careers

Because labor is the largest line item at almost every software company on Earth. Executives' primary KPI is their market cap, so convincing investors that your profit/expense ratio is going to 2x in 6 months, once you finally get full LLM adoption, is an excellent way to juice your performance metrics, and thereby your bonus (mutatis mutandis for various financial setups).


But then what about the fact that it's exposing so many firms to immense risk, and essentially straight-up lying to investors as well as product adopters? Has no one thought about what happens when the chickens finally come home to roost?


My read is that it's a mix of tech firms having overhired a lot during the ZIRP+COVID era, as well as executives having a pretty short horizon for risk if the potential bonus is large enough.


> executives having a pretty short horizon for risk if the potential bonus is large enough.

This single line succinctly explains what is probably responsible for most of the economic dysfunction of the past 20 years.


There's an easy fix: you start hiring again. You probably don't even have to explain it; what happened just before was that you improved corporate finances by lowering your labour expenses, i.e. you're clearly growing and hence need more people.


That's exactly the point of the essay, though. You're implicitly modeling labor and collaboration as linear and parallelizable, but reality is messier than that:

> The most important thing to know about labor substitution...is this: labor substitution is about comparative advantage, not absolute advantage. The question isn’t whether AI can do specific tasks that humans do. It’s whether the aggregate output of humans working with AI is inferior to what AI can produce alone: in other words, whether there is any way that the addition of a human to the production process can increase or improve the output of that process... AI can have an absolute advantage in every single task, but it would still make economic sense to combine AI with humans if the aggregate output is greater: that is to say, if humans have a comparative advantage in any step of the production process.
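To put toy numbers on Ricardo's point (all figures made up): suppose the AI is strictly better at both of two tasks. Pairing it with a human still beats the AI working alone, because specialization frees the AI's hours for the task where its edge is largest:

    AI_RATE    = {"build": 10.0, "review": 8.0}  # units/hour, hypothetical
    HUMAN_RATE = {"build": 2.0,  "review": 4.0}  # worse at both tasks

    HOURS = 8  # one working day each

    # The AI alone must split its day across both tasks:
    ai_alone = {task: rate * HOURS / 2 for task, rate in AI_RATE.items()}

    # The AI specializes in 'build' (a 5x edge) while the human, though
    # slower at everything, takes 'review' (where the AI's edge is only 2x):
    combined = {"build": AI_RATE["build"] * HOURS,
                "review": HUMAN_RATE["review"] * HOURS}

    print(ai_alone)   # {'build': 40.0, 'review': 32.0}
    print(combined)   # {'build': 80.0, 'review': 32.0} -- strictly more output

The human adds value despite having no absolute advantage anywhere; that's the comparative-advantage point the essay is making.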


WOW, I've spent years thinking that I suck at typing on phone screens; I never even considered that the keyboard software might just be shitty...

