Hacker News | derangedHorse's comments

In a good autocracy, as in a good democracy, people trust that the system will push out corruption as the right people become privy to it. In a bad autocracy, the people had no power to make the decision and therefore can't even hold each other liable. In a bad democracy, people blame their fellow citizens. It all boils down to who holds the power, because that tells people whom to blame and whom to trust less when things go south.

> But your comment very much reads like you think "Iran strikes Israel on March 10" at 25% odds is what in fact caused Iran to strike Israel on March 10

I don't know how you extrapolated that from the parent's comment. It literally said nothing about the cause and effect of this particular event.

Knowing the odds in a prediction market IS a big part of the problem raised in the linked article, though (as are the bets themselves). Knowing how much can be made from being right sets an upper bound on what a financially rational malicious actor will spend trying to change the outcome.
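As a rough sketch of that upper bound (hypothetical numbers, and assuming a simple binary market where each YES share costs the quoted price and pays out 1 on resolution):

```python
def max_rational_spend(stake: float, price: float) -> float:
    """Profit from a winning YES bet: each share costs `price`
    and pays out 1 if YES resolves. A purely financially motivated
    actor would spend at most this much trying to force the outcome."""
    shares = stake / price
    payout = shares * 1.0
    return payout - stake

# e.g. $10,000 staked at 25% odds nets $30,000 if YES resolves
print(max_rational_spend(10_000, 0.25))
```

So the cheaper the odds when the actor buys in, the more they can rationally spend on making the event happen.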


Their comment proposed something that would "be the answer here".

What does "here" mean? It's logical to expect "here" to refer to a scenario that includes cases like the one in the article. If it's some scenario that excludes cases like the one in the article, then it's not actually relevant to the discussion.

(Tangents are OK. It's just confusing if they're introduced with phrasing that makes them sound like they're not tangents.)


"here" in that comment is not referring to any specific scenario. It is referring to the problem discussed in the sentence immediately following it, that public prediction markets can shape the outcome of the events they are predicting.

>how you extrapolated that

I wasn't extrapolating - that was the literal meaning of the words. The context was that someone commented "shouldn't be called prediction markets, they should be called outcome-shaping markets" in direct reply to "Polymarket gamblers threaten to kill me over… [the prediction market 'Iran strikes Israel on March 10']". I read that as: Polymarket gamblers outcome-shaped Iran striking Israel, which was at 25% odds when they struck. I don't think the commenter actually meant that literally, which is why I asked them to clarify. I'm just doing my best here.


And what do you think KYC would help with? The threats were made on WhatsApp, not Polymarket.

It would make more sense to campaign for better background checks on WhatsApp. A case can be made that a chat system with discoverable identities should have better safeguards. If the incentive to make a threat is financial gain rather than a pure desire to kill, restricting the means by which a threat can even be made (or identifying the participants) would help silence the noise and give actionable leads to law enforcement.

I’m generally opposed to KYC and similar measures, but if a platform is already collecting massive amounts of user data, it should at least use that data to help protect the people who become vulnerable because of it.


Polymarket is a set of blockchain smart contracts. It's banned in most countries but you can still use it by interacting with the blockchain directly.

WhatsApp would be more direct, but if they're uncooperative then knowing who is betting might provide some clues.

Reminiscent of Jim Bell and his idea for an assassination market: https://en.wikipedia.org/wiki/Jim_Bell

Except this is even crazier, because the bets aren't on people dying; they're on a reportable event. Polymarket's silence on whether the outcome was determined by this unwilling participant's writing makes them complicit. When there's a single source of truth, that source becomes a target for vested parties who don't benefit from the security the platform and its employees get from hosting the bet.


> Measurements, metrics and surveillance kill creative work

No, not really. Broadly, it's not "measurements, metrics and surveillance" that kill creativity, it's the inability to set reasonable thresholds for failure. If the threshold is too low, one might never assemble the critical mass of resources needed to achieve the task. If it's set too high, people will milk resources even when they have no creativity left to give to an unsolved problem.


I initially agreed with a lot of the sentiment that asks "why," but have reframed my opinion. Instead of seeing this as a way to run programs via inference, I'm now seeing this as a way to bootstrap training. Think about the task of classification. If I have an expert system that classifies correctly 80% of the time, now I can embed it into a model and train the model to try to raise the success rate. The lower we can make the cost of training on various tasks, the better it levels the playing field of who can compete in the AI landscape.
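A toy sketch of that bootstrapping idea (everything here is invented for illustration: a 1-D "expert" that's right 80% of the time labels a corpus, and a tiny logistic model is then fit to those noisy labels):

```python
import math
import random

random.seed(0)

# Ground truth: label is 1 iff x > 0. The "expert system" is only 80% accurate.
def expert(x):
    true = 1 if x > 0 else 0
    return true if random.random() < 0.8 else 1 - true

# Step 1: label a corpus with the imperfect expert.
data = [random.uniform(-1, 1) for _ in range(2000)]
labels = [expert(x) for x in data]

# Step 2: fit a 1-D logistic model to the expert's noisy labels (plain SGD).
w, b, lr = 0.0, 0.0, 0.1
for _ in range(50):
    for x, y in zip(data, labels):
        p = 1 / (1 + math.exp(-(w * x + b)))
        w += lr * (y - p) * x
        b += lr * (y - p)

# Because the expert's errors are random, the model averages them out and can
# recover the underlying rule, beating the expert's 80% on ground truth.
test = [random.uniform(-1, 1) for _ in range(1000)]
acc = sum(((w * x + b) > 0) == (x > 0) for x in test) / 1000
print(acc)
```

The point being that an imperfect expert system is still a cheap source of training signal.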


The approach here is very bad for training though, because unlike softmax attention, average-hard attention is not differentiable with respect to the keys and queries, and if you try to fix that e.g. with straight-through estimation, the backward pass cannot be sped up in the same way as the forward pass.
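A small numpy sketch of that non-differentiability (toy shapes, not the paper's actual code): the average-hard attention output is piecewise constant in the query, so a small perturbation usually changes nothing, while softmax attention responds smoothly:

```python
import numpy as np

def softmax_attn(q, K, V):
    # standard soft attention: smooth in q, so gradients flow to keys/queries
    w = np.exp(K @ q)
    w /= w.sum()
    return w @ V

def hard_attn(q, K, V):
    # average-hard attention: average the values at the max-score keys
    s = K @ q
    mask = s == s.max()
    return V[mask].mean(axis=0)

rng = np.random.default_rng(0)
K, V = rng.normal(size=(4, 3)), rng.normal(size=(4, 2))
q = rng.normal(size=3)
eps = 1e-4

# Finite-difference sensitivity of each output to a tiny query perturbation:
soft_delta = np.abs(softmax_attn(q + eps, K, V) - softmax_attn(q, K, V)).max()
hard_delta = np.abs(hard_attn(q + eps, K, V) - hard_attn(q, K, V)).max()
print(soft_delta, hard_delta)
```

Here `soft_delta` is nonzero (smooth dependence) while `hard_delta` is exactly zero: the hard output is constant until the argmax flips, which is why the gradient w.r.t. keys and queries is zero almost everywhere.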


Training is ruled out (see peer comment), but you may find this fascinating; it rhymes somewhat: https://arxiv.org/abs/2603.10055


I'm sure they wouldn't mind marking it as public domain. MIT is just the go-to license for things like this, since it requires anyone reusing substantial parts of the code to keep the notice saying it came from an MIT-licensed repo.


Is this perspective implying that the maintainer might be legally culpable because he, the *human*, was trained on the codebase?


Well I'm implying that someone who's been reading a codebase for 10+ years is the worst person to claim an "independent reimplementation".


The context window is quite literally not a transformation of tokens or a "jumbling of bytes," it's the exact tokens themselves. The context actually needs to get passed in on every request but it's abstracted from most LLM users by the chat interface.
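A minimal sketch of what a chat front end does under the hood (`call_model` here is a made-up stand-in for any stateless completion API, not a real client library):

```python
def call_model(messages):
    # placeholder "model": just reports how much context it received
    return f"(model saw {len(messages)} messages)"

messages = []  # the context window, kept client-side as the literal tokens/text

def chat(user_text):
    messages.append({"role": "user", "content": user_text})
    reply = call_model(messages)  # the ENTIRE history is re-sent on every call
    messages.append({"role": "assistant", "content": reply})
    return reply

chat("hello")
chat("what did I just say?")  # "works" only because turn 1 is re-sent verbatim
```

The model itself holds no state between calls; the chat interface just hides the re-sending.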


> Blanchard's own claim—that he worked only from the test suite and API without reading the source—is, paradoxically, an argument for protecting that test suite and API specification under copyleft terms.

Ridiculous. I don't want specifications for proprietary APIs to be protected, and I don't want the free ones to be either. The software community seemed pretty certain as a whole that this would be very bad for competition [1].

Morally, I don't think there's anything wrong with re-implementing a technology with the same API as another, or running a test suite from a GPL licensed codebase. The code wasn't stolen, it was capitalized on. Like a business using a GPL code editor to write a new one.

> This is not a restriction on sharing. It is a condition placed on sharing

Also this doesn't make any logical sense. A condition on sharing cannot exist without corresponding restrictions.

[1] https://www.reddit.com/r/Android/comments/mklieg/supreme_cou...

