Hacker News | Lerc's comments

How can you hope for anything better if you consider it an us-versus-them situation? When they say "We don't want to increase inequality" and the response is "We don't believe you", where do you go from there?

It seems like a lot of people want a revolution so that they can rotate who will be able to take advantage of the vulnerable.

What are the suggestions for something better? I don't see a lot.

I'd like to see more suggestions of how things could work.

For example:

The Government could legislate that any increase in profits attributable to the use of AI is taxed at 75%. It's still an advantage for a company to do it, but most of the gains go to the people. Most often, aggressive taxation like this is criticised on the basis that it will stifle growth, but this is an area where pretty much everyone says things are moving too quickly, so that's just yet another positive effect.
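A minimal sketch of the arithmetic behind that proposal. The 75% rate comes from the suggestion above; the profit figures and function name are invented for illustration:

```python
# Illustrative only: a hypothetical 75% tax on the AI-attributable
# increase in profits. All figures are made up.

AI_TAX_RATE = 0.75

def post_tax_gain(baseline_profit: float, profit_with_ai: float) -> float:
    """Company's post-tax gain from adopting AI under the proposed scheme."""
    ai_gain = max(profit_with_ai - baseline_profit, 0.0)  # only the increase is taxed
    return ai_gain * (1.0 - AI_TAX_RATE)

# A company whose profit rises from 100 to 180 by adopting AI still
# keeps 20 of the 80 gain, so adoption remains worthwhile, while 60
# goes to the public purse.
print(post_tax_gain(100.0, 180.0))  # 20.0
```

So the incentive to adopt AI survives; only the split of the upside changes.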


> When they say "We don't want to increase inequality" and the response is "We don't believe you". Where do you go from there?

The response is "we don't believe you" because their actions show that they are hellbent on accelerating inequality using AI and they have offered absolutely no concrete plan or halfway convincing explanation of how, if their own predictions of AI's future capabilities are correct, we're supposed to go from here and now to a future that isn't extremely dark for the vast majority of humans on Earth (to the extent that said humans continue to exist).

The work they have done in this direction so far is not serious, so it's not taken seriously. They obviously care much more about enriching themselves than slowing or reversing current trends.

If they want to be taken seriously, maybe they should start acting like they're serious about anything besides their own wealth and power. And I do mean acting: they need to show us through their actions that they are serious.


We can look at their actions, in particular their efforts to influence public policy.

Seriously. They can say they want to share their gains all they want, but I don't see them spending any lobbying money on things like universal income (and if Altman can afford to lobby for age verification laws he can certainly afford to lobby for things that actually benefit society). The reality is they don't lobby for anything that would take wealth away from them, and any redistribution of wealth (such as a 75% tax rate) would by definition take wealth away from them.

You can, but then what? Do you judge what they say as if their perspective is the same as yours, and then conclude from that context that what they suggest could only come from an evil person? That seems to be what a lot of people do. What if they actually think what they are suggesting is the best thing for the world? How can you tell what is in their minds?

Alternately you could criticise their arguments instead of the people, and suggest an alternative.

I'm also not entirely certain that influencing public policy is something that is inherently bad. I know if I were deaf, I would like to have some influence on public policy about deafness issues.


Judge people by their actions, not what they say.

You are arguing the opposite, that we should judge by what they say and not what they do?


The problem is that people have a million stories to explain the observed actions, most of those stories are bullshit, and people repeating them know fuck all about the decision-space in which these actions were chosen and taken.

Hm. I guess we can't possibly judge the guy who threw the molotov cocktail. He could have been clearing a wasp's nest.

This is an accidentally good example: we don't know what motivated him, and your ridiculous reason is unsound because it would also be a bad thing to do if he were clearing a wasp's nest on someone else's property in the middle of the night.

I suspect that they are not a bad person but someone radicalised by the media they consume.

Firebombing someone's house is a bad thing to do. It doesn't mean they are necessarily a bad person. Anger and confusion can make good people do bad things.


I don't care if Altman is secretly a good person. I care very deeply that he is taking actions to harm the world in grievous ways and is not doing any visible thing to mitigate the extreme damage he will do.

"Altman is secretly a good guy" doesn't pay people's mortgages.


Judge their actions, consider what they say as given in good faith and praise or criticize.

To judge the people is to pretend you know why they did or said something.


The idea that we cannot possibly use people's actions to judge them is ridiculous. Musk thinks that the world would be a better place if the races were separated and if all charitable giving was ended. I think that's monstrous.

Why is OpenAI not a nonprofit anymore?


The billionaires could start to earn trust by lobbying for laws and programs that help the poor and displaced. Put money in to retraining programs to help people who lose their jobs. So far they seem to be doing the opposite, CEOs are publicly declaring ‘fuck you, got mine’ and leaving it at that.

Nick Hanauer has lobbied for higher minimum wages.

Michael Bloomberg has lobbied for healthcare.

Pierre Omidyar has spent about a billion on economic advancement non-profits

Gates Foundation - Bunch of stuff.

Warren Buffett - Too much to count

George Soros - For all the antisemitism, the kernel of truth in the lie is that he spends a lot of money trying to make the world better.

Chuck Feeney gave away $8B; I'm sure some of it went to lobbying for better policies.

A large number advocate for a Universal Basic Income.

More advocate for things that they clearly think are good for the world, even if you, personally, do not.

Jack Dorsey, Reid Hoffman, hell even Elon Musk (he may be wrong about everything, but he's openly advocating for what he believes is good)

Sam Altman has done WorldCoin and is heavily invested in Nuclear Fusion. You can criticise the effectiveness or even the desirability of the projects, but they are definitely efforts that if worked as claimed would be beneficial.

Many billionaires spend money on non-profits to push for change, often they do not put their name on it because it makes them a target for attack, or simply that by openly advocating for something the lack of trust causes people to assume whatever they suggest has the opposite intention.

I'm not arguing that they are doing the right thing. I'm arguing that for the most part they are advocating for and investing in what they believe to be the right thing. Why treat them as the enemy, when a dialog might lead them to common ground about what the right thing is?


>Why treat them as the enemy, when a dialog might cause them to reach common ground about what is the right thing.

People like Elon literally are the enemy. He used his wealth to literally change our government in his favor. The idea that we need to go and have polite discussions to maybe change his mind, while he gets to stomp all over us (his DOGE efforts literally resulted in people dying), is absurd. If a dialog with them was going to work it would have happened a long time ago, but the more we learn about these people the more obvious it is that they believe themselves to be smarter and better than the rest of us. They aren't going to listen to others, and pretending that they will seems like deflecting and giving up in advance. Our best hope is that people can get enough power to regulate billionaires out of existence before a revolution does it instead.


Please consider your biases. Musk could not have “changed” the government if the DNC didn’t hand it to Trump on a platter. Republicans took over because serious people had had enough with the DNC’s full-throated embrace of two things: race-based selection (with the unpopular Harris’s undemocratic coronation as the flagship example), and the relentless focus on trans ideology (to the point anyone not endorsing the fullest embrace of that idea has been declared equivalent to the worst racist). Without that, Democrats would have remained a powerful and relevant party and Musk would have gotten nothing he wanted.

>How can you hope for anything better if you consider it an us versus them situation?

Because it IS an us vs them situation.

They're awfully good at turning it into an us vs us situation, whether it's blaming our parents (boomers), blaming immigrants, blaming Muslims or (their favorite) blaming the unstoppable forward march of technological progress (e.g. AI).

The media organizations they own are constantly telling these stories because it protects them.

>The Government could legislate that any increase in profits that are attributable to the use of AI are taxed

Nothing a billionaire loves more than misdirection and a good scapegoat. This is why Bill Gates made the exact suggestion you just did.

https://finance.yahoo.com/news/bill-gates-wants-tax-robots-2...

When THEY are the problem they love a bit of misdirection, especially when the "problem" is a genie that can't be put back in its bottle.

They're terrified that we might latch on to the solutions that actually work (i.e. tax them to within an inch of their life) and drive a populist politician to power which might actually enact them.


Your argument makes it impossible to prove that the wealthy are not bad.

You interpret every signal as saying the same thing. That makes it an unfalsifiable claim.

https://en.wikipedia.org/wiki/Falsifiability


That's coz my statement wasn't intended to be scientific proof of anything; it was an explanation of the function of the propaganda that got recycled through you, and the intent behind it.

If you're prepared to do that you don't even need to run any benchmark. You can just print up the sheets with scores you like.

There is a presumption with benchmark scores that the score is only valid if the benchmark was properly applied. An AI that figures out how to reward-hack produces a result outside the bounds of measurement, but one that is still interesting and necessitates a new benchmark.

Just saying 'Done it!' is not reward hacking. It is just a lie. Most data is analysed under the presumption that it is not a lie. If it turns out to be a lie the analysis can be discarded. Showing something is a lie has value. Showing that lying exists (which appears to be the level this publication is at) is uninformative. All measurements may be wrong; this comes as news to no-one.


I think the point of the paper is to prod benchmark authors to at least try to make them a little more secure and hard to hack, especially as AI is getting smart enough to unintentionally hack the evaluation environments itself, when that is not the authors' intent.

The Mac classic was about as pure as you could get from an architectural point of view.

A 1 bit framebuffer and a CPU gets you most of what the machine can do.

Most of the quirk abuse of 8-bit machines came from features that were provided with limitations. Sprites, but only 8 of them, colours but only 2 in any 8x8 cell. Multicolour but only in one of two palettes and you'll hate both.

Almost all of the hacks were to get around the limitations of the features.

I don't know if the decision Apple made was specifically with future machines in mind. It certainly would have been a headache to make new machines five generations down the track if the first one had had player/missile graphics.


They absolutely knew that they were making a platform that needed as much hardware independence as possible. The 512k was already in development before they even finished the original, and they had the experience of the Apple II, which for all of Wozniak’s legendary work, was a dead-end because it relied too heavily on hardware hacks.

A Mac 512K of sorts was already built before the Macintosh introduction at the 1984 shareholders meeting — the demo wouldn't run in 128K of RAM.

The difference is that the quantity of what is being supplied is a factor with supply of oil/gold/grain/etc.

For mining it is just necessary that it happens.

The amount of work in mining is way higher than is required to prevent another party from being able to overwhelm the blockchain. It is that high because the subsidy of the mining reward means that, if Bitcoin has a high value, the reward is worth a lot.

This is factored in with the halving of the reward. Either the price will increase exponentially or the mining reward will drop, causing mining to reduce to those who can be profitable from fees alone. That rewards those who can mine most efficiently; it becomes a supply-and-demand calculation in a market where there are relatively low barriers for competitors.


> The amount of work in mining is way higher than is required to prevent another party from being able to overwhelm the Blockchain.

Isn’t that exactly the point? Bitcoin incentivized wasting resources. It is, according to your own comment, unnecessary to use so much computing to keep bitcoin going. But it’s being used.


The level needed to be secure is much lower than that.

If Bitcoin were worth much less the network would still be secure even though the mining reward would only be enough to pay for a fraction of the current processing.

If Bitcoin does not double in value every four years, the mining reward will reduce in real world terms.

Claiming the mining resources required will be at the current level or higher perpetually requires also making the claim that you think that the value will increase exponentially forever.

Nothing increases exponentially forever.
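The halving argument above can be sketched numerically. The price figures here are invented; only the halving of the subsidy (from an initial 50 BTC per block) comes from how Bitcoin actually works:

```python
# Sketch of the halving argument: the block subsidy halves roughly
# every four years, so the fiat value of mining revenue stays flat
# only if the price doubles each time. Prices are illustrative.

def subsidy_btc(halvings: int, initial: float = 50.0) -> float:
    """Block subsidy in BTC after a given number of halvings."""
    return initial / (2 ** halvings)

def subsidy_value(halvings: int, price: float) -> float:
    """Fiat value of one block's subsidy at a given price."""
    return subsidy_btc(halvings) * price

# If the price stays constant, the reward's value decays exponentially...
flat = [subsidy_value(h, price=100_000) for h in range(4)]
# ...but if the price doubles every halving, it stays exactly constant.
doubling = [subsidy_value(h, price=100_000 * 2 ** h) for h in range(4)]
print(flat)      # [5000000.0, 2500000.0, 1250000.0, 625000.0]
print(doubling)  # [5000000.0, 5000000.0, 5000000.0, 5000000.0]
```

Hence the claim: perpetual mining expenditure at today's level quietly assumes the doubling branch, i.e. exponential price growth forever.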


A COMPUTER CAN NEVER BE HELD ACCOUNTABLE

THEREFORE A COMPUTER MUST NEVER MAKE A MANAGEMENT DECISION

—IBM internal training, 1979

It took me a while to realise that the premise is saying the same thing as the reason why we have so many "Computer says no" experiences today.

The conclusion only follows if you want someone to be accountable.

If you want to avoid being accountable, computers should make all management decisions. This has nothing to do with AI other than it provides another mechanism to do that.

People saying "I'd love to help you but the computer won't let me do that" has been happening for years now.

Websites develop abusive patterns because A/B testing lets a process decide based on the goal you want. It doesn't measure the repercussions, so you have made no decision to allow them.
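A toy sketch of that point: the selection rule only ever sees the metric it was told to optimise, so side effects never enter the decision. The variant names and numbers are invented:

```python
# Toy illustration: an A/B winner-picker that sees only the chosen
# goal metric. Side effects (here, a hypothetical annoyance score)
# never enter the decision. All data is invented.

variants = {
    "honest_checkout": {"conversion": 0.031, "annoyance": 0.1},
    "nagging_popups":  {"conversion": 0.034, "annoyance": 0.9},
}

def pick_winner(data: dict, goal: str) -> str:
    """Return the variant with the best score on the goal metric alone."""
    return max(data, key=lambda v: data[v][goal])

# The process "decides" to ship the dark pattern; nobody chose the
# annoyance, because annoyance was never measured.
print(pick_winner(variants, "conversion"))  # nagging_popups
```

No manager ever signed off on the popups; the metric did.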

Management read it as

A COMPUTER CAN NEVER BE HELD ACCOUNTABLE

THEREFORE THERE CAN BE NO LIABILITY IF COMPUTERS MAKE ALL MANAGEMENT DECISIONS


You're misinterpreting the implication. A better phrasing might be:

A computer can never be held accountable. Therefore, since all management decisions must have accountability, a computer must never make them.


Since when are (human) managers accountable?

You've never seen a manager get fired or decide to "spend more time with their family" ?

No, not really. I only saw them promoted or quitting for an even better management job.


I liked this as much as:

Selective Study Confirms Already Held Prejudice.

It makes a good companion to:

Outlier Study Upends Conventional Wisdom.


>AIs are not human and therefore their output is a human authored contribution and only human authored things are covered by copyright.

That is a non sequitur. Also, I'm not sure if copyright applies to humans, or persons (not that I have encountered particularly creative corporations, but Taranaki Maunga has been known for large scale decorative works)


Copyright applies to legal persons, that's why corporations can have copyright at all.

A "large scale decorative work" is the strangest euphemism for a dormant volcano I've ever heard.

Well obviously it's not doing any decorating right at the moment.

That doesn't say much other than the rules are over in section 15.

To be protected they not only have to publish their security protocol, but adhere to it.

That's not just 'providing a PDF'

That particular section is entirely appropriate. A company can't do everything necessary to prevent every bad thing. They should do everything that they reasonably can. Someone else should decide what is reasonable.

The regulators are saying: we've decided what you have to do to be considered to have done all you could to be safe. Follow those rules, tell us how you've followed those rules, and if something bad happens and we find out that you didn't follow the rules you said you did, we're going to nail you to the wall.

This hinges on Section 15, which I think is inadequate because it does not meet the criterion of someone else deciding what is reasonable. Publishing their safety plans and adhering to them should be enough to grant protection from liability for harm done directly to users, since the publication gives individuals the ability to make an informed decision. Provided the company has done the safety work it said it would, a user deciding that is sufficient for them and choosing to use it should be allowable.

That should not extend to harm done to others. They don't get to choose. Consequently the standard required to be protected against claims of negligence has to be decided by a third party (experts hired by regulators ideally).

Blanket liability and blanket indemnity both go too far.

If someone makes a yo-yo that blows someone up because they made it out of explosives, then they should be held liable.

If someone makes a yo-yo that blows up a city because it contained particles unknown and undetectable to any science we have, they shouldn't be to blame.

The key is that they have to have done what we think is required. Legislators get to decide what it is that is required. If a company does all of that, then they shouldn't be held responsible, because they have done all they were asked to do.

The problem is not that a law provides indemnity, the problem is that it sets the standard to qualify too low.


You may like https://c50.fingswotidun.com/

It's what I doodle with to generate images using a stack based program per pixel.

Every character is a stack operation, you have 50 characters to make something special.
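A minimal sketch of the idea (not the linked site's actual instruction set, whose characters and semantics are its own): a tiny stack machine runs once per pixel with x and y as inputs, and each character of the program is one operation.

```python
# Hypothetical sketch of a per-pixel stack language: each character
# is one stack operation; the program runs once per pixel.

def run(program: str, x: float, y: float) -> float:
    stack = []
    for ch in program:
        if ch == "x":
            stack.append(x)                      # push pixel x coordinate
        elif ch == "y":
            stack.append(y)                      # push pixel y coordinate
        elif ch.isdigit():
            stack.append(float(ch))              # push a literal digit
        elif ch == "+":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif ch == "*":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack[-1] if stack else 0.0           # top of stack = pixel value

# "xx*yy*+" computes x*x + y*y per pixel: mapped to a colour ramp,
# that gives concentric circles around the origin.
print(run("xx*yy*+", 3.0, 4.0))  # 25.0
```

The appeal of the 50-character limit is that the whole "shader" fits in a chat message, yet composition of a handful of ops still yields surprisingly rich images.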


That's pretty neat; some of the outputs are beautiful!

Mine is also pixel coloring at the lowest level. I have a shading kernel on the GPU doing the low-level work, mainly applying colors recursively like a fractal. I got sick of writing shader code, so I made a high-level language supporting math operations in concise expressions that are compiled to shader code for the GPU. The main thing is it supports functions. That lets me reuse code and build up abstractions. E.g. once I get the "ring" pattern settled, it's defined as a function and I can use it in other places, combine it with other functions, and have it called by other functions.

One of these days when I get some time, I'll formalize it and publish it.


What was the reasoning behind that? Were there specific features of that inductor that led them to choose it, or did they choose it and then find that some of their design relied on atypical generic inductor behaviour?

The problem with going off-datasheet is you don't know what might change. There's usually a good chance that you are not depending on the difference, but it's the not knowing that gets to you.


They are suggesting bypassing the RP2350's internal switching regulator (which only needs an external coil and some caps) and replacing it with an external linear regulator (which is actually supported by the datasheet)

Switching regulators have much lower power draw (which is important when running off batteries) and generate less heat, which sometimes leads to a more compact footprint (though I'm not sure the RP2350's core uses enough power for that benefit to kick in)

The power/heat savings don't really matter for this use case, and linear regulators have the advantage of producing more stable power, though you are hardwiring it to 1.2V (a small overvolt) rather than using the ability of the internal regulator to adjust its voltage on the fly (adjustable from 0.55V to 3.30V)


Exactly this. Also having the regulator off chip reduces the heat a teeny bit.

Sounds more like a cool tip than a hot tip to me in that case ;)
