Hacker News | rpdillon's comments

Been running lemonade for some time on my Strix Halo box. It dispatches out to other backends that it includes, like diffusion and llama.cpp. I actually don't like their combined server; what I use instead is their llama.cpp build for ROCm.

https://github.com/lemonade-sdk/llamacpp-rocm

But I'm not doing anything with images or audio. I get about 50 tokens a second with GPT OSS 120B. As others have pointed out, the NPU is used for low-powered, small models that are "always on", so it's not a huge win for the standard chatbot use case.
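As a rough sanity check on that 50 tok/s figure (all numbers here are assumptions, not from the comment): gpt-oss-120b is a MoE model activating roughly 5.1B parameters per token, stored at about 4.25 bits/param in MXFP4, and Strix Halo's LPDDR5X provides around 256 GB/s of memory bandwidth. Since decode is memory-bound, that bandwidth puts a ceiling on token rate:

```python
# Back-of-envelope decode ceiling for gpt-oss-120b on Strix Halo.
# Assumed (not measured): ~5.1e9 active params/token (MoE),
# ~4.25 bits/param (MXFP4), ~256 GB/s LPDDR5X bandwidth.
active_params = 5.1e9
bits_per_param = 4.25
bytes_per_token = active_params * bits_per_param / 8   # weights streamed per token
bandwidth_bytes_s = 256e9
ceiling_tok_s = bandwidth_bytes_s / bytes_per_token    # decode is memory-bound
print(f"theoretical decode ceiling: {ceiling_tok_s:.0f} tok/s")
```

Landing at about half the theoretical roofline, as 50 tok/s does here, is typical for a real system once activations, KV-cache traffic, and scheduling overhead are counted.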


Even small NPUs can offload some compute from prefill which can be quite expensive with longer contexts. It's less clear whether they can help directly during decode; that depends on whether they can access memory with good throughput and do dequant+compute internally, like GPUs can. Apple Neural Engine only does INT8 or FP16 MADD ops, so that mostly doesn't help.
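The prefill/decode asymmetry described above can be sketched with hypothetical numbers (an accelerator with 50 TFLOP/s of low-precision compute and 256 GB/s of memory bandwidth; a model with 5.1B active parameters at 4-bit weights; an 8192-token prompt):

```python
# Sketch: prefill is compute-bound, decode is bandwidth-bound.
# All hardware/model numbers below are hypothetical illustrations.
active_params = 5.1e9
prompt_tokens = 8192

# Prefill: every prompt token multiplies against the weights, so FLOPs
# scale with prompt length -- extra compute (e.g. an NPU) directly helps.
prefill_flops = 2 * active_params * prompt_tokens      # 1 MAC = 2 FLOPs
prefill_seconds = prefill_flops / 50e12

# Decode: one token at a time, so each step must stream the weights from
# memory -- bandwidth (and on-chip dequant) sets the floor, not FLOPs.
weight_bytes = active_params * 4 / 8
decode_seconds_per_token = weight_bytes / 256e9

print(f"prefill: {prefill_seconds:.2f} s, "
      f"decode: {decode_seconds_per_token * 1000:.1f} ms/token")
```

This is why an NPU that can only do fast INT8/FP16 matmuls still shortens time-to-first-token, while decode speed hinges on whether it can read (and dequantize) weights at full memory throughput.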

That's primarily a function of adoption time, though, not the utility of the technology. In 20 years, people won't be able to say so easily that they could turn off AI with no impact.

That... what... no. The question was whether there are any technologies comparable to electricity, of which I have put forth a number of examples. I also offered my opinion that it is too early to judge whether AI will be as significant or not.

There are loads of technologies that, despite being decades old, do not qualify. So, no, it’s not “primarily a function of time”. It absolutely is about the utility. We can only be in a position to judge utility when sufficient time has passed, and AI ain’t had enough time yet to prove its utility. Given enough time, it might prove as useful as electricity, or it might just sit alongside computer operating systems - never quite making it onto anyone’s “this changed the world” list, even if it has as much utility as an OS.


Sure. I'm just more optimistic than you are about the enduring value of AI. Time will tell.

That article's premise is that the Android security model is something that I want. It really isn't.

The F-Droid model of having multiple repositories in one app is absolutely perfect because it gives me control (rather than the operating system) over what repositories I decide to add. There is no scenario in which I wish Android to question me on whether I want to install an app from a particular F-Droid repository.


My personal take is that LLMs are so transformative that they are likely not going to qualify under derivative works and therefore GPL wouldn't hold sway. There's already some evidence that courts will consider training on copyrighted material fair use, so long as it is otherwise obtained legally, which would be the case with software licensed under GPL.

I realize this is an unpopular opinion on HN, but I believe it is for the best because it's a weaker interpretation of copyright law, which is overall a good thing in my view.


You can train models locally now and use open source ones and there's a robust community of people training, retraining, and generally pulling data from anywhere. And then new models get trained on old models. The models in use now are already several generations deep even further trained on code freely given by the entire industry. It's like complaining about being 1/100000th of a soup with no real proof you're even in it. Can you provide proof that a model used your code? It's like a remix of a remix of a remix.

> It's like complaining about being 1/100000th of a soup with no real proof you're even in it.

I love a good analogy, especially one that takes a complex situation in which esoteric, unusual conditions are distilled and related back to common experiences held by the reader, such that all can understand.

Next time I'm a small part of a soup I'll think of this.


The fact that GitHub Copilot had an option to block generated code that matched public examples, and the fact that LLMs can regenerate Harry Potter books verbatim, means the training data is definitely "stored in a digital system of retrieval". But good luck actually having common sense win against a trillionaire incentive group stealing from everyone.

I wonder if that would extend to training LLMs on decompiled firmware. A new clean room method unlocked?

Yesterday morning, CNN:

> In a remarkable 24 hours in Washington, House Republicans snubbed a bipartisan funding deal cut by their own Senate GOP counterparts and instead approved an entirely different plan — prolonging the Department of Homeland Security shutdown.

> Then, they left town.

It's obvious what's happening.

https://lite.cnn.com/2026/03/27/politics/dhs-shutdown-fundin...


Not just bipartisan. That bill was unanimously passed in the Senate.

Negative. It was passed by unanimous consent; there were only maybe five people there. I think there's a big difference between "passed," which gives the connotation that people actually voted on it, and "unanimous consent" of those present.

It was also at 2 o'clock in the morning.


You make this sound like it was a Democrat plot. It was not.

Thune, the Republican Senate majority leader, was the one who put up the unanimous consent motion.

There were more than just 5 people there. Though it was late at night.

You can't push something through unanimous consent if there's not a quorum. That requires at least 51% of each party to be present.

Now, it's possible they waited until some of the big objectors to the bill fell asleep or left. But, that doesn't really change the fact that Thune pushed this through.


I made no claim as to party; it's just how it was done. If anything, it was the Republicans, who are in the majority. I wanted to clarify that it was by unanimous consent, not a recorded vote.

Fair enough. But I do still have to push back on the notion that it was just 5 people there. If that were the case, you could have expected one of the more lucid members to have done a quorum call.

Fair point. My understanding is that the Senate "assumes" a quorum unless someone suggests there is not. Since it was AFAIK around 2am... my guess is not and they all just wanted to get the heck out of there. Since no recorded vote we may never know. So I stand corrected on the number.

Your understanding is correct. The quorum call has a priority and can be done by any member.

The session has to start with a quorum and it's assumed that there is still a quorum since nobody has done a quorum call.

I have to assume that if someone actually objected to this, they would have done a quorum call before leaving the session. That, or the few objectors simply left early, not thinking this would go to 2am. Though they could have always come back. They almost certainly would have had staffers there who'd inform them that something like this was coming up.


But what effects does it have on the legislative process? It sounds like at the very least, all the senators vaguely wanted it to be passed, but didn't want to be on the record for voting for it.

I got this email in mid-March:

> I’m reaching out to start a dialogue regarding a possible non-executive founding directorship role for a new senior care deal.

> We are doing a roll-up. Acquiring and consolidating various senior living revenue generating companies, eliminating redundant operations, improve and expand service offerings and ultimately exit via private sale or IPO.

It was so bonkers I forwarded it to my wife, who works in a research center on aging, and she was like, "how many red flags can you fit into one email?"

This stuff is depressingly real.


I think those are features, not bugs.

You need root to get around all the stuff that Google won't let you do. There's tons of examples I've encountered over the last 20 years, but the one I encountered most recently is that without root, when I plug in an external display to my phone, I can't actually make the phone display go off. So it sits there powering the external display and its own display (that I'm not using) because of permissions.

It's not clear at all that a scammer is on the phone, instructing people to click through every warning that they see while sideloading a malicious app. As I stated up thread, the majority of these scams are happening through apps in the Play Store.

To address your question, there should be a straightforward option during device setup. If you're first attaching your account to the device, you simply check a box that says this is an advanced user's phone. You can put it behind the same kind of scary pop-ups that web browsers have when they're about to serve you an HTTP page, or when the HTTPS certificate is self-signed.

It's the most obvious, straightforward, user-friendly approach, and it was never even discussed.


> the most obvious, straightforward, user-friendly approach, and it was never even discussed

Fwiw, it was "discussed" in the sense that the person we're arguing with meant upthread ("let's discuss a good solution instead of this boring repetitive outrage"), but it's not like Google listens to that, so any such discussion is pointless anyway. It is indeed the obvious solution and it comes up in each of these threads, but believers like GP can always come up with new rationalizations for why Google doesn't implement one proposal or another.


> It's not clear at all that a scammer is on the phone, instructing people to click through every warning that they see while sideloading a malicious app.

Google claims this to be a very common or majority attack vector.

"The Global Scam Report also found that scams were most often initiated by sending scam links via various messaging platforms to get users to install malicious apps and very often paired with a phone call posing to be from a valid entity."

https://security.googleblog.com/2024/02/piloting-new-ways-to...

> If you're first attaching your account to the device, you simply check a box that says this is an advanced user's phone.

I completely agree this is a perfectly valid solution, but what about those who have already set up their device? The security of the checkbox only works if you check it before someone attempts to scam you.


All they say is that the apps are malicious, though. The majority of malicious apps distributed on Android are through the Play Store. I really wish they would provide concrete details here because I just don't believe that this is all hinging on sideloading.

You don't need to sideload a specific app with malware. All you do is tell the person to go to the Google Play Store and install AnyDesk. Heck, even the reviews for that app point out that scammers often tell you to install it. Kelly Walters' review from '23 has 215,000 upvotes for warning people about this.
