
The comments there note there is no official Ubuntu MATE release for the first time since Ubuntu 15 (and before 14.04, gnome2 was an option). That's a shame, but probably most people who chose MATE (or gnome2) no longer choose Ubuntu due to the conflicting ideologies inherent in the two. MATE users generally don't like change for change's sake.

It's in the daily builds. I haven't tried it yet.

Not sure if this confirms the impression you have there... I wasn't like this until a couple of headless VPSes (on Arm8) got through the upgrades from 18.x -> 20.x -> 22.x and then crashed out on -> 24.x for a still-unknown reason. Now I'm just afraid, or I should say reluctant, to repeat that whole fiasco.

https://cdimage.ubuntu.com/ubuntu-mate/daily-live/current/


There were some issues with how the menu icon manager handled the new security policy defaults. This means the editor will break, and the displayed menu may be missing any item that didn't follow the naming convention syntax. It's a lot of packages to bring into compliance, for that one silly feature the devs had to put in before it was ready...

Maybe they fixed it since the RC release, but there were some rough edges in Feb... the kernel USB support cooked the thumb drive partition structure.

From 22.04 to 24.04 the kernel Nvidia GPU driver EOL abandonment began... In 26.04 people will discover that most EOL hardware support prior to the RTX series will be difficult to bring up.

Probably wise to wait a few weeks for the bug reports to clear out a bit. =3


I asked it to draw the step by step folding and taping guide for making a tetrahedral hot air balloon envelope (tetroon) from a rectangular sheet and it failed completely.

Ah. So it's a scalper situation where an unethical entity buys up all the supply and then resells it for a greater price.

Amazon isn't buying and reselling Trainium chips; those are their in-house developed custom chips.

Palantir is a more dangerous enemy of the USA (or any people of Earth) than, say, Iran.

There's no point in addressing their propaganda, but the idea that a draft leads to everyone from every class of life having to be involved in war equally is so obviously untrue it's a joke.


The people of Hacker News aren't typical and usually have the latest and greatest when it comes to computer hardware and the latest software, unlike everyone else on earth who doesn't care about such things, often runs old hardware and software, and so encounters, and is blocked by, Cloudflare's computational paywalls more often than a bleeding-edge tech user would imagine.

Luckily almost all modern corporate tracking is done through javascript execution + cookies. The days of parsing actual webserver logs are over for the most part. After all, it's only the browsers that execute javascript code and provide profitable personal information about the human behind the browser that matter. People with JS off are not providing sellable information and are therefore classified and treated as if they were bots.

Turning off JS by default and temp-whitelisting sites only mitigates most of this tracking.


The issue is, even with all the browser protections, as soon as you create an account anywhere or buy something and input your name/email address/shipping address, your "hashed data" immediately gets sent to Meta/Google as a conversion with "this guy bought a cat toy", and you start getting ads for cat-related stuff everywhere.

They don't even need to "track" you properly for this stuff to work and it seems there's no way to escape it.
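
For what it's worth, a rough sketch of how that "hashed data" matching typically works (the field names and payload shape below are illustrative guesses, not any specific platform's API): the merchant normalizes and SHA-256 hashes your email/phone before sending it along with the purchase event, and the ad platform hashes its own user records the same way so the digests line up.

  # Illustrative sketch only; field names are assumptions, not a real API spec.
  import hashlib, json

  def normalize_and_hash(value: str) -> str:
      # Common convention: trim, lowercase, then SHA-256 hex digest.
      return hashlib.sha256(value.strip().lower().encode("utf-8")).hexdigest()

  event = {
      "event_name": "Purchase",
      "custom_data": {"content_name": "cat toy", "value": 12.99, "currency": "USD"},
      "user_data": {
          "em": normalize_and_hash("you@example.com"),   # hashed email
          "ph": normalize_and_hash("+1 555 000 0000"),   # hashed phone
      },
  }
  print(json.dumps(event, indent=2))

No cookies or on-page javascript are needed for that match to happen; it's all server-to-server.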


I don't experience that, though I have friends who use smartphones who describe it. So I think a lot of it is via javascript. I doubt every retailer, or even a significant fraction, has their backend sending that type of data to $megacorp. But maybe I'm just lucky, or shop weird places, or it's because I use a new email address @superkuh.com for every account sign-up. Or maybe I'm just not seeing the targeted ads for my $superkuhprofile that do exist because I have almost all ads successfully blocked. Perfect is the enemy of good anyway; all mitigations help a bit. And blocking JS is a huge mitigation.

If those companies are using big SaaS companies for eCommerce and have not gone to the "Don't Track" part of their admin panel to turn off tracking, a lot of those SaaS companies will just sell off the data.

So sure, the small-time cat toy retailer on Etsy won't, but the credit card processor or shipper might.


I think part of the issue is that these retailers are also customers of Meta/Google on the side of purchasing ads, and as a merchant you're highly encouraged to send as much data on your events as you can, or your conversion tracking can be "less accurate" and your campaigns are less efficient.

So it's less about "we're sending the data to $megacorp" and more about "I want the most bang for buck on my own campaigns" when the decision is made.

Using a different email certainly helps, though!

EDIT: highly encouraged by Meta et al.! Whether this is a legitimate request to improve results or pure self-interest on the part of Meta, I don't know!


We look at 2 examples of third-party HTTP cookies and 1 example of javascript. It's both, you have to defend on a complex terrain.

It really does. But also, having to do this points out a glaring flaw in the design of the fediverse websites. They're applications and not documents. They require executing complex code from unknown third parties just to show a bit of text and some multimedia. This isn't needed at all. And it wasn't like this until mastodon v3, when they broke it.

Despite requiring Javascript execution, mastodon actually does have the post contents of a URL in the hidden meta-content HTML header on the page where it scolds you and blocks you for not executing their arbitrary code. All they'd have to do is put that same text in the HTML as actual <p> text. And it's not just mastodon instances; the other fediverse "applications" are just as silly in their intentional breaking of accessibility for no reason.
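
As a rough illustration (the URL is made up, and the exact meta tag layout may vary between instances and versions), the post text can usually be pulled out of that OpenGraph description tag with plain HTTP and no script execution at all:

  # Minimal sketch: read a Mastodon post's text from the og:description
  # meta tag without executing any JavaScript. URL and tag layout are assumptions.
  import re
  import urllib.request
  from html import unescape

  def post_text_from_meta(url):
      req = urllib.request.Request(url, headers={"User-Agent": "plain-text-reader"})
      page = urllib.request.urlopen(req, timeout=10).read().decode("utf-8", "replace")
      # Find the og:description tag, then pull its content attribute.
      tag = re.search(r'<meta\b[^>]*property=["\']og:description["\'][^>]*>', page)
      if not tag:
          return None
      content = re.search(r'content=["\']([^"\']*)["\']', tag.group(0))
      return unescape(content.group(1)) if content else None

  print(post_text_from_meta("https://mastodon.example/@someone/123456789"))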


At least Xitter has Nitter proxies after they went full Javascript - which is also great since it allows accessing content that's often behind a registration wall.

I have yet to find a social network which is actually accessible. The Google thing (circles?) was never actually usable; it was the biggest horror show of all. m.facebook.com was basically the only website that was ever really accessible. All the other players, including the "free and morally superior" alternatives, couldn't give two fucks about people with disabilities, which reflects nicely on the fact that they are actually not an alternative; they are a playground for misguided developers...

Fact is, if you are launching a social network which is not accessible from the get go, you are part of the problem. You have no moral high ground, you're just playing around and widening the digital divide, leaving people behind.


These kinds of write-ups all have an implicit premise that is unstated: they're talking about corporate AI run by corporations. They're not actually talking about the technology. Corporate AI will never be ethical or safe because corporate persons have different motivations and profit incentives driving them than human persons do. And most of the time they're quite nasty when viewed through the lens of human ethics.

It reminds me of the parable of the blind monks each feeling a different part of the elephant and arguing about its shape. They're each not wrong, but they're also only talking about a limited subset of the elephant (AI).

Cory Doctorow is much more eloquent in his explanation of this important distinction in his reverse centaur metaphor.


AMD hasn't signaled in behavior or words that they're going to actually support ROCm on $specificdevice for more than 4-5 years after release. Sometimes it's as little as the high 3.x years for shrinks like the consumer AMD RX 580. And often the ROCm support for consumer devices isn't out until a year after release, further cutting into that window.

Meanwhile nvidia just dropped CUDA/driver support for 1xxx series cards from their most recent drivers this year.

For me ROCm's mayfly lifetime is a dealbreaker.


Last year, AMD ran a GitHub poll for ROCm complaints and received more than 1,000 responses. Many were around supporting older hardware, which is today supported either by AMD or by the community, and one year on, all 1,000 complaints have been addressed, Elangovan said. AMD has a team going through GitHub complaints, but Elangovan continues to encourage developers to reach out on X where he’s always happy to listen.

Seems like they're making some effort in that direction at least. If you have specific concerns, maybe try hitting up Anush Elangovan on Twitter?


> or by the community

Hmmm


Is it really that short? This support matrix shows ROCm 7.2.1 supporting quite old generations of GPUs, going back at least five or six years. I consider longevity important, too, but if they're actively supporting stuff released in 2020 (CDNA), I can't fault them too much. With open drivers on Linux, where all the real AI work is happening, I feel like this is a better longevity story than nvidia...where you're dependent on nvidia for kernel drivers in addition to CUDA.

https://rocm.docs.amd.com/en/latest/compatibility/compatibil...


You missed the note at the top: "GPUs listed in the following table support compute workloads (no display information or graphics)". It doesn't mean that all CDNA or RDNA2 cards are supported. That table is very misleading; it's for enterprise compute cards only - AMD Instinct and AMD Radeon Pro series. For actual consumer GPUs the list is much worse: https://rocm.docs.amd.com/projects/radeon-ryzen/en/latest/in... , more or less the 9000 series and select 7000 series. Not even all of the 7000 series.

I think that speaks to them not understanding at the time the opportunity they were missing out on by not shipping a CUDA-like thing to everyone, including consumer tech. The question is what it'll look like in a few years now that they do understand AI is the biggest part of the GPU industry.

I suspect, given AMD's relative openness vs. nvidia, even consumer-level stuff released today will end up with a longer useful life than current nvidia stuff.

I could be wrong, of course. I've taken the gamble...the last nvidia GPU I bought was a 3070 several years ago. Everything recent has been AMD. It's half the price for nearly competitive performance and VRAM. If that bet turns out wrong, I'll just upgrade a little sooner and still probably end up ahead. But, I think/hope openness will win.

Also, nvidia graphics drivers on Linux are a pain in the ass that I didn't want to keep dealing with. I decided it wasn't worth the hassle, even if they're better on some metrics. I've been able to run everything I've tried on an AMD Strix Halo and an old Radeon Pro V620 (not great, but cheap, compared to other 32GB GPUs and still supported by current ROCm).


ROCm is open source and TheRock is community-maintained, and in a minute the first Linux distro will have native in-tree builds. It will be supported for the foreseeable future due to AMD's open development approach.

It is Nvidia that has the track record of closed drivers and insisting on doing all software dev without community improvements to expected results.


> expected results

The defacto GPU compute platform? With the best featureset?


And the worst privacy, transparency, and FOSS integration due to their insistence on a heavily proprietary stack.

Also pretty hard to beat a Strix Halo right now in TPS for the money and power consumption.

Even that aside there exist plenty like me that demand high freedom and transparency and will pay double for it if we have to.


> And the worst privacy, transparency, and FOSS integration due to their insistence on a heavily proprietary stack.

The market doesn't care about any of that. The consumer market doesn't care, and the commercial market definitely does not. The consumer market wants the most Fortnite frames per second per dollar. The commercial market cares about how much compute they can do per watt, per slot.

> there exist plenty like me that demand high freedom and transparency and will pay double for it if we have to.

The four percent share of the datacenter market and five percent of the desktop GPU market say (very strongly) otherwise.

I have a 100% AMD system in front of me so I'm hardly an NVIDIA fanboy, but you thinking you represent the market is pretty nuts.


I did not claim to represent the market as a whole, but I feel I likely represent a significant enough segment of it that AMD is going to be just fine.

I think local power efficient LLMs are going to make those datacenter numbers less relevant in the long run.


I was thinking of getting 2x R9700 for a home workstation (mostly inference). It is much cheaper than a similar nvidia build. But I'm still not sure if it's good value or more trouble.

I own a single R9700 for the same reason you mentioned, and I'm looking into getting a second one. It was a lot of fiddling to get working on Arch, but RDNA4 and ROCm have come a long way. Every once in a while Arch package updates break things, but that's not exclusive to ROCm.

LLMs run great on it; it's happily running gemma4 31b at the moment and I'm quite impressed. For the amount of VRAM you get it's hard to beat, apart from the Intel cards maybe. But the driver support doesn't seem to be that great there either.

Had some trouble with running comfyui, but it's not my main use case, so I have not spent a lot of time figuring that out yet.


Thanks for the answer. Brings my hope up. Looking in my local shops, I can get 3 cards for the price of one 5090.

May I ask, what kind of tok/s you are getting with the r9700? I assume you got it fully in vram?


Stock install, no tuning.

  $uname -r
  6.8.0-107-generic
  $ollama --version
  ollama version is 0.20.2
  $ollama run "gemma4:31b" --verbose "write fizzbuzz in python."
  [...]
  total duration:       45.141599637s
  load duration:        143.633498ms
  prompt eval count:    21 token(s)
  prompt eval duration: 48.047609ms
  prompt eval rate:     437.07 tokens/s
  eval count:           1057 token(s)
  eval duration:        44.676612241s
  eval rate:            23.66 tokens/s

I have a dual R9700 machine, with both cards on PCIe gen4 x8 slots. The 256bit GDDR6 memory bandwidth is the main limiting factor and makes dense models above 9b fairly slow.

The model that is currently loaded full time for all workloads on this machine is Unsloth's Q3_K_M quant of Qwen 3.5 122b, which has 10b active parameters. With almost no context usage it will generate 59 tok/sec. At 10,000 input tokens it will prefill at about 1500 tok/sec and generate at 51 tok/sec. At 110,000 input tokens it will prefill at about 950 tok/sec and generate at 30 tok/sec.

Smaller MoE models with 3b active will push 70 tok/sec at 10,000 context. Dense models like Qwen 3.5 27b and Devstral Small 2 at 24b will only generate at around 13 - 15 tok/sec with 10,000 context.

This is all on llama.cpp with the Vulkan backend. I didn't get too far in testing / using anything that requires ROCm because there is an outstanding ROCm bug where the GPU clock stays at 100% (and drawing like 60 watts) even when the model is not processing anything. The issue is now closed but multiple commenters indicate it is still a problem. Using the Vulkan backend my per-card idle draw is between 1 and 2 watts with the display outputs shut down and no kernel frame buffer.
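
To put rough numbers on that bandwidth limit (the ~640 GB/s figure is an assumption for a 256-bit GDDR6 card like the R9700, and the quant sizes are approximate), decode is memory-bandwidth bound, so per-card tok/sec is capped around bandwidth divided by the bytes of weights read per token:

  # Back-of-the-envelope decode ceiling: tok/s <= bandwidth / bytes read per token.
  # 640 GB/s and the bytes-per-weight figures below are assumptions, not specs.
  def ceiling_tok_per_s(active_params_b, bytes_per_weight, bandwidth_gb_s=640.0):
      bytes_per_token = active_params_b * 1e9 * bytes_per_weight
      return bandwidth_gb_s * 1e9 / bytes_per_token

  # Dense ~27b at ~4.5 bits/weight: ~15 GB touched per token -> ~42 tok/s ceiling.
  print(round(ceiling_tok_per_s(27, 0.56)))
  # MoE with ~10b active at ~3.5 bits/weight: ~4.4 GB per token -> ~145 tok/s ceiling.
  print(round(ceiling_tok_per_s(10, 0.44)))

Real-world numbers land well under those ceilings once attention, KV-cache reads, and overhead are counted, which lines up with dense models feeling slow while the 10b-active MoE does fine.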


Talking to friends who have fought more homelab battles than I ever will, my sense is that (1) AMD has done a better job with RDNA4 than the past generations, and (2) it seems very workload-dependent whether AMD consumer gear is "good value", "more trouble", or both at the same time.

Edit: I misread the "2x r9700" as "2 rx9700", which differs from the topic of this comment (about RDNA4 consumer SKUs). I'll keep my comment up, but anyone looking to get Radeon PRO cards can (should?) disregard.


Given RDNA3 was a pathetic joke, it wouldn't be hard for them to do a better job.

I have this setup, with 2x 32GB cards. It's perfect for my needs, and cheaper than anything comparable from NV.

I have 2 of them. I would advise against it if you want to run things like vllm. I have had the cards for months and I still have not been able to create a uv env with trl and vllm. For vllm, it works fine in docker for some models. With one GPU, gpt-oss 20b decodes at a cumulative 600-800 tps with 32 concurrent requests depending on context length, but I was getting trash performance out of qwen3.5 and Gemma4.

If I were to do it again, I’d probably just get a dgx spark. I don’t think it’s been worth the hassle.


FWIW I’m in love with my Asus GX10 and have been learning CUDA on it while playing with vllm and such. Qwen3.5 122B A10 at ~50tps is quite neat.

But do beware, it’s weird hardware and not really Blackwell. We are only just starting to squeeze full performance out of SM12.1 lately!


The split CDNA/RDNA architecture is a problem for AMD. The upcoming unified UDNA architecture will solve the issue.

Driver support eats directly into driver development resources.

Automatic registration means young adults will not have to consciously confront the possibility. This will certainly decrease the number of people establishing the paper trail that they are conscientious objectors.



Conscientious objectors represent an exceptionally small proportion of the population as it is.


The overwhelming majority of registrations are automatic through driver's licensing. It's well proven that a significant portion of men who are registered don't even know there is a Selective Service.

