It is generally "fine" but as a lifelong Mac user from the 68k days, what it isn't is up to the standards Apple used to hold themselves to. macOS has, especially over the last handful of years, become something of a "death by a thousand paper cuts" experience. And I think the problem Apple is facing is that Tahoe is such a fundamental UI change (and no one likes those, just go back and read up on reactions to the original OS X UI) that people are paying more attention to the flaws. The noise and inconsistencies of the menu icons, the "last 20%" cases where the Liquid Glass UI is actually pretty broken (drop down a long list of available wifi networks in front of a window with a white background behind the list), the places where the UI just seems to fail to update until some background thread finally gets around to it. These are nits, but the way the nits have been adding up over the years is starting to wear thin. Apple has always been a corporation of cycles, and things have gone bad before and then gotten better. But years and years ago, the degree of attention to detail that Apple (usually) put into their software and products was the sort of thing you could point to and demonstrate that, for whatever other flaws the system might have, the attention to detail really helped make the whole experience just better. These days, while they do sometimes still get it right, it does feel like there's a lot of software design decisions made by "warm bodies" and not, as the article puts it, people who "bleed six colors". Tahoe is the first time in decades of using a Mac that I've actually wanted Apple to take a step back and seriously just spend time fixing the bugs. And I daily drove the OS X beta, so my tolerance for buggy software in the face of incredible potential is really high.
> The big idea with Linux/BSD/fully-open-source is that you can fix whatever you don't like.
That's a great theory, and sometimes it's actually true, but in reality for most users most of the time, Linux is as "fixable" as Windows or macOS, because most people, even the technically savvy ones, aren't driver developers. Heck, most software developers probably aren't even C programmers anymore. And even if someone had the competency in the language and in low-level system programming, do they have the time and the inclination to re-write the audio stack so that it finally works correctly? Or to fix the fact that even in 2026, sleep and hibernate are hit and miss? And then to maintain their patch against future system updates or go through the process of getting it upstreamed?
Most Linux users, and especially most Linux users switching from something like macOS or Windows would be waiting and hoping that someone else decided to fix the thing for them because they either lack the skills, time or inclination to do it themselves. And we know this is true because if it weren't true, all the various "wars" over the years like systemd and pulse audio and wayland wouldn't have been a war at all because everyone who didn't like it would have easily patched it out and moved on. But a modern full fledged OS experience is a mess of intertwined and complex dependencies. So when a distro decides to switch a big chunk of the underlying stack like that, most people either have to go along with it, or hope that enough people feel strongly enough about it to fork everything and make their own distro, and then they have to hope the forkers have the passion and drive to maintain that for them.
Yes, you "can" fix whatever you don't like in linux. Just like you "can" find all the information you need to diagnose and treat whatever medical condition you might have online and at your local libraries. But most people are still going to pay a doctor, because most people don't have the time or skills to actually do it.
> but in reality for most users most of the time, Linux is as "fixable" as Windows or macOS,
I disagree with this. For most users, most of the time, Linux is significantly more fixable than Windows or macOS.
In nearly 20 years, I've never had to write a line of C or touch the Linux kernel to fix issues I've had on Linux.
For example, one of my big peeves lately on both PopOS and macOS is the looooong animation to switch desktops.
On PopOS, I had two paths to fix this: tweak the COSMIC desktop to fix the behavior, or simply install GNOME (or KDE or any other DE of choice).
On macOS, I'm SOL. There's no way to fix that on my MacBook (short of installing Asahi Linux, of course).
> Just like you "can" find all the information you need to diagnose and treat whatever medical condition you might have online and at your local libraries. But most people are still going to pay a doctor, because most people don't have the time or skills to actually do it.
This isn't a great analogy, but it's worth noting: many conditions are expected to be self-diagnosed and self-treated. I don't go to the doctor for scrapes, bruises, colds, dry eyes, a stubbed toe, etc. By this analogy, Linux users are buying their own aspirin and applying their own band-aids, while macOS users are waiting in line, dependent on someone else to fix these things.
I say this as someone who uses both macOS and Linux daily.
> On PopOS, I had two paths to fix this: tweak the COSMIC desktop to fix the behavior, or simply install GNOME (or KDE or any other DE of choice).
So what did you do? Did you fix the DE? Again, this is effectively outside the skill set of the sorts of people who would be "switching" to Linux due to the issues with macOS or Windows.
And while installing a new DE is certainly easier than re-programming one, it's still dependent on someone else having written a DE that not only solves your problem, but doesn't introduce entirely new ones and isn't so fundamentally different to the user that they might as well have switched OSes in the first place. And if the user's primary issue was being forced into a major interface re-design like Liquid Glass, having to switch to a completely new DE is more of a lateral move than actually fixing the problem.
And to be clear, the fact that it's POSSIBLE for someone to fix a problem for you even if you can't, and it doesn't have to be the primary OS vendor is a benefit of using an open source OS. So I'm not saying it's not possible to benefit from this. I'm just saying that for most users, most of the time, the ability to "fix it themselves" is effectively as out of reach for them as it is using macOS or Windows because having access to the source code is only the tiniest part of actually fixing a problem for themselves.
Since my doctor analogy fell flat, let me try again with a traditional car analogy. A kit car is infinitely more open, customizable and user controllable than any car bought from an auto manufacturer. And yet, for the vast majority of drivers, buying a kit car, even if it was turnkey and pre-built, would do absolutely nothing to make it more likely that they would do their own repairs or modifications to the car. They will continue taking it to the same mechanics they always took their traditional cars to, they will continue to buy off-the-shelf parts if possible and do without if not.
Does it matter? Generally Linux desktop distributions are made for the people who use them, who tend to be people who will fix things. You mention distros, but there are obviously a lot of passionate distro makers, because right now it seems like there are more distros than ever.
There are often comments on threads like this that go along the lines of "If only the people making the Linux desktop did X then they'd get more people". But there isn't really anyone making Linux on the desktop. It's not a product. Even the products within it are built on the work of people with very disparate interests. It's kind of amazing that we get a cobbled-together working experience at all.
Apple and Microsoft can focus on particular things, like getting more users, or supporting hardware they want to sell, or trying to get you to sign up for Office 365. No Linux desktop environment can have that kind of focus. So when you say it's not fixable for most users I think: well, it's not supposed to be. It's not supposed to be anything, it just kind of is. Like coming across a mountain instead of a theme park: it's not a curated experience, it's not going to be for everyone, you might get hurt, but it's far far more beautiful.
It does matter if, when someone is complaining about something the OS vendor has done, you're selling them on the idea of switching away from their Mac or Windows machine by highlighting that with Linux they could "fix it themselves". It misses the point that most people don't want to "fix it themselves", and even if they had the inclination, for many problems they don't have the time or the skills. If someone is upset that Apple forced a move to Liquid Glass with Tahoe and all the bad UX that comes along with it, it's possible that they could also have the skills to fix their OS if they were equally upset that their chosen Linux distro switched to Wayland. But it's more likely than not that they don't have those skills, and so for that user, Linux is theoretically an OS they can fix, and practically just as likely to force them to accept the march of technology as any other OS they use.
I personally wouldn't try to sell Linux to anyone and get them to switch. It is a futile game and I see no real reason for it. People will move if they have reason to (in any direction) and the best one can do is show and tell. I will tell people what I like using if they ask. I'm more likely to tell folks not to switch because I don't want to be technical support for anyone outside my household.
I don't think anyone will switch from macOS to Linux because of rounded corners. If they're really into theming, it would make sense.
Being able to fix things is also a bit of a vague statement. You can fix things in many different ways, and you can fix some things in every OS. Fixing might be writing your own code, or switching a theme, or an application, or a distro, or the whole OS. The level of lockdown then matters. macOS has the greatest lockdown, because you can't just get a new MacBook and fix it by installing something other than macOS.
Writing Objective-C code for macOS GUI apps was one of those things that finally made "interfaces"/"protocols" really click for me as a young developer. Just implement (some, not even all) methods in "FooWidgetDelegate", and wire your delegate implementation into the existing widget. `willFrobulateTheBar` in your delegate is called just before a thing happens in the UI, and you can usually interfere with or modify the behavior before the UI does it. Then `didFrobulateTheBar` is called after, with the old and new values or whatever other context makes sense, and you can hook in here to do other updates in response to the UI getting an update. If you don't implement a protocol method, the default behavior happens, and preserving the default behavior is baked into the process, so you don't have to re-implement the whole widget's behavior just to modify part of it.
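For anyone who hasn't used it, here's a minimal sketch of the shape (FooWidget and the frobulate methods are just the made-up names from above, not real AppKit API):

```objc
#import <Foundation/Foundation.h>

@class FooWidget;

// Hypothetical protocol mimicking the AppKit/UIKit delegate pattern;
// every hook is @optional, so you implement only the ones you care about.
@protocol FooWidgetDelegate <NSObject>
@optional
// Called just before the change; return NO to veto the default behavior.
- (BOOL)widget:(FooWidget *)widget willFrobulateTheBar:(NSString *)newValue;
// Called just after, with the old and new values for context.
- (void)widget:(FooWidget *)widget didFrobulateTheBarFrom:(NSString *)oldValue
            to:(NSString *)newValue;
@end

@interface FooWidget : NSObject
@property (nonatomic, weak) id<FooWidgetDelegate> delegate;
@property (nonatomic, copy) NSString *bar;
- (void)frobulateTheBar:(NSString *)newValue;
@end

@implementation FooWidget
- (void)frobulateTheBar:(NSString *)newValue {
    // Only consult hooks the delegate actually implemented; anything
    // left unimplemented falls through to the default behavior for free.
    if ([self.delegate respondsToSelector:@selector(widget:willFrobulateTheBar:)] &&
        ![self.delegate widget:self willFrobulateTheBar:newValue]) {
        return; // the delegate vetoed the change
    }
    NSString *oldValue = self.bar;
    self.bar = newValue; // the "default behavior" itself
    if ([self.delegate respondsToSelector:@selector(widget:didFrobulateTheBarFrom:to:)]) {
        [self.delegate widget:self didFrobulateTheBarFrom:oldValue to:newValue];
    }
}
@end
```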
It's probably one of the better UI frameworks I think I've used (though admittedly a lot of that is also in part due to "InterfaceBuilder" magic and auto-wiring). Still, I often wish for that sort of elegant "billions of hooks, but you only have to care about the ones you want to touch" experience when I've had to use other UI libraries.
I don't think this is obvious at all. We don't make the keystroke logs part of the commit history. We don't make the menu item selections part of the commit history. We don't make the 20 iterations you do while trying to debug an issue part of the commit history (well, maybe some people do but most people I know re-write the same file multiple times before committing, or rebase/squash intermediate commits into more useful logical commits). We don't make the search history part of the commit history. We don't make the discussion that two devs have about the project part of the commit history either.
Some of these things might be useful to preserve some of the time, either in the commit history or alongside it. For example, having some documentation for the intent behind a given series of commits and any assumptions made can be quite valuable in the future, but every single discussion between any two devs on a project as part of the commit history would be so much noise for very little gain. AI prompts and sessions seem to me to fall into that same bucket.
> well, maybe some people do but most people I know re-write the same file multiple times before committing, or rebase/squash intermediate commits into more useful logical commits
Right, agreed on this, we want a distillation, not documentation of every step.
> For example, having some documentation for the intent behind a given series of commits and any assumptions made can be quite valuable in the future, but every single discussion between any two devs on a project as part of the commit history would be so much noise for very little gain. AI prompts and sessions seem to me to fall into that same bucket.
Yes, documenting every single discussion is a waste / too much to process, but I do think prompts at least are pretty crucial relative to sessions. Prompts basically are the core intentions / motivations (skills aside). It is hard to say whether we really want earlier / later prompts, given how much context changes based on the early prompts, but having no info about prompts or sessions is a definite negative in vibe-coding, where review is weak and good documentation, comments, and commit messages are only weakly incentivized.
> Some of these things might be useful to preserve some of the time, either in the commit history or alongside it
Right, alongside is fine to me as well. Just something has to make up for the fact that vibe-coding only appears faster (currently) if you ignore the fact that it is weakly reviewed and almost certainly incurring technical debt. Documenting some basic aspects of the vibe-coding process is the most basic and easy way to reduce these long-term costs.
EDIT: Also, as I said, information about the prompts quickly reveals competence / incompetence, and is crucial for management / business in hiring, promotions, managing token budgets, etc. Oh, and of course, one of the main purposes of code review was to teach. Now, that teaching has to shift toward teaching better prompting and AI use. That gets a lot harder with no documentation of the session!
> Also, as I said, information about the prompts quickly reveals competence / incompetence, and is crucial for management / business in hiring, promotions, managing token budgets, etc.
I fail to see why you would need that kind of information to find out if someone is not competent. This really sounds like an attempt at crazy micro-management.
The "distillation" that you want already exists in various forms: the commit message, the merge request description/comments, the code itself, etc.
Those can (and should) easily be reviewed.
Did you previously monitor which kind of web searches developers were doing when working on a feature/bugfix? Or ask them to document all the thoughts that they had while doing so?
You have the source code though. That is the "reproducibility" bit you need. What extra reproducibility does having the prompts give you? Especially given that AI agents are non-deterministic in the first place. To me the idea that the prompts and sessions should be part of the commit history is akin to saying that the keystroke logs and commands issued to the IDE should be part of the commit history. Is it important to know that when the foo file was refactored the developer chose to do it by hand vs letting the IDE do it with an auto-refactor command vs just doing a simple find and replace? Maybe it is for code review purposes, but for "reproducibility" I don't think it is. You have the code that made build X and you have the code that made build X+1. As long as you can reliably recreate X and X+1 from what you have in the code, you have reproducibility.
> You have the source code though. That is the "reproducibility" bit you need.
I am talking about reproducing the (perhaps erroneous) logic or thinking or motivations in cases of bugs, not reproducing outputs perfectly. As you said, current LLM models are non-deterministic, so we can't have perfect reproducibility based on the prompts, but, when trying to fix a bug, having the basic prompts lets us see if we run into similar issues given a bad prompt. This gives us information about whether the bad / bugged code was just a random spasm, or something reflecting bad / missing logic in the prompt.
> Is it important to know that when the foo file was refactored the developer chose to do it by hand vs letting the IDE do it with an auto-refactor command vs just doing a simple find and replace? Maybe it is for code review purposes, but for "reproducibility" I don't think it is.
I am really using "reproducibility" more abstractly here, and don't mean perfect reproducibility of the same code. I.e. consider this situation: "A developer said AI wrote this code according to these specs and prompt, which, according to all reviewers, shouldn't produce the errors and bad code we are seeing. Let's see if we can indeed reproduce similar code given their specs and prompt". The less evidence we have of the specifics of a session, the less reproducible their generated code is, in this sense.
Even with the exact same prompt and model, you can get dramatically different results, especially after a few iterations of the agent loop. Generally you can't even rely on those though: most tools don't let you pick the model snapshot and don't let you change the system prompt. You would have to make sure you have the exact same user config too. Once the model runs code, you aren't going to get the same outputs in most cases (there will be dates and times, logging timestamps, different hostnames and usernames, etc.)
I generally avoid even reading the LLM's own text (and I wish it produced less of it really) because it will often explain away bugs convincingly and I don't want my review to be biased. (This isn't LLM specific though -- humans also do this and I try to review code without talking to the author whenever possible.)
> I am talking about reproducing the (perhaps erroneous) logic or thinking or motivations in cases of bugs
But "to what purpose" is where this all loses me. What do you gain from seeing what was said to the AI that generated the bug? To me it feels like these sorts of things will fall into 3 broad categories:
1) Underspecified design requirements
2) General design bugs arising from unconsidered edge cases
3) AI gone off the rails failures
For items in category 1, these are failures you already know how to diagnose with human developers, and your design docs should already be recorded and preserved as part of your development lifecycle, and you should be feeding those same human-readable design documents to the AI. The session output here seems irrelevant to me, as you have the input and you have the output, and everything in between is not reproducible with an AI. At best, if you preserve the history you can possibly get a "why" answer out of it, in the same way that you might ask a dev "why did you interpret A to mean B", but you're preserving an awful lot of noise and useless data in the hopes that the AI dropped something in its output that shows you someplace your spec isn't specific or detailed enough that a simple human review of the spec wouldn't catch anyway once the bug is known.
For category 2, again this is no different from the human operator case, and there's no value that I can see in confirming in the logs that the AI definitely didn't consider this edge case (or even did consider it and rejected it for some erroneous reason). AI models in the forms that folks are using them right now are not (yet? ever?) capable of learning from a post mortem discussion about something like that to improve their behavior going forward. And it's not even clear to me that even if they were, you would need the output of the session as opposed to just telling the robot "hey, at line 354 in foo.bar you assumed that A would never be possible, but no place in the code before that point asserts it, so in the future you should always check for the possibility of A because our system can't guarantee it will never occur."
And as for category 3, since it's going off the rails, the only real thing to learn is whether you need a new model entirely or if it was a random fluke, but since you have the inputs used and you know they're "correct", I don't see what the session gives you here either. To validate whether you need a new model, it seems that just feeding your input again and seeing if you get a similar "off the rails" result is sufficient. And if you don't get another "off the rails" result, I sincerely doubt your model is going to be capable of adequately diagnosing its own internal state to sort out why you got that result 3 months ago.
The source code is whatever is easiest for a human to understand. Committing AI-generated code without the prompts is like committing compiler-generated machine code.
Unless your company is investing in actually teaching your junior devs, this isn't really all that different from the days when junior devs just copied and pasted something out of Stack Overflow, or blindly copied entire class files around just to change 1 line in what could otherwise have been a shared method. And if your company is actually investing time and resources into teaching your junior devs, then whether they're copying and pasting from Stack Overflow, from another file in the project or from AI doesn't really matter.
In my experience it is the very rare junior dev that can learn what's good or bad about a given design on their own. Either they needed to be paired with a senior dev to look at things and explain why they might not want to do something a given way, or they needed to wind up having to fix the mess they made when their code broke something. AI doesn't change that.
I'll raise you one better. Cannabis is Schedule I, which means that per the DEA it has no currently accepted medical use. But if you synthesize out the primary active ingredient and bundle it in a capsule, the DEA happily recognizes that as a mere Schedule III drug, and you can get a prescription for it even in states where cannabis remains illegal at the state level. It goes by the brand name Marinol.
> Also, on a personal level it rubs me the wrong way to have my insurance premiums go towards something that people could just do themselves, from something they did to themselves.
The usual note for this is that your insurance premiums were already going towards that, just indirectly, by way of paying for heart disease treatments, diabetes management and other secondary effects of obesity.
But I'd also like to propose that "could just do themselves" is carrying a lot of assumptions that may not hold for any individual. A few years back now I started a medication with the side effect of appetite suppression, and I learned something about myself. To the best of my ability to recall, I had never, before starting that medication, not been hungry. "Full" to me was a physical sensation of being unable to fit more food in my stomach, but even when I was "full" I was hungry. Luckily for myself, as a teen and young adult I had an incredibly high metabolism. I could eat 3 meals a day, 3-4 bowls of cereal and milk as an "afternoon snack" after school, and some late evening snacks while watching TV, and I still was in the "almost underweight" category. It was in this context, a time when I could go to a fast food restaurant and order two meals just for myself and stay well inside a healthy weight range, that I learned to eat as an adult. Eventually though, the metabolism slowed down, and I started packing on weight, but the hunger never subsided. Oh sure, as I got older the idea of and ability to eat an entire pizza by myself slowly went away, but the hunger was always there, so I was still always eating, and always eating more than I should have.
And I did manage to lose weight on my own many times. Through extremely strict self-control and portion control, multiple times I managed to lose 25, 30, even 50lbs, one painstaking week at a time. Every day was strict tracking and weighing of everything I ate, and many days were hard battles of "I know I'm hungry, but I've already hit my limit for the day, so I can't eat more", and going to bed extremely hungry with the hope that when I woke the next morning that feeling would have subsided a little. And it worked each time, until inevitably something happened to disrupt the routines and habits built over the months. Maybe it was a set of family emergencies that had me eating on the run, unable to properly monitor everything, and adding some "stress eating" on top of it. Maybe it was running into "the holidays", where calories are cheap and abundant even if you are still keeping track. And sometimes it was just being unable to sustain the high degree of willpower it required to keep myself on the schedule. And what takes months of carefully losing 1lb a week to do only takes a month or two to almost completely undo.
Hunger is probably the closest thing I've ever experienced to an addiction. I've thankfully never had to battle an addiction to anything else, but when it comes to hunger, that eternal gnawing was ever present and, the more weight I lost by sheer force of will, ever distracting. If the idea popped into my head after lunch that "I'd like a snack", it was an idea that would not leave my head until either I'd given in and gotten a snack or forced myself not to give in and waited until dinner. But that forcing meant dedicating ever larger parts of my mental energy away from my work and tasks at hand to just convincing myself not to go get the snack. And worse, when the time for dinner finally came, I was already feeling "hungry" on top of my normal hunger state, so often not eating the snack just meant delaying the excess consumption to dinner, or having to continue that fight at dinner. If it sounds exhausting, in a lot of ways it was. But of course, like you said, I can "just do" this. It's simple CI < CO math. And yet it never stuck, in part because unlike a lot of other unhealthy habits you can pick up in your life, you can't just not eat. Yes, you can eat different things, or eat healthier, both of which can help with weight problems, but you can't stop eating. You have to eat; the hunger is always there, and the thing the hunger wants is the very thing you NEED to literally survive.
But that medication with its appetite suppressant effect was a game changer for me. For the first time in over 30 years, I actually felt full. Not physically stuffed, but "done eating". I could eat a small lunch and think to myself "that was good, and I feel satisfied". For the first time, when the idea of an afternoon snack popped into my head, I could remind myself that dinner was in 2 hours and I needed to make sure I had room to eat that, so the snack could wait, and that would be the end of it, no fight necessary because the hunger wasn't gnawing at me the whole time. When I first started, I was concerned that the medication was giving me anxiety attacks, because about 6PM every day I'd start getting this feeling of my stomach tying itself in knots, and this sensation of "needing something". And after a week or so it occurred to me that what I was feeling for the first time in my life was the feeling of transitioning from having been full and satiated to being hungry again. I'd never not been hungry before. And I know that sounds insane, because it sounded insane to me then. Before taking the medication, if you'd asked me if I knew what it felt like to be full or to not be hungry, I would have told you that I did. But apparently I didn't, and I didn't know that until I started that medication. And for the first time since the weight started coming on, the weight I've lost is staying lost.
So yes, you can "just" eat better and less and control your portions and not eat so much. But from personal experience, it's a hell of a lot easier to have that willpower when your body is giving you the right signals and isn't constantly pushing you over the limits.
At the end of 2023 and the beginning of 2024, I lost about 60lbs, and it was a basic calorie counting thing. For me, it wasn't too hard; I was able to get used to the hunger and after about a month the feeling of wanting to eat all the time was somewhat tolerable.
In May of 2024, I started taking Pristiq, and one of the side effects is a huge increase in appetite. Like you said, I would feel "full" in the sense that my stomach wouldn't fit any more matter, but I was always hungry and pretty much perpetually craving sweets. I would get a whole large pizza for lunch, a large meal at Popeyes for dinner, and chase it down with snack cakes, and I would still be "hungry" the entire time.
I managed to undo all the progress I had made with my dieting and a bit extra, and it was kind of weird. It's not really "hard" to know what to do. Obviously everyone knows to eat less processed food, focus more on protein and fiber, etc, but despite me "knowing" this, it was strangely hard to actually do it.
I'm very thankful that I found out about metformin. I'm not diabetic and never have been, but it's prescribed off-label for weight loss, and according to my doctor it can be useful in the particular case of "canceling out the appetite increase from medication", and to my surprise it worked shockingly well. I'm still not quite down to my diet weight yet, but I'm down about 30lbs in the four months I've been taking it, and I don't really feel hungry all the time. I still enjoy eating unhealthy food, but food is considerably more transactional now: I eat food because I need energy to survive. I budget about 200 calories lower than what my smartwatch says I burn during the day. It's much easier to treat food as a more utilitarian necessity.
If anyone here is in the unfortunate situation of not having their insurance cover GLP-1 medication, I highly recommend seeing if you can get your doctor to prescribe metformin. It's been out of patent for decades and costs on the order of ~$5 a month [2], and there are very few side effects [2], so it's a relatively low-risk experiment.
The article also says that Uber sets various thresholds around this already and that their system flagged it at a score that was "higher than the late night average". What it doesn't tell us is what the threshold is/was for Phoenix, or how that threshold compares to other cities, or even how much higher the score was over the "average". Maybe their threshold for canceling a ride is 0.85, and the late night average is 0.8 in this system. So 0.81 puts the driver over the late night average as per the article, and under the threshold for canceling the ride.
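To make that distinction concrete, here's a toy sketch using the made-up numbers above (nothing here reflects Uber's actual system or values):

```objc
#import <Foundation/Foundation.h>

// Hypothetical values from the scenario above, not Uber's real ones.
static const double kLateNightAverage = 0.80;
static const double kCancelThreshold  = 0.85;

// "Above average" and "actionable" are two different questions:
BOOL isAboveLateNightAverage(double score) { return score > kLateNightAverage; }
BOOL shouldCancelRide(double score)        { return score >= kCancelThreshold; }

// A score of 0.81 is above the average (0.81 > 0.80) but below the
// threshold that would actually cancel the ride (0.81 < 0.85).
```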
Your email provider has systems for detecting spam and removing it from your email. If an email comes into their system and falls under the threshold for being declared spam, but is over the average spam rating for emails in your account, have they done something wrong by allowing it through if it's spam? What if it wasn't spam and they removed it?
These sorts of headlines that espouse a "they knew something and so therefore they are liable" viewpoint seem to me more likely to result in companies not building safety measurement systems, or at a minimum not building proactive systems, so that they can avoid getting dragged and blamed for an assault because they chose thresholds that didn't prevent the assault. And not all measurement systems are granular enough or reliable enough to be exposed to end users. Imagine if they built a system that determined that if your driver was from a low income part of town and the passenger lived in a high income part of town, the chance of an assault was "higher than the late night average". How long would it be before we saw a different lawsuit alleging that Uber discriminated against minority drivers by telling affluent white passengers that their low income minority drivers were "more likely than average" to assault them? I would hope that this verdict was reached on stronger reasoning than "they had an automated number and didn't say anything", but if so, none of the articles so far have said what that reasoning was.
> system flagged it at a score that was "higher than the late night average"
Being charitable to the quality of Uber's legal team, I feel they could easily and compellingly have offered this defense.
It's telling that other documentary evidence highlighted that Uber decided that sharing its reservations about, or acting on, its system would be detrimental to growth.
And so what messaging do you propose Uber put in their app for this? "Your driver has a higher than average probability of assaulting you, you may want to wait for another driver"? That will last until the first driver sues for slander. It's one thing to tell you that "prices are higher right now"; it's a completely different thing to imply to you that your driver is a criminal.