A Linux Evening (fabiensanglard.net)
736 points by ingve on Dec 16, 2022 | hide | past | favorite | 317 comments


Thunderbolt devices appear in the OS as a PCIe switch, so you need two additional bus numbers (one for the Switch Upstream Port and one for the Switch Downstream Port). If the device is hotplugged to a port which has run out of bus numbers, you'll get this error message.
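For readers hitting the same wall: the usual mitigation is to reserve extra bus numbers below hotplug bridges at boot. A minimal sketch, assuming GRUB; pci=hpbussize is a documented kernel parameter, but the value 0x20 here is an arbitrary choice you'd tune to your topology:

```shell
# /etc/default/grub (fragment): reserve additional bus numbers below every
# hotplug bridge so a hotplugged Thunderbolt switch can still be enumerated.
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash pci=hpbussize=0x20"

# Then apply and reboot:
#   sudo update-grub && sudo reboot
```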

Mika Westerberg is constantly fine-tuning the allocation of PCI resources in the Linux kernel to avoid such scenarios. Some recent patches:

https://lore.kernel.org/linux-pci/20220905080232.36087-1-mik...

https://lore.kernel.org/linux-pci/20221130112221.66612-1-mik...

On macOS, it's possible to pause the PCI bus, reallocate resources and unpause the bus:

https://developer.apple.com/library/archive/documentation/Ha... (search for "Supporting PCIe Pause")

We don't have that on Linux unfortunately, so we depend on getting the initial resource allocation right.

Sergei Miroshnichenko has worked on such a reallocation feature for Linux but it hasn't been accepted into mainline yet and he hasn't posted a new version of his patches for almost two years, so the effort seems stalled:

https://lore.kernel.org/linux-pci/20201218174011.340514-1-s....


It sounds like the pause/unpause might be the way to fix this properly, since trying to be heuristically smarter sounds like a recipe for never-ending corner case bugs like the OP’s issue.

The patch for pausing and unpausing seems quite reasonable, except that it does require driver support (unsurprising - you’re literally reallocating the resources used by the driver!). I suppose if you had at least a few movable devices then you should be ok in the event of a hotplug event, so you’d have to hope that enough drivers bother to support the feature.

I wonder what is necessary to get people to care about the patch enough to fix it up and mainline it? I suppose the problem it fixes is still niche enough that not so many people are clamoring for the fix.


The PCI resource allocation code is fairly intricate and everyone is scared that changing it may cause regressions. Sergei's patch set is quite intrusive and it would be necessary to somehow break it up into smaller pieces that are slowly fed into mainline over several release cycles, always watching out for regression reports. So, the problem is known, but the engineers working on PCI code in the kernel are given higher priority stuff to work on by their employers, hence the issue hasn't gotten the attention it deserves.

Actually I forgot to mention there's another solution: A PCIe feature called Flattening Portal Bridge (PCIe Base Spec r6.0 section 6.26). That was introduced with PCIe 5.0. It's more likely that FPB support is added in mainline than the pause/unpause feature. It's supported by recent Thunderbolt chips and it's an official feature of the PCIe standard, so companies will prefer dedicating resources to it rather than some non-standard approach.


In dynamic use cases, the PCIe spec is kind of shabby on addressing space: that is theoretically fixed by FPB.

I guess this is a sorry state of affairs for those niche hardware use cases.

Isn't FPB in PCIe 4.0? (I am not a SIG member, so I cannot read the specs.)


I meant: I know about PCIe addressing (from the web, Linux code, and a book I read years ago), but I cannot read the actual FPB spec.


Would a workaround be that whenever the kernel detects this happening (and it did: it printed the error to dmesg), it somehow increases an internal counter so that on the next reboot there will be more resources?

This would require the kernel being able to either update its own command line somehow, or having some permanent storage somewhere it could store it.

Or this could all be done by systemd - detect that message, increase the resource, next reboot will fix it.
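A self-contained sketch of that detection idea (the dmesg string is assumed from the kernel sources and the hpbussize value is arbitrary; a real systemd unit would parse the journal and rewrite the bootloader config):

```shell
# Simulated dmesg line: the kernel logs something like this when a
# hot-added bridge cannot be assigned a bus number (string assumed).
msg="pci 0000:05:00.0: No bus number available for hot-added bridge"

# A hypothetical service could match it and emit the kernel parameter
# to persist for the next boot (value chosen arbitrarily).
if echo "$msg" | grep -q "No bus number available"; then
    echo "pci=hpbussize=0x20"
fi
```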


Kernel state does not survive reboots afaik.

That would need help from userland, which is not involved in the early boot process.

You could I guess change kernel init parameters and save that in your boot loader, but that is very hackish.


Maybe it can be introduced gradually, making the reallocation an optional feature that a driver might support. Then drivers can independently implement the resource reallocation feature.

Mainline drivers can move gradually. If they want to be nice for out-of-tree drivers then they can describe a timeline for deprecating and removing the support for non-reallocating drivers.


What is the point in having all of the drivers be open sourced and mainlined if we're not willing to fix them to support this?


> What is the point in having all of the drivers be open sourced and mainlined if we're not willing to fix them to support this?

With open source and mainlined drivers, it's very difficult to change all the drivers and ensure they work.

Without open source and mainlined drivers, it becomes impossible.


Possibly it is hard, tedious, or the people able to fix it don’t think it is worth the effort.

Open source projects rely on volunteers mostly so it isn’t like there’s some outside force to appeal to. If nobody volunteers a solution, then it isn’t important enough to solve. The point is that, if it were important enough to fix, anybody with the requisite skills could do so.


[flagged]


You are free to comment on the question raised, instead of a dismissive empty ad-hom.


While tongue in cheek, the answer is accurate: open source does not mean "free service contract"; it means that you can take the code and modify it yourself (and preferably upstream the fix).

Patches come both from vendors and users experiencing an issue. Vendors take care of most things, but for esoteric problems you might only have a handful of people experiencing it. The vendor is unlikely to care, so if you do not write the patch or pay someone to do it, who will?

Still better than the competition, where such problems will never be fixed unless it generates sufficient bad PR...


You're doing the same thing, dragging the original conversation off into berating the commenter for not fixing it themselves because you owe them nothing and they shouldn't be so entitled.

> "Still better than the competition, where such problems will never be fixed unless it generates sufficient bad PR..."

The competition comes under "pay someone else to do it".


What OS are you using where you can get the vendor to implement kernel features to fix obscure driver issues?

I'm sure there's an amount of money you can throw at Microsoft to get something done. I don't know how much it is, but I'm guessing it's more than it would cost to find a vendor to do it for Linux.

The serious answer to " What is the point in having all of the drivers be open sourced and mainlined if we're not willing to fix them to support this?" is "There are many points to this, but one of them is that it's possible to fix them to support it, if someone wants to put in the effort. It's worth something that it's theoretically possible if you really need it, even if no one else has done it yet.".

The answer "You can do it yourself" is meant to help them understand "Anyone can do it, someone needs to step up to the plate. But it's also true that it costs resources. If you're wondering why no one else has done it yet, it's the same reason you haven't done it yet".


With enough money you can get that kind of support from any Linux vendor, e.g. Red Hat or Oracle.


> You're doing the same thing, dragging the original conversation off into berating the commenter for not fixing it themselves because you owe them nothing and they shouldn't be so entitled.

Ah, this argument again. Yeah, the maintainers owe you nothing, as they have already worked their asses off to give you something for free. You have the right to make polite bug reports and discuss fixes, but no one is entitled to force volunteers to do work.

But that is not the same thing as everyone having to fix their own shit. 100 million users do not need 100 million developers.

What matters is that the users that have issues can fix issues, and if the issue affects enough people, it will eventually affect someone able and willing to fix it. That is why open source works, but it requires that some people put in the effort, and many learn to do it exactly when they get annoyed by a bug.

So yeah, if you are not willing to wait for someone else to come around and volunteer to fix it, patch it yourself or pay someone else to do it. That's how the system works, regardless of how demeaning you feel this is to non-developers or developers that feel that their time is more valuable than that of others.

> The competition comes under "pay someone else to do it".

Sure, if you have enough money to convince Apple or Microsoft specifically to prioritize fixing your issue (which may be in an unsupported or deprecated configuration) above what else they were doing, which would cost a whole lot more than just engineer and manager time. You have no alternative, as only your specific vendor can make the fix. Realistically speaking, if you had that kind of money you probably already have employed engineers that you could get to fix your open source issues for you and would not be arguing on hacker news about the need to write patches.

For open source, you don't have to convince anyone in particular. Can't convince the first person you try with money? Just ask the next person, anyone can submit the patch.


I think you are mixing two arguments. One is a good, valid argument about how volunteer maintainers don't owe anyone anything, and absolutely don't deserve to be harassed, insulted, coerced, guilt tripped, etc. And the other is about internet Linux commenters (away from bug trackers and issue lists) replying in ways that close down and end conversation of anything which isn't toeing the 'party line' of how great Linux/FOSS/libre/gratis software/etc. is.

The parent comment by AnIdiotOnTheNet was not in the context of bug reports filed to maintainers, or insulting anyone, or demanding anything specific. The parent of that said that the patch looked good "but" would need driver support, perhaps suggesting that's a showstopper. AnIdiotOnTheNet asked what the point of having open source drivers is if they can't be fixed, or charitably steelmanned read as "the drivers are open so they can be fixed to work with the patch". Blueflow's reply "you are free to submit a patch" is technically correct, but low value - few people on HN aren't aware of that. The following "or request a refund" is conversation ending, "fix it or shut up, stop talking about it".

It's a common reply format on internet Linux discussions which is closer to 'silence wrongthink' or 'cancel culture' than tech discussion.

> "So yeah, if you are not willing to wait for someone else to come around and volunteer to fix it, patch it yourself or pay someone else to do it."

Or ... talk about it, rant about it, 'raise awareness', exercise freedom of speech. "Patch it or shut up" aren't the only options. And look, dkozel replied with a long and technically detailed comment[1] and didn't need anyone jumping in to silence unapproved questions.

[1] https://news.ycombinator.com/item?id=34016094


Your ideas and intentions might be good and noble, but at the end of the day it's the contributors and maintainers who burn out. And from my impression, most people in OSS are already fed up with supporting users. And so am I. Telling users off like "fix it or shut up, stop talking about it" is a necessary step to protect yourself.


What it sounds like you’re saying is “If you don’t know how to code, GTFO” because in all likelihood the parent made this comment because they’re not capable.


Not quite. If you are a non-developer not patient enough to wait for others to volunteer, or are a developer thinking that your time is somehow more valuable than those of the maintainers, then GTFO. :)

Waiting patiently and politely reporting bugs is a fine strategy: If a problem affects enough people, it will eventually affect a developer capable and willing to fix it. If you want it go faster, you will have to get your hands dirty - many contributors acquired the skills exactly because they were annoyed by an issue and decided to fix it.


Where did AnIdiotOnTheNet's comment include not being a developer, not being patient, or thinking their time is more valuable than others?

Why is it so much more important to put someone in their place and win internet points with ad-homs in the "community" which supposedly values freedom?


"Whats the point in ${enormous amount of work others did for me for free} ..." is never a polite move. Neither is your second sentence.

You two are just rude. No amount of freedom forces other people to bear with that. And if you think it's not rude, still, the world is large enough to avoid each other.


> ""Whats the point in ${enormous amount of work others did for me for free} ...""

That is the least charitable interpretation of it, and I think not at all what it actually says. A much better reading of it is, paraphrased:

"The PCI-e patch is nice, but it would be a dealbreaker waiting on driver support."

"The drivers are open source and checked into the same tree, so there's no dealbreaker there in waiting for vendors or coordinating with third party organisations and their release schedules {and that's the point of desiring open source drivers, so the system isn't hobbled by binary blob drivers and vendor release schedules}".


> dealbreaker

Very poor choice of words. If the deal is bad, request a refund!

I understand what you are trying to say, but i think that's just entitlement.

Just accept that, as a non-contributing OSS user, you have zero leverage over which features other people pour their energy into. If you do, that's a charitable exception and happens at the generosity of the one doing the work for you.

Edit: Let me quote the licensing terms that you agreed to and gave you permission to use the linux kernel at all:

    This program is distributed in the hope that it will be useful,
    but WITHOUT ANY WARRANTY; without even the implied warranty of
    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
    GNU General Public License for more details.


Can you get off your high horse about finding someone to punish for some imaginary sin which hasn't even been committed in this thread, and respond to some of the things I said and the points I made?

The "deal" in question was imaginary person A offering patches to the kernel to fix the article's PCIe allocation, and the kernel maintainers hypothetically refusing the patches on the grounds that it would break compatibility with drivers and so it cannot happen. Then came the comment that the drivers could be updated to match, so the patch could hypothetically happen. There is NO entitlement anywhere in this imaginary scenario. There is no demand for anyone to write such a patch, no expectation that someone get right on updating the drivers, no insinuation that someone owes a review of such a patch; nothing that you are so het up about has happened, nor been implied, demanded, expected, requested, or suggested.

> "Just accept that, as a non-contributing OSS user, you have zero leverage over which features other people pour their energy into."

Where did you come up with the idea that I am a non-contributing OSS user? Because it drives your superiority fantasy, I suspect, where "putdowns of the inferior" are the order of the day.


I wouldn't exactly call the comment you're replying to an ad hominem attack.


Why not? You don’t think it changes the topic from being about the content of the comment (open source drivers) to being about the person who wrote the comment (what they should and shouldn’t be permitted to say based on how much they paid or didn’t pay)?


If the problem lies in the entitlement of a person, an appropriate response is going to be an ad hominem, and rightfully so.


You have not shown any signs of this spectre of entitlement haunting your every comment. If you had, an appropriate response would be educational, helpful, or perhaps a link to Rich Hickey's Gist[1].

By your comments, it would be appropriate to ad-hom you now for your apparent entitlement to controlling other people's speech about OSS, right?

[1] https://gist.github.com/richhickey/1563cddea1002958f96e7ba95...


It's simpler: your attitude is bad, i told you off, you have no intentions to change your attitude, neither do i.

And if that's "controlling other people's speech" to you, then yes, your attitude is the problem. Realistically, i'm just some random dude on this forum; telling you off is the worst thing i can do to you.

Enjoy your refund.


> "you have no intentions to change your attitude, neither do i."

Multiple times now I have pointed out that I have not shown the behaviours you are accusing me of. The difference of who has intentions to change is that you are wrong.

> "i'm just some random dude on this forum, telling you off is the worst thing i can do to you"

You are doing so like a teacher with the wrong understanding, and despite being corrected you are intent on making sure someone gets told off whether it’s appropriate or not.

> Enjoy your refund.

You are free to link to the place in this thread where I personally said I was unsatisfied with some piece of software, or unsatisfied with the work someone was doing on it, or in any way mentioned desiring a refund or expecting more work? I think you won't be able to find such a place, so your 'clever comment' falls flat.


The key takeaway, especially for the author who wants things to "just work", is that he should have used a USB external disk instead of a Thunderbolt external disk. OSes tend to have more issues with Thunderbolt disks (such as the one you explained in detail) than with USB disks, because the latter are more common, so more corner-case bugs have been eliminated.


Yeah, first thing I thought was oh that drive looks esoteric.


Any book or documentation you recommend to read to someone interested in getting involved?


> I spent several hours fixing a problem and I learned next to nothing in the process.

This is probably true for this specific case here, but my experience with fixing stuff in Linux is actually the opposite. I learnt a lot doing so, and learnt stuff that turned out to be later useful in very unexpected spots.

Back when I was a teen and using Windows, I spent countless hours fiddling with stuff in regedit and other atrocities, and it feels like I never learnt anything useful in the long run.


I think the issue is that frequently the problem is with a small part of a complex subsystem that you don't understand (or perhaps didn't even know existed) before something went wrong. This sort of problem is extremely time consuming to debug via "properly" learning about the system, so you end up (much like Fabien) taking a fix from somewhere without fully understanding it.

There's also a timing issue. Often when something goes wrong it's at a random time when I'm probably intending to do something else. I don't want to put off that other thing even more by taking a "slower" route to fixing the issue.

You can of course still learn a lot along the way, as you say. So, it's not all lost, but it does seem sub-optimal.


Part of it is the binary config nature of Regedit though. I can't, from first or second principles, reason about which binary key to flip in order to change the behavior I'm looking for. Was it HKLM/ffebdcaaf, or HKLM/afbedccfh? Meanwhile if, e.g., DNS got broken on my Linux box, there's a series of config files and daemons to check in order to figure out what's broken and then to get it working again. There's no way to learn anything with Regedit other than knowing you can just Google for the name of the key to set by looking up the problem.
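To illustrate the "chain of inspectable text files" point: resolver config on Linux is plain text you can read and script against. A toy example over sample data (not a live system):

```shell
# Sample contents of /etc/resolv.conf (systemd-resolved stub shown).
resolv_conf="nameserver 127.0.0.53
options edns0 trust-ad"

# Extract the configured nameserver, as a debugging one-liner would.
echo "$resolv_conf" | awk '/^nameserver/ {print $2}'
```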


Eh, having worked with both systems, I'd say they're roughly equally impenetrable. For sure, the registry sucks. But at least it's uniform and scriptable. Once you know what needs to be changed, you can write a .reg file and be reasonably confident it will at least change the registry in a valid way.

On the other hand, if the solution to a problem on Linux is to change some dotfile controlling some daemon, reliably scripting it can be difficult to impossible. Did the distro put the config file somewhere weird? Is awk good enough here? What if the key I'm looking to update is found in a comment, and what is the commenting structure for this file anyway? What if the daemon rewrites its config file on exit [0]? Or has a config file with an extremely complex format [1]? Or I need to poke several /sys files in a particular order [2]?

Both systems are hard to reason about, especially if you don't know them. But Linux isn't easy to approach for the first time, and there are advantages to the way Windows does things.

[0]: https://askubuntu.com/questions/251797/transmission-daemon-k... [1]: https://www.sudo.ws/docs/man/1.8.15/sudoers.man/#SUDOERS_FIL... [2]: https://www.youtube.com/watch?v=9-IWMbJXoLM
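For the .reg point: a complete import file is just a header plus key paths and values, so it is trivially diffable and scriptable. A sketch (the key path and value name are made up for illustration):

```reg
Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\Software\HypotheticalApp]
"EnableFeature"=dword:00000001
```

Importing is then a one-liner (`reg import example.reg`) or a double-click on the file.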


Usually the adoption and uniformity, as well as the generally higher quality of Windows, in addition to the availability of good debugging tools (which work), make troubleshooting easier.

I recently had an OpenGL related issue on Linux, I googled the error message and got 1 search result. Now I can speculate, is my program wrong? Or is it the Nvidia driver, perhaps Xorg? Did the Ubuntu devs configure something wrong etc.

Why won't Nsight or Renderman work on my machine?

Windows has by contrast a significantly larger userbase, an architecture that's set in stone, meaning I can find relevant answers from a decade ago, and there's thankfully no Windows 'distributions' so the issue I face is likely to appear on any person's machine.


I think the two problem spaces are slightly different.

- for windows, the problem is solved by:

  1) common sense troubleshooting
  2) web search for the problem
  3) ask the developer
- for linux, you also get:

  4) break out the source
  5) push an update


Usually my go-to for "Linux problems" is "tear down the VM and start a new one", but a few days ago my Manjaro rig at home lost power, and bluetooth wouldn't work upon restart. I wracked my brain, pored through forum threads, man pages, and LFS, but nothing really helped. It looked like several different problems at once, but none of the individual fixes worked, or only worked for minutes at a time. I stepped away from the problem for a while, as one does, while I mused about the way troubleshooting works, and how my early life with Windows and now modern containerization has predisposed me to certain types of solu-- wait, I've got it! rmmod the bluetooth modules, modprobe 'em back in. Everything worked flawlessly. I'm not sure what I learned.

*sigh*


>I'm not sure what I learned.

"Have you tried turning it off and on again?"


I've done this enough, but don't stop there as it's likely to happen again. When I resort to such a solution, I take a look at the syslogs. More often than not, something there will tell me why a simple restart of a service, module, whatever was the answer.

I had a similar problem as the article with a standard USB 3 enclosure. Two PCs, identical OS (openSUSE Tumbleweed), same kernel version even. modprobe -r, then modprobe usb_storage, and it worked. An obscure error in the logs pointed to the uas module, which for some reason failed to reload, and that is what enabled the enclosure to work.

Oddly, the other PC on which it worked was loading uas.


Exactly. My tendency is to jump in and try to root cause problems but nowadays the most productive solution is almost always some kind of a restart.


I always refer to this as Computer Science 101.


Yes, I remember, even up until very recently, having to google help on Windows. You'd get webpages full of adverts with a heading '12 ways to fix Win10 freezing during upgrade bug!'. And the first two would be something like 'switch it off and on again', 'try again', 'reconnect to your wifi'.

I always felt I could dive deeper with linux problems _if I wanted to_, and that weirdly gives me some confidence that I can fix it.


And after you scroll past the obvious solutions, "Install our Fix-O-Matic software for $29.99!" because the page is actually an advertisement cloaked as a help page.


Linux is like driving a car with your hands in the engine


Windows is like driving a car that is actually a city bus full of hostile randos that take a lot of effort to constantly remove.

MacOS is like driving a car by yelling at the driver from the kid's seat in the back.

In the end, learning how an engine works to maintain one myself is not so bad.


This contradicts my experience.

I use Linux every day and I never touch the engine.

I also set up Linux for computer-illiterate old ladies. They love it.


Contradicts my experience as well.

I started out using DOS in the mid 90's to play games, used Windows 3.1, skipped over to 98SE then XP and 7.

I have used Macs before, not enough to make an impression.

I tried Linux back when I was in college during the XP days, but it always seemed to need more tweaking than my XP install, and the lack of gaming kept me on Windows.

Currently I am running Linux Mint. I install the OS and go. No need to tweak or mess with command line.

I am not that interested in which OS I am running, aside from the fact that I dislike telemetry and forced updates. I want the OS to get out of the way so that I can do whatever I am trying to do, and Linux seems to be there now. It feels like Windows has regressed, with settings pages buried in 5 different places.


Some people just don't like the Linux (or Unix) way of doing things. I personally can't understand the predisposition against it, but hey, whatever works for them I guess.


For your average user it's exactly the same desktop as windows and mac : there's menus and icons, you click on stuff, there's a browser... What's this "way" that they don't like?

They don't like free, stable and fast? They don't like how it isn't going after your wallet every 5 minutes?

It's gotta be propaganda at work. It's the only explanation.


Ubuntu desktop is anything but stable. Nuke-and-pave is the only sane solution to the OS corruption issues. The GUI is more stripped down than a holocaust victim. They still haven't figured out drag and drop. And on top of that, everyone swears such-and-such distribution is actually the good one, but it never is. The desktop is in permanent demo mode. A back-burner school project that turned into a part-time hobby.

Maybe you just use it for web browsing. But for everyone else, there's Windows. Which is still crap but at least you can set an environment variable without doing a research project into the differences between .bash_profile, .bashrc, .profile, and /etc/environment. None of which have a GUI. And the best part is that none of them can ever be renamed because muh scripts would break and the entire OS would come crashing down.
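For what it's worth, those files differ mostly in who reads them and when: /etc/environment is plain KEY=value read by PAM at login (no shell syntax), ~/.profile runs for login shells, and ~/.bashrc for interactive bash shells. A tiny sketch of the /etc/environment format, parsed roughly the way pam_env does:

```shell
# /etc/environment lines are plain KEY=value: no 'export', no expansion.
line="EDITOR=vim"
key="${line%%=*}"
val="${line#*=}"
echo "$key -> $val"
```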


Ok, ya, last time I used Ubuntu I didn't like it either. I like Debian (Ubuntu is derived from it apparently), with MATE desktop.


It is about what I would expect from an OS that considers UI to be bloatware and viral anti-business licenses to be more free than permissive ones.


Freedom 0 of free software, use the software for any purpose, includes commercial usage. There's nothing anti-business about it.


I agree. Using software modifications as a competitive advantage in business does not violate free software licenses. It does, however, violate non-free licenses like the GPL.


It's just the equivalent of a consumer protection law: you have to sell people the software, not a build artifact.


> I also set up Linux for computer-illiterate old ladies. They love it.

I think you're trying to say that they don't have problems with the browser, and maybe an email client. If they're computer-illiterate, they don't know enough about an operating system to have an opinion at all, other than being happy about it turning on and off when they want.

Don't mistake that for loving Linux, because it is not really praise of Linux, it is praise for a computer that works well enough to run a web browser, which is an extremely low bar.


Linux can manage a browser, an email client, and a word processor on an early-2000s computer found for $50 at a thrift store.

Meanwhile the latest supported versions of MacOS or Windows will not work on such a machine. Those ecosystems demand you continually buy new hardware.

I think it -is- Linux they love here.


Out of curiosity, what distro do you use and what is your distro of choice for the old ladies?


Not OP, but chiming in nevertheless, as I do also "set up Linux for computer-illiterate old ladies".

They all roll Debian stable with unattended_upgrades and it's an absolute joy of stability.

Not that I would use Debian stable for my personal machines: I want the latest shiniest and I want to be close to upstream to report bugs when they're fresh and easy to fix by the maintainer (yes, "I use arch btw" indeed), but for "computer-illiterate" people, stability trumps everything else.

The "computer illiterate" aren't amused by constant breaking behavior / visual changes. They want the thing to work reliably and unsurprisingly. Debian delivers this and never breaks. Once every major dist-upgrade I review changes with them, and they're good to go until next time.
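For anyone replicating this setup: on Debian, the automatic upgrades are driven by two APT settings. This is, if I recall correctly, the stock content that `dpkg-reconfigure unattended-upgrades` writes:

```
// /etc/apt/apt.conf.d/20auto-upgrades
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
```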


What about Fedora Workstation for the old-lady use case? I'm about to give my 12-year-old non-technical daughter a Linux laptop, and am wondering about the distro. I'm thinking Fedora but open to being convinced otherwise?


For your daughter tinkering with a distro, yeah Fedora sounds awesome :)

For the old ladies, no: Fedora has a short release cycle, and no LTS; not great for maximal stability. "a version of Fedora is usually supported for at least 13 months" ( https://endoflife.date/fedora ), while Debian is security-supported at least 5 years! ( https://endoflife.date/debian )

Debian's whole project architecture & release cycle is built towards being stable bedrock, I see virtually no reason to use anything else if you aim at max. stability and don't care about running old (but still secure!) software.


Pick something you're willing to maintain. Stable releases tend to reduce maintenance and surprises.


MX. Fast. Stable as a rock.


Nice. What is your go to desktop environment for them?


- Generally, GNOME with the dash-to-panel/dash-to-dock extension-du-jour to have a left-side panel with app buttons (so, a "Ubuntu/Unity"-like look). It baffles me how stock GNOME still insists on not providing a left/bottom "dock" / task switcher as a built-in option, forcing you to go to the menu every time you want to access / switch to an app, but here we are with extensions oÔ.

- That being said, a well-configured/simplified Plasma these days would certainly do just as well (or not; see my final paragraph below about locking things down: GNOME is less infinitely configurable than KDE/Plasma, so there's less room for user fuckup). Also, I'm certainly biased towards GNOME because that's what I choose to use for my machines. Also also, I go where there are the most eyeballs (again, for stability and the maximal guarantee that bugs are seen), and for now that's GNOME.

- Exception: one grandma has a particularly old laptop (x86-32 old :D) with meager RAM, so for her setup I went with MATE. I hesitated between LxQt and MATE, tried both, and MATE (at the time) seemed a tad more polished.

Finally, since I have your interest and GUIs are a subject I love ranting about, one more thing you need to do when setting up an old ladies machine, is to LOCK DOWN ALL THE THINGS YOU CAN:

- LibreOffice toolbars that can be accidentally dragged, then maybe dragged offscreen or closed? LOCK THEM.

- Thunderbird folder column titles that will change email ordering if accidentally clicked? HIDE THEM with userChrome.css until https://bugzilla.mozilla.org/show_bug.cgi?id=237051 is fixed.

- Etc etc. When I'm done provisioning such a machine, I spend an hour playing with it, wondering "how could someone haphazardly clicking and dragging everywhere break/mismanage this?", and I disable/lock/hide everything I can. There are a million configurable / breakable things that we power users don't imagine being confusing, because we know the features and we're precise at clicking on stuff. But to non-computerists, these things are veeeeery confusing! When they invoke one accidentally, they'll email you a gibberish issue description, and when it happens too often they get scared. There's potential for a whole distribution aimed at such users, with everything properly locked down.
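For reference, the userChrome.css approach from the Thunderbird bullet above looks roughly like this. The selector is an assumption for the classic XUL-based UI; element ids differ between Thunderbird versions, so verify with the developer toolbox before relying on it:

```css
/* Hide the clickable column headers in the thread pane so a stray
   click can't re-sort the mail list. #threadCols is an assumed id
   for the treecols element; check it in your Thunderbird version. */
#threadCols {
  display: none !important;
}
```

In recent versions you may also need to flip toolkit.legacyUserProfileCustomizations.stylesheets to true in the Config Editor for userChrome.css to be loaded at all.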


For thunderbird column header, there is an extension that locks them (unless you're ctrl+clicking them)

Might be more intuitive than hiding them?


Thanks! Yup, if you look at the bugzilla link I shared above, you'll see I mention this extension in the last comment: https://bugzilla.mozilla.org/show_bug.cgi?id=237051#c11 .

I couldn't check it as I run Tb beta (which the extension didn't support, last time I checked), but I might give it a try next time I maintain one of these machines... or not:

0. They never ever need to click it to re-order. Why bother with an unneeded feature?

1. Extensions break! UserChrome.css too, but maybe less so :) , and a CSS rule is simpler/smaller in added complexity & potential for bugs than an extension.

2. Also, hiding the columns means one less bit of chrome to read/parse. Hiding stuff avoids misclicks, but also increases GUI legibility for people with bad eyesight.


Mate


I have been setting up Ubuntu for non-technical folks for a decade. Chromebooks work great too, but I cannot justify recommending them given the proprietary Google bits included. De-Googled chromeos would probably suffice for most people though.


Honestly, if you have older hardware, like a couple years old, most of the popular distros are pretty stable.

Ubuntu on hardware a few years old will chug along without issues more often than not, especially if you are just using it as a glorified thin client for Chrome, which is an increasingly large base of users.

There are of course some gotchas with things like integrated vs. dedicated graphics in laptops, or device drivers for peripherals like printers (it is still Linux after all, year of the Linux desktop any year now aha).


Debian and Debian.


And thinking "why doesn't everybody want to do this?"


Can I steal this? I just started a microblogging career and it's too good.


> but my experience with fixing stuff in Linux is actually the opposite. I learnt a lot doing so, and learnt stuff that turned out to be later useful in very unexpected spots.

Same, in fact it's how I fell ass-backwards into my career path.


I think unless one uses Linux on a daily basis it's really difficult for the knowledge to sink in. I decided to install an Arch Linux VM and see what happens.

I'm kinda disappointed in myself, as I found out I don't like too much trouble, so I'll probably never be a good/great programmer.


Installing Arch is much closer to system administration than programming - they may be related somewhat and attractive to similar people; but you can be a quite successful programmer and barely be able to install Ubuntu.

I know people who have written kernel-level Linux drivers who have difficulty upgrading macOS. They're separate skillsets.


I was going to say, I know a lot of programmers, maybe even most, who avoid Linux like the plague because they don't want to waste cognitive cycles on fixing their broken machines. I have found myself in this same place after a solid 12 years or so of Linux use for pretty much everything, home, work, etc. I honestly kinda want to get back into it, but there are other priorities. As long as I have access to some kind of unix-y command line from somewhere I'm good.


That's how WSL gets traction: being good enough as a Unixy command line.


Thanks for sharing. I'm a bit surprised about the second paragraph but then I realize it is indeed two skillsets.

I think I'm split between career growth and hobby. My career is much closer to sysadmin/devops than to low-level programming, but my hobby is probably closer to the latter. Of course, it could be just a fancy; low-level programming fascinates me even though I have never really done any, apart from some entry-level MCU programming.


>I'm a bit surprised about the second paragraph but then I realize it is indeed two skillsets.

I started out my career on the sysadmin side of things before becoming a developer and that doesn't surprise me at all. Most devs I work with have little to no understanding of things like file permissions, how networking works beyond making HTTP calls with $SomeRequestLibrary in their programming language of choice, or how services/daemons work in Windows/Linux.


I wouldn't write it off just yet, at least if you want to be a good programmer! While there might be some correlation, there's plenty of fantastic programmers who don't want the hassle and want to focus on a specific problem and just use Windows/macOS and an easy to setup/low configuration editor. One particularly great TA I had in college comes to mind...


Thanks, but I actually wanted to learn low level stuff, or so I thought. Anyway I'll see what happens. So far nothing has really clicked for me, but if that's it then that's it.


People say "Arch makes you learn how Linux really works", but I don't think that's a good way to put it.

Arch teaches you about the most boring parts of how linux works: the specific idiosyncrasies and configuration minutiae of a ton of libraries and programs (GNU coreutils, systemd, udev, mesa, glib, X11, Xlib, xdg, dbus, etc.)

Arch will usually not teach you the really interesting "low level" things about Linux: How the kernel works, how threads are created, etc.


I'd say it teaches you how to use Linux. It's not the particular details of each config file, but the fact that there are config files, and where to look for them. By the end of an Arch install, you know at least 85% of how to manage your system, and the wiki will take you to, let's say, 98%.


gentoo gets pretty close, for a full system.


No it doesn't. But that's fine, it's not the point of a distro to teach you about interrupts and syscalls and all that nonsense, the point of a distro is to give you software to install. That said, Gentoo does do a good job of teaching you how annoying it is to try and figure out exactly which kernel options you need to activate to get a working system. I always run out of patience before I get it working, though (yes, I know about make localmodconfig).


What types of low-level things did you want to learn?

A lot of what seems low-level in Linux is implementation-specific. For example, how you set up wifi or how you manage your firewall. Knowledge of these things gives you less long-lived and less transferable skills than knowledge of protocols like TCP/IP, which is shared by everything plugged into the internet. It doesn't make you a better programmer, unless these daemons and tools are what you want to write software for.

Also, your options are not only Arch Linux and Windows/Mac. There are more desktop-ready distributions, too, like Ubuntu or Fedora. Linux lets you still inspect the plumbing if you install one of those.


> What types of low-level things did you want to learn?

To be honest, I've dabbled in a lot of low level stuff but never went deep enough to make any impact, career-wise at least. For example, I completed a full course on MCU programming, completed the first half of nand2tetris, read the first few hundred pages of "Beginner Reverse Engineering" (basically the part that teaches one to recognize C in decompiled assembly code), and many others. My most recent low level adventure was the book "Practical Binary Analysis", which I planned to spend the whole holiday on, pushing through maybe a few chapters.

But I never drilled deep enough into any of the topics. I believe I have the brain to drill at least a few chapters deeper into any of them, but I don't have the mental power to do it. I secretly want someone to put me in some sort of prison and keep me from getting out until I've completed all my previous projects and learning. But of course I need to figure out a way to deal with the problem.

Anyway, really appreciate your answer, and I'll probably still install Arch Linux, play with it a few days and then lose interest -- just another failed project to spend time on.


and this article will now be much higher in Google results for the next person searching :)


> > Out of curiosity, how did you come up with this solution?

> The author, dkozel, never came back to answer. I imagine they typed the solution on a 40% keyboard featuring unmarked keys and then rolled into the sunset on a Segway for which they had compiled the kernel themselves. Completely oblivious of their awesomeness and of how many people would later find solace in their prose.

Here's how I would have started to find the answer:

  1. Open https://lxr.linux.no/, search for
     "No bus number available for hot-added bridge"
  2. Open the three files it finds.
  3. ^F that same string in each file, then notice
     that
     https://lxr.linux.no/#linux+v6.0.9/drivers/pci/probe.c#L3336
     is it.
  4. Chase down `dev->bus->busn_res.end`.
     Er, well, this step is hard because `end` is
     so generic.
  5. git clone linux and use cscope to index it.
  6. Search for assignments to `end`, filtering with
     egrep for bus and pci.
  7. Weed through it all until I find
     https://lxr.linux.no/#linux+v6.0.9/drivers/pci/probe.c#L2964
  8. Search for assignments to `pci_hotplug_bus_size`, find
     https://lxr.linux.no/#linux+v6.0.9/drivers/pci/pci.c#L6882
  9. Search for `pci_hotplug_bus_size` and also find
     https://lxr.linux.no/#linux+v6.0.9/Documentation/admin-guide/kernel-parameters.txt#L4267
     which says:

  hpbussize=nn    The minimum amount of additional bus numbers
                  reserved for buses below a hotplug bridge.
                  Default is 1.
But @dkozel probably just knew the answer.

I do find it hard to believe that the default for this setting is 1 though!
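If you don't have an lxr or cscope setup handy, step 1 can also be done with plain grep. A self-contained sketch, with a toy file standing in for the kernel tree (on a real checkout you'd run the same grep against the actual sources, e.g. under drivers/pci/):

```shell
# Toy stand-in for a kernel checkout: one file containing the error string.
mkdir -p demo/drivers/pci
cat > demo/drivers/pci/probe.c <<'EOF'
dev_warn(&dev->dev, "No bus number available for hot-added bridge\n");
EOF

# Same search as step 1: -r recurses, -n prints the line number,
# so the output points straight at the file to open.
grep -rn "No bus number available" demo
```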


https://sourcegraph.com/github.com/torvalds/linux is a more featured search engine with limited go-to-definition functionality, and removes the need to clone locally and search in many cases. Can be used with any GitHub repo. Just make sure to first enable "case sensitivity" and "structural search" using the buttons in the right of the search textbox so that the search is exact (which is a task that GitHub's own search fails at).


Ah yes, I tend to forget. What happens is that as soon as I'm looking for "assignments to..." I just reach for cscope (well, where the language is supported by cscope).


Thanks for this!


Does this just mean that they added 51 additional bus numbers?

I am curious if the kernel could just bump this number up, especially if Thunderbolt is getting more popular.


I doubt you need the work but, man, you'd make a dream Integration engineer. Like on a scale of 1 to 1,000 you sound like a perfect 1000.


I mean, this is... basic stuff. Well, ok, one has to learn to navigate enormous codebases, but then when one has, this sort of thing is easy. I'm not a genius! And this skill can be taught and learned, and nothing helps like actually doing it. There was a time when I had no clue, you know.

Your mention of integration does bring back some memories of a bank I worked at where we had source code to proprietary operating systems, and it was sooo useful to a) debug things, b) find undocumented things and ways to work around bugs and limitations, and I was pretty much the only one using that source code. At the time I didn't really understand that (b) is risky, but I managed not to get burned. That was how I started learning how to navigate huge codebases.


Kernel parameters are usually documented in the kernel-parameters.txt file in the source. See https://github.com/torvalds/linux/blob/master/Documentation/...:

  hpbussize=nn The minimum amount of additional bus numbers
    reserved for buses below a hotplug bridge.
    Default is 1.
The way to get from the error "No bus number available for hot-added bridge" at https://github.com/torvalds/linux/blob/master/drivers/pci/pr... to this specific kernel param is still a bit involved.
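For anyone who lands here with the same error: applying the parameter means putting `pci=hpbussize=nn` on the kernel command line, typically via the bootloader config. A sketch for GRUB (the 0x33 value is the one from the article; file paths and the regeneration command vary by distro):

```shell
# /etc/default/grub (fragment): reserve 0x33 extra bus numbers
# below each hotplug bridge, as in the article
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash pci=hpbussize=0x33"

# Then regenerate the config, e.g. on Debian/Ubuntu:
#   sudo update-grub
# and after a reboot, verify the parameter took effect:
#   cat /proc/cmdline
```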


That doc string raises so many questions. First, it says "minimum", which to me implies that in some cases a larger value is used, but no clue when. Secondly, why is the default 1, and what is the allowed range of values? The default being 1 implies to me that there is some cost associated with the reservations, otherwise it would be much higher. Is the 0x33 from the article completely arbitrary; would 2 have worked just as well? What are the implications of using a high value here?


What's the approach used by OSes that "just work" for things like this? Is this due to the more monolithic kernel of Linux making more things required to be known up front than e.g. in Windows?


I always like this story about macOS.

- Most OSes do a full DHCP lookup when they connect to a network, even if they have connected to that network before. Saved wifi networks are a good example of networks where you have some historical information about the connection.

- macOS, however, keeps track of the IP addresses it saw on each of the networks it has connected to recently. If it sees a network it's been on before, it first tries to just use the old IP address, under the assumption that the lease is still tied to the user's laptop.

- In the best case, the laptop gets a working IP very quickly; in the worst case, you just do DHCP again.

- From a user's perspective, if they walk into a meeting with a bunch of folks with non-Mac laptops, it will seem like the Mac connects much faster than everyone else's machine.

When I read this story, the author pointed out that many people say "Macs just feel like they work better!" and used this as an example. Granted, this partly comes from owning the ENTIRE stack, from hardware to drivers to OS, which is something only Apple can do.


...and the network intrusion detection system throws up all sorts of alarm bells about IPs being spoofed and duplicate IPs in use.

I assume macOS actually only re-uses IPs if it sees the lease hasn't already expired, but it might be dumber than that.


IIRC it does, and otherwise sends out an ARP probe (or something like that) to see if anyone on the local link is using the IP. A properly behaved DHCP server would not hand the IP to someone else if the lease hasn't expired.


> properly behaved DHCP server

One of the common issues with this is that home routers don't usually persist leases across power cycles.


IIRC DHCP leases should be refreshed halfway through the lease time. Or perhaps I am mistaken and this is just how Windows works.


According to another comment (https://news.ycombinator.com/item?id=34013844), macOS will pause, reallocate, and un-pause the bus.


Is there one? I know that unplugging a Thunderbolt peer will crash my MacBook Pro (bridgeOS, not macOS, panics). Disappearing buses is an edge case that is little-tested on most operating systems.


Hot-plugging Thunderbolt devices is hardly an edge-case on Macs, it's a heavily advertised feature.

I had about weekly kernel panics or machine freezes (would not wake from sleep) while unplugging my Thunderbolt dock (with displays and lots of devices) all throughout the USB-C Intel Mac era, but they all went away when I got an M1 Pro machine, so I wonder how much is down to OS design vs drivers vs hardware (vs how specific hardware influences driver design).


M1 machines don’t have bridgeOS at all, right?


If it can panic, apparently they do have it.

https://en.wikipedia.org/wiki/BridgeOS says it runs the Touch Bar.


The M1 lacks the T-series coprocessor altogether and all its functions are inside the main SoC. Whether that means that bridgeOS still runs on non-architectural cores inside the SoC, or its responsibilities have been rolled into macOS, I have no idea. I do know that only my T2-having MacBook suffers from these panics.


Raises the question whether or not anyone writing this kind of software actually uses it.


Someone should tell the author about the great lshw; it lists full details of the system board (and everything else) without having to go to a GUI display.

  azalp
    description: Desktop Computer
    product: B660M AORUS PRO AX DDR4 (Default string)
    vendor: Gigabyte Technology Co., Ltd.
    version: -CF
    serial: Default string
    width: 64 bits
    capabilities: smbios-3.4.0 dmi-3.4.0 smp vsyscall32
    configuration: boot=normal chassis=desktop family=B660 MB sku=Default string uuid=dotdotdot

People posting those CPU and SSD screenshots to forums, rather than, say, linked data that could contribute to a graph, has such a deleterious effect on people's ability to reason. I suppose eventually we'll end up screen-scraping the screenshots.


Also, dmidecode is very handy.

  # dmidecode 3.4
  Getting SMBIOS data from sysfs.
  SMBIOS 2.7 present.

  Handle 0x0002, DMI type 2, 15 bytes
  Base Board Information
          Manufacturer: Dell Inc.
          Product Name: 02##K5
          Version: A01
          Serial Number: /BGK####/CN#######/
          Asset Tag: Not Specified
          Features:
                  Board is a hosting board
                  Board is replaceable
          Location In Chassis: Not Specified
          Chassis Handle: 0x0003
          Type: Motherboard
          Contained Object Handles: 0
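One nice property of text output like this, versus the screenshots lamented above: it pipes straight into standard tools. A self-contained sketch (the sample data is made up; on a real machine you'd feed it `sudo dmidecode -t baseboard` instead):

```shell
# Made-up sample mimicking dmidecode's baseboard section.
cat > baseboard.txt <<'EOF'
Base Board Information
        Manufacturer: Example Inc.
        Product Name: 0ABC123
        Version: A01
EOF

# Pull out one field; the same one-liner works on live dmidecode output.
awk -F': ' '/Product Name/ {print $2}' baseboard.txt
```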


This. I generally have few problems with the people I work with; one exception is when they start posting screenshots on Slack. If it's some GUI and you can't get a good export, then I understand the frustration - but this will be a screenshot of a terminal.

Why? I mean do you hate the future? Did a month from the present beat you up as a child? Perhaps when you were 10 years old 2 years from now kicked sand in your face and now you're going to punish it. I don't know. But they need to get over it for everyone's sake.


In tech support, I'd frequently get screenshots of a few lines of a terminal window pasted into PowerPoint slides and Word docs — usually via some heavily compressed remote desktop-running-on-another-remote-desktop — with the information I actually needed invariably a few lines higher. My favorite was flash photos of a monitor taken with a non-smartphone from fully airgapped sites, where the turnaround for running a command and getting the output would be at least 5 minutes because the on-site user had to leave the server closet and go down a few halls to get back to a network-connected system, copy the photo off the camera to the computer, and attach it to the ticket.

Eventually I rigged up a tool to automatically pull screenshots from tickets via the ticket system's API, then raise contrast on them with imagemagick and run them through OCR. Some red error text on a black background screencapped through multiple layers of compression might as well not exist, even if it's perfectly readable to the end user, so I'd even done comparisons (that I've since unfortunately lost) of how different command-line OCR tools fared with low-contrast color text, because Tesseract wasn't always the most functional option.


Nice to see hinv finally found its way ;) I agree, those neofetch screenshots are more about desktop GUI and such. They don't cover the hardware enough.


This is prototypical of the standard Linux experience, but I'd like to remark on just how much less common this sort of thing is. Modern Linux and modern Linux distributions have a much larger "just work" factor and it's getting better every year in my experience. Slowly and asymptotically, but it is improvement.

This was driven home recently by my experience switching to NixOS. NixOS is brilliant in many ways, but boy are there papercuts everywhere. It's all soluble but it is annoying. 25 years ago this sort of fiddling was fun, but now it's just tedious.


I too am 100% burnt out from a quarter century of random "Linux evenings". I've concluded that GNU/Linux will always be a great CLI server OS; however, those past 25 years have soured me into calling the Linux desktop a toy. I'm 100% done investing my time solving issues just like this one. The biggest lesson I've learned is simply not to trust it as a daily driver.


But what do you replace it with? Windows also has "Windows Evenings" that I can't pull on my 30 years of experience to fix, and MacOS forces you to do things their way. And I'm a grumpy old man who likes things my way.


I too like things my way, and got sick of having to fiddle to keep things working... I settled on openbsd. I've got two thinkpad laptops (which are well supported), and a vps. I've tried to avoid apps and customizations that are more likely to break/change over time.

Browsers maybe have been the toughest to sort out. I use links for my usual reading, tor-browser for random surfing, seamonkey for sites I log into, and chrome for the couple of sites I need that don't work in seamonkey. All minimally configured, all proxied through my vps, and all using a hosts-file blocklist, except for tor. I had an awesome tweaked version of firefox, but upgrades were more hassle than I wanted to deal with. I use the fvwm window manager included with the base openbsd installation, which can be customized using a single text file. I have productivity software like audacity, claws-mail, gnumeric, krita, and linphone. I've also fiddled with things like shotcut, matrix (over pidgin!), some astrophotography software, and godot. I'm not worried about the latest and greatest games - been there, done that.

It's worked out well. The system gets out of my way and lets me focus on what I'm doing. I've shell-scripted my OS and app/user/website configurations (except claws-mail; those configs are backed up), so I know everything I set up is reproducible, and all my tweaks are documented. I still get to fiddle, I just prioritize stability over time, as opposed to worrying about the bleeding edge.

Prior to this I've used chrome os, debian, freebsd, redhat, and slack over the years. Gave up on windows a long time ago. Chromebooks are low-maintenance, if you don't mind google in your business. Never owned a mac, but don't think a profit-driven os would work for me anymore. My phone is a refurbed pixel 3a running graphene os. ftw! ;-)


Really? I haven't had a show-stopping issue with Linux for over a decade.


As a user, I would just return the device and purchase a reliable USB-based device.

As a Mac user would do?

I guess a Windows user would just rate it [1/5] stars and not even research.

We should keep in mind:

Linux users care a lot and do research. Most others will just say "nope" and leave it as an issue for the manufacturer, or keep turning it off and on again. What I don't know is whether the initial situation is to blame on Linux, Intel (Thunderbolt), the device manufacturer, or all of them. We have a valuable post at the top explaining PCIe Pause on macOS, and that Linux tries to apply a similar mitigation?

Apple:

     Because Thunderbolt allows the addition and removal of arbitrary numbers of peripherals connected in arbitrary topologies, the task of dividing up the PCI tree’s address space can be challenging. Sometimes, particularly when large numbers of devices are attached, it is possible to exhaust portions of that address space. When this happens, a new device cannot be enabled without moving existing devices.

     To solve this problem, OS X v10.9 supports PCIe Pause—a special power management state in which all driver and device operations are temporarily suspended. Whenever address space exhaustion occurs, OS X may ask drivers to pause operations. After the drivers are paused, OS X changes the address space layout of the paused devices to make room for new devices, and then tells the drivers to resume normal operation
Oh.

    If your driver does not explicitly declare support for pausing, your driver will never receive pause requests. As a result, devices may fail to appear when the user plugs in additional devices, particularly on hardware with multiple Thunderbolt ports. For this reason, you are strongly encouraged to support this functionality as soon as possible.
Ouch. On Linux this shouldn't be as big an issue, because the kernel provides the drivers.

From the mentioned Linux patches:

    Currently PCI hotplug works on top of resources which are usually reserved:
    by BIOS, bootloader, firmware, or by the kernel (pci=hpmemsize=XM). These resources are gaps in the address space where BARs of new devices may fit, and extra bus number per port, so bridges can be hot-added. This series aim the BARs problem: it shows the kernel how to redistribute them on the run, so the hotplug becomes predictable and cross-platform.
Fits with what Apple describes. The patch for Linux would be helpful. And I have the feeling that the approach Intel chose, layering TB on top of PCIe, isn't bulletproof?


I wonder if this is why some of my monitors don't come back on after sleep. Is there a way to tell if macOS did the pause via the logs?


I don't think that really fits together. PCIe (and PCI) hotplug has existed for a while, and topology changes aren't new either. ExpressCard, for example, has done this, as has PCMCIA. Older RDMA buses did this too, as do backplane-based industrial PCs, of which there are a really large number.

I suspect that end-user smoothness based on 'the user is not required to know everything' that makes the likes of Apple implement bus pausing for dynamic topology assignment and BAR adjustment is not available in Linux land because there simply isn't a big enough overlap of people to make this a hot topic.

You need:

  1. A person who understands how this works
  2. A person who understands what they want to use it for
  3. A person who understands what person 1 has to do so person 2 can use it
Usually you get 1 or 2, sometimes both, but almost never 3 except at places like Canonical, RedHat, SuSE etc. because it's too much of an analyst role and not enough of a "I need it for myself and I can build it" role.

Similar but different problems exist in other software areas like when person "3" is split in "business", "end-user" and "licensing" with competing interests. That's where you get the NT kernel which gets split into arbitrary partitions where based on an integer configuration it may or may not want to address your RAM.

Same goes for the old "aperture size" for GPU memory transfers, and later on the BAR resize support. It was never 'hard' to implement, it's just that IBVs didn't bother and mainboard manufacturers didn't care. Yet it was always available, and even Tianocore EDK2, Apple's own EFI and (for some reason) BIOS and UEFI from Quanta and Supermicro all supported it just fine. Same goes for KMS and non-blink GPU switchovers, where AMD, NVDA and Intel used to constantly sell it as 'impossible' and we all just accepted that. Yet KMS, the MobileFramebuffer (and the old AppleTV Gen 1) and even VesaFB showed that it's totally possible, and it's just everyone using the same joke sample implementation from the vendor that's causing it.

Another example would be VESA, where DisplayPort topology changes on the control channel side do similar things to PCIe bus pausing. The display controllers and bus drivers should pause on hot plug to let the host decide on the new topology, but implementing that costs time and effort, and you need some in-depth knowledge on both the hardware and software side, so lots of companies don't bother. Result: some host+display combinations only work after restarting either end to force it to re-discover the current topology. This even happens in 1:1 topology scenarios, where a simple GPU driver update might restart the host bus and the display ignores it and simply stops receiving data until the watchdog timer restarts the embedded processor, causing the screen to blink. It's just dumb low-quality choices and corner-cutting that cause this.


Been hearing “modern Linux is much better” since the 00s


When you're comparing against Linux from 1995, it's not a high bar to hurdle.


In my experience this is true and Linux has come a long way. But given how far it has yet to go it may continue to be true for the next couple decades as well.


The thing is... you found a solution, and if you want to understand it you can always look at the kernel docs. When a similar thing happens in Windows, you are just done... Not only do you have a very slim chance of avoiding a full reinstall, you have even less chance of understanding what is happening (i.e., probably an intern committed something to some closed-source driver that will be silently fixed some day).


If you are someone who can read and understand the kernel doc then I agree that you can be more efficient in Linux.

For the vast majority of the population the response to errors is identical on both platforms: reboot, internet searches, learn to live with it, or reinstall and hope for the best.


Similar things rarely happen in Windows because Windows has a stable kernel ABI and developers can, do, and are incentivized to supply drivers for their hardware.


I've been trying for a week now to get Windows to properly suspend and stay suspended on a Thinkpad X1. Even after changing the BIOS to "Linux" mode (S3 Sleep), fiddling with drivers and the registry, and issuing powercfg commands it's still flaky. The function keys to change audio settings also work poorly sometimes. These are the kinds of issues we used to have when trying to bring up Linux on a laptop. But now it's flipped. Linux on Thinkpads works great for me out of the box and I even get to set it up just like I want it (e.g., tiling WM).

For my own use I'd have just given up and installed Linux on this machine as well but I have to dogfood this for everyone else in the company. I'm starting to try out using web versions of everything, particularly Microsoft Office, in the hope that ChromeOS is actually a hope for a usable desktop for unsophisticated corporate users.


Yeah, I manage Windows and Linux computers at the same time, and Windows machines have their fair share of WTFs too. Update is horrible, for example: I had two cases over the years where it just borked itself, on systems that had the least amount of tinkering possible - one was even a preinstall by the manufacturer. After many hours of struggle, I found and downloaded the Windows Update Troubleshooter, which fixed the first PC in short order. But the second time I had this issue, even the troubleshooter gave up, and the system just wouldn't update. Tough luck. Linux at least doesn't treat me like a child that broke a vase in the living room.


I had major troubles with Linux suspend on a 2021 T14 - not a cheap system - and did not get past them. Moved to an X1, no problems. A colleague has the T14, still running Windows.


Check the latest video from Linus Tech Tips on YouTube; they did a deep dive into the non-working modern sleep on Windows.

The conclusion was, as you've already tried, change sleep setting in bios to "Linux-mode", or just get a mac.


>This trick is unlikely to be useful again.

Ahh, but now others can benefit from your work :) It's a system or a community. The wizards hand off to the communicators and enable more users.

>In the meantime, the best I can think of is to pay for my distro, report bugs, and email manufacturers for Linux support.

Hell yeah! and keep posting technical solutions. Even the little things you do like keeping your site very readable, posting the text instead of screenshots so it's searchable, etc. all help.

>dkozel and your kind, whoever you are, wherever you are, and whatever you are doing right now, you are legend.

You too, man. Remember Gandhi's quote: “Whatever you do will be insignificant, but it is very important that you do it.”


This post resonates strongly with me. I love the term "a linux evening." This was precisely my experience when I used Linux full time: mostly it worked great, but then occasionally something wouldn't work (some personal examples: touchpad doesn't work after OS update, wifi card stops working etc.) and then I have to spend a few frustrating hours debugging the issue. All I can think in these moments is "you don't get this time back. Is this really how I want to spend three precious hours of my life, when, if I used a different platform, I could avoid this hassle completely?"

I know it's a tradeoff and I sacrifice a lot to live in my current Macintosh rut, but I just don't have the motivation to be my own DIY tech support wiz after a full day on computers for work.

EDIT: as pointed out by others, one headline takeaway from this is that the author was able to fix the problem at all, which s/he may not have been able to do on Mac/Windows (though it's much less likely the issue would occur in the first place).


> if I used a different platform, I could avoid this hassle completely?

Where is this mythical platform?

If something like this happened on Windows, you'd just end up clicking around arbitrary settings, googling desperately, and rebooting over and over again, until you reinstalled the entire OS. That option is also available with Linux.


I was reminded of this XKCD:

https://xkcd.com/349/


Couldn't agree more with the takeaways. Especially #3, and it is exacerbated if you are a finicky person who wants everything exactly the way you want it and loves to customize and tinker. Linux basically keeps me hooked by catering to those needs, even though I have to deal with these Linux evenings (which are thankfully getting rarer over time). Just in time, too, because my tolerance is depleting as I age and I have decreasing bandwidth to deal with all that. It's also the reason I always dual-boot with Windows.

I also have a somewhat interesting use case that doesn't let me switch to a Mac (not that I am keen to). If I buy a Mac, I will have to switch my phone to an iPhone too, because integration with Android isn't nearly as good. However, I _must_ have a dual-SIM phone (which is fairly easy with Android) and there aren't any dual sim models of iPhone.


> and there aren't any dual sim models of iPhone.

Aren't phones with an eSIM + regular SIM effectively dual-SIM? I'm pretty sure iphones have those.


I cannot recall the specifics but I remember that there were some issues with eSIMs for certain use cases (possible that those were teething troubles that have been resolved).

Also, I like the flexibility of using the physical SIM because I can easily use them with both smart and feature phones without issues.


There are also real dual-SIM iPhones, but they're market-restricted (typically to China).

Docs for SIM + eSIM: https://support.apple.com/en-gb/guide/iphone/iph9c5776d3c/io...

Docs for dual physical SIM: https://support.apple.com/en-us/HT209086


I used to have a dual physical SIM iPhone X from China! However, when I sent it in for (warranty) repair, a local Apple repair shop screwed up and replaced it with a North American model. After much gnashing of teeth with Apple Support, I wound up with a new iPhone 12…but no dual physical SIM.

Now my phone is still confused after restoring from backup and cannot see the eSIM. I’ve been filing feedbacks and radars to no avail. If I was on Android I’d have just rooted it and deleted the offending (mis)configuration file already!


Not sure if this would help, but I recently learned about a product called iMazing[1] that lets you browse and edit iTunes backups.

You can't access the whole filesystem of a running iDevice with it, but you can access the whole filesystem of the backup. So I suspect it might be possible for you to clean-up the backup offline & then restore it. (At a minimum it should enable you to extract the valuable contents out of the backup and manually restore it on top of a clean iOS install.)

I've used it with some success to extract info from the photos database. I just wish they had a consumer-facing (read: more affordable/perpetually licensed) version of their CLI tool.

[1]: https://imazing.com/


Yep, my SE2020 is like that, and was sold in the US


What kind of android integration do you require, out of curiosity?

I use iPhone + Mac, but I don’t remember any hiccups when it was android + Mac. Most of my interaction between the devices is pretty indirect anyway (photos are shared through google cloud, etc).


The main one I have is Syncthing[0]. I have a fairly involved file sync setup across several Android and Linux devices and it works beautifully; I like that I can use the space available on my devices instead of having to depend on/pay for cloud storage (I know it is not much for most of us). However, to the best of my knowledge, Syncthing doesn't run on the iPhone (it is possible that jailbreaking etc. can make it work, but that is a whole different area).

[0]: https://syncthing.net/


I use stock iOS, and Mobius Sync works pretty well. No issues so far over 1y+ of use.


Thanks - I'll check it out.


I’m a proud and happy Linux desktop user and rarely have significant issues like the one described here. I attribute that to using a “boring” distro (latest Ubuntu LTS’s only), and “boring” hardware (a Dell XPS 13 which is known to have good Ubuntu support). I just don’t have the spare hours to throw at getting basic system functionality to work, so I make software/hardware choices accordingly.

Occasionally I will have a Linux Evening, and I attribute it to the fact that I didn’t pay for my OS and can therefore only expect so much. After all, you only get what you pay for.

That being said, I will never leave Linux because using an open source operating system is completely worth the occasional hassles it brings me.


I also have an XPS 13; it is quite good, and I never shut down/reboot it. I only put it in sleep mode when needed. For months.

There is still a recurring issue I encounter that is similar to the one reported by Fabien Sanglard. It is with a small USB-C to USB 3 Thunderbolt accessory. It works perfectly, and then suddenly, after months, it will not work anymore with anything plugged into it, for no apparent reason.

In that case, in the syslog I can also see that the "memory space" is exhausted, with something about "BAR", and it looks like it is not able to allocate ports anymore.

Before, in such a case, I was forced to reboot, which was a lot of frustration for me: losing all my contexts, and most often at the wrong time, when you are in a hurry.

But recently, in a Linux Evening, I found a command that "magically" resolves my issue when it happens, without having to reboot:

  for i in /sys/bus/pci/drivers/[uoex]hci_hcd/*:*; do
    [ -e "$i" ] || continue
    echo "${i##*/}" > "${i%/*}/unbind"
    echo "${i##*/}" > "${i%/*}/bind"
  done
This unbinds and rebinds all the internal USB host controllers (USB 2/USB 3/...), freeing the ports that were allocated (leaked?) for them and then resetting them. Then everything works correctly again.

Even if it is not great to have this issue, this is a case where I like Linux: there is always a way to fix your issues without having to reboot.


Yes, sometimes you land in these situations in Linux, but many times they are self-inflicted by unnecessary tinkering. I cannot see how dealing with all the … unwanted things from Windows can ever be better. With a Mac, if it is too new it will not be supported, and after a few years you need to upgrade your hardware or are left to die. Apart from it being an exclusive system for the rich.


The nail in the coffin for me came yesterday, when I restarted my computer to boot into Windows and it showed a splash screen begging me to "upgrade" to Windows 11. I clicked no; another splash screen begged me in other terms. I clicked "no" (actually, no, it was probably more like "some other time" or "remind me again in the future"), and ANOTHER SPLASH SCREEN.

Mind you, I have my GRUB set to boot into Windows by default so that if there's a power outage, my game streaming and backup setup, which is on my Windows partition for now, will relaunch if I'm traveling or something. Yet if it had booted into that stupid splash screen, Steam etc. wouldn't have been able to launch, and I'd have been SOL!

I hate Windows!!!!! I can't wait to get EVERYTHING onto Linux. I have a todo to look into Proton compatibility for desktop gaming; it works great on the Steam Deck. After that, the only remaining thing is Ableton, and I'll have been completely freed from Windows forever!


This is why I run Fedora Silverblue. My Linux is now an appliance that upgrades atomically. If something breaks, I rollback. Once in a while, I pin a version that is known to work so it never gets garbage collected.

The other day I changed /etc/ld.so.conf and the system would not boot (my fault). I chose the previous deployment in GRUB and had a working system back.


NixOS feels the same but somehow... I still end up in those Linux evenings but for different reasons...


MX here. Rock stable for my purposes. I have one Windows system for some proprietary stuff I need. Other than that it's been Linux for about 10 years now.


> I cannot see why dealing with all the … unwanted things from windows can ever be better.

Because the amount of time spent on unwanted Windows things is radically less than the amount of time spent on Linux evenings.

Windows does by and large just work. Every OS has warts and bugs. I want to spend the absolute minimum time on them.

Linux is getting better and may not require too many Linux evenings these days. But that’s really only true if you’ve burned hundreds of hours dealing with increasingly obscure Linux errors.

It’s not too hard to understand why different people prefer Linux, Mac, or even Windows. They all have pros and cons.


A lot of my problems are from tinkering as well, but that's why I am using Linux: to control it better and tinker with it. Why wouldn't I just use macOS if I wanted something I can't tinker with? I think people lose sight of why Linux even exists to begin with; it is all about user control and the ability to tinker with/hack it.

I think better UX and more configurability (from a friendly interface) would solve this stuff. The reason desktop/distro people won't do that is that it is considered bad UX practice to give the user too many decisions. But IMO a Linux user would be an exception. In an ideal world, whoever decided to disable the boot flag in this case should have provided an easy-to-access distro/DE setting to flip it back on.


Yes, a lot of the times when I have «Linux problems» it is because I am installing it on hardware that was not made for it. For example, old Macs; but usually after an evening I get decent results. I have a few old Macs with Linux that work nicely. :)


Hi folks. :)

I'm so glad that my hit-and-run post has been so useful. After seeing Fabien's blog post I did a quick search, and it turns out the solution has spread fairly broadly to other forums. My choice of 0x33 was arbitrary, so it makes a nice canary for watching it spread. I'm thrilled. Sharing experiences and solutions is so essential to learning. I've benefited enormously from the generosity of open source developers and communities and from individuals documenting pieces of their projects; glad to have raised the ocean a little in return.

My use case was (and remains) having a Xilinx Artix-7 FPGA in an external Thunderbolt 3 enclosure for testing the development of DSP accelerators using open source tooling. I didn't want the FPGA board inside the PC: partly to be able to swap it to my laptop easily, partly because it produces a lot of heat, and so that when I misuse the PCIe soft core (LitePCIe: https://github.com/enjoy-digital/litepcie/) it doesn't take down the OS. Being able to reload the FPGA and effectively hotplug the device has been very helpful.

Since I knew my issue was around hotplugging I searched for information around PCIe hotplugging and I think (it was two years ago...) that I found the answer from one of these two threads. Both mention the option of reserving PCIe addresses for hotplug busses as a workaround, and a workaround was all I needed.

https://www.spinics.net/lists/linux-pci/msg64841.html

https://review.coreboot.org/c/coreboot/+/35946

dmesg and the various kernel logs are my first stop for any odd behavior on Linux. Especially with any state change to a device (plugging in, turning on, removing, reconfiguring etc) the kernel logs tend to give invaluable info.

I had already been looking at eGPU forums to choose the Thunderbolt 3 enclosure (ended up with the ORI-SCM2T3-G40-GY) and there were various discussions of hotplugging issues there, but I don't think I found the specific kernel options to fix it there.

Check out this docs page for the kernel parameters: https://docs.kernel.org/admin-guide/kernel-parameters.html

For the string "pci=realloc,assign-busses,hpbussize=0x33", the kernel documentation says:

`realloc`: Enable/disable reallocating PCI bridge resources if allocations done by BIOS are too small to accommodate resources required by all child devices.

`assign-busses` [X86]: Always assign all PCI bus numbers ourselves, overriding whatever the firmware may have done.

`hpbussize=nn`: The minimum amount of additional bus numbers reserved for buses below a hotplug bridge. Default is 1.

0x33 (decimal 51) is arbitrary, but large enough that I was unlikely to ever exhaust that range even with a large number of devices chained together on the Thunderbolt bus. I think (though I'd have to check) that the address space does get exhausted by multiple hotplug cycles. I haven't hit that issue since I shut down the computer close to daily to save power.
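If you're curious where those reserved bus numbers actually went, the kernel exposes each bridge's forwarded range in sysfs. A small sketch (my addition, not from the thread; `secondary_bus_number` and `subordinate_bus_number` are real sysfs attributes for PCI bridges on reasonably recent kernels):

```shell
#!/bin/sh
# Sketch: print the bus-number range each PCI bridge forwards
# (secondary..subordinate). A hotplug bridge with a conspicuously wide
# range is where the hpbussize headroom ends up. The directory argument
# makes this testable; in practice pass the real sysfs path.
bus_ranges() {
  for dev in "$1"/*; do
    [ -e "$dev/secondary_bus_number" ] || continue   # skip non-bridges
    printf '%s: busses %s-%s\n' "${dev##*/}" \
      "$(cat "$dev/secondary_bus_number")" \
      "$(cat "$dev/subordinate_bus_number")"
  done
}

# usage: bus_ranges /sys/bus/pci/devices
```

On a machine booted with hpbussize=0x33, the hotplug bridges should show ranges roughly 51 busses wide.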

Sadly, there's no Segway; those things are expensive and I have a long wish list before reaching that point. Currently I'm saving up for a microscope; I have a large box of RF integrated circuits that I'd like to do some show-and-tell with on Twitch. :) Also no unmarked 40% keyboard; RSI means I'm a total fanboy of the Microsoft Ergonomic keyboard and the Anker vertical mouse. Cheap and so comfortable. I have done some kernel compiling, mostly to learn more about kernel modules, as I've been trying to ease the learning curve and user experience of experimenting with FPGAs over PCIe. If anyone has experience with DKMS and creating debs, I'd welcome a chance to chat. Ditto if there's a Debian maintainer with experience packaging kernel modules; I made some headway a while back on repackaging linux-gpib but got stalled on a few details of maintaining patches against upstream.

Cheers and Happy Holidays!


The internet is amazing! The way it allows people to connect, help each other and sometimes also reconnect is heartwarming.

If you have a PayPal account, I will gladly contribute to your microscope fund!

Thank you again, and happy end of year!


zxmth here. I'm glad to finally know that 0x33 was arbitrary :P I'm also glad that old thread on Level1 has sparked this conversation and connection around a shared Linux challenge. Thanks fabiensanglard and dkozel! Happy hot-plugging to you both!


Thanks Fabien!

Also I love all your retro work. I have a NeXTStation Turbo Color that's my retro pride and joy. Have you seen or been involved in any of the Demoscene activities? It's amazing the graphics that people are able to squeeze and abuse out of older hardware.

Off topic, but folks might enjoy Poems for Bugs, a talk where Linus Åkesson talks about cycle-accurate coding to exploit video-chip bugs on the C64. https://www.linusakesson.net/programming/poems-for-bugs/inde...


This has got to be one of the greatest exchanges on the internet, ever! Kudos to you both! And thanks to HN for connecting all of us.


Heartwarming, thank you so much for sharing this experience with us. :)


> dkozel and your kind, whoever you are, wherever you are, and whatever you are doing right now, you are legend.

Both of you are :-)

Heart=warmed. Happy holidays to both of you!


Hi dkozel, your post prompted me to look into the Microsoft Ergonomic keyboard and Anker vertical mouse as I do often feel a tiredness in the lower arms and shoulders, which I haven’t paid attention to so far, but will now.

What does RSI mean?


Repetitive Strain Injury, which is an umbrella term for a ton of different conditions but they generally involve pain from doing a certain activity too much or unergonomically


I had a similar experience recently. Updated my Manjaro setup, rebooted, and was greeted with text mode. X just didn't load. After an hour of reading logfiles and pondering why the kernel module wouldn't load, it turned out I needed to add an obscure boot parameter (I think it was something like "ibp=off") to my kernel command line to make the nvidia module load again.

Easy solution if you know how to solve it, 1 hour searching for that solution, and about 0% chance of that knowledge ever being useful again in the future.

In the end I was still satisfied because at least I learned something (no matter if it's useful or not), and I was under no pressure to get that particular Linux system working again (I could still dual-boot Windows for example).


I had a similar experience with a workplace machine. It worked flawlessly. But sometimes I'd go make myself coffee or go to the bathroom, and when I came back, the system just never woke up from sleep.

I tried setting the machine to sleep and manually waking it up. No problem.

Later, I spent some time watching a long YouTube video, then I opened the terminal and the OS crashed.

Dmesg? It showed nothing

But sometimes, when I launched the terminal and ran htop, the OS hung. I had the feeling it was something HD-related.

I restarted the machine, updated the kernel, asked the forums, everything. No answer.

It was my work machine. They let me install Linux, but I couldn't spend so much time on the OS; I needed to get work done. Otherwise I'd need to bail and go back to Windows (ugh).

I was about to delete Linux from the laptop, but I had an idea: the machine was a Thinkpad T1. I looked on the Arch Wiki for compatibility with that laptop; if the problem came from a hardware incompatibility, it would surely show up there. The machine showed up in a list, and according to the wiki there was nothing related to a crash or a freeze.

Then I checked with lshw whether any piece of hardware was different from what the ThinkPad has by default.

The result? A Kingston 1TB SSD. I googled "Linux Kingston ssd lock up", "Linux Kingston ssd crash".

It turns out some Kingston SSDs don't support all the deep power-saving sleep states. I needed to switch some off with a kernel parameter.

I never had any problems with that machine again.
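The parent doesn't name the exact parameter, so treat the following as a guess at the class of fix rather than the actual one. Both options below are real kernel parameters commonly cited for SSD power-state problems; whether either is what was used here is my assumption:

```shell
# Hypothetical /etc/default/grub edit; the parent's actual parameter is
# not named in the comment.

# NVMe drives: forbid autonomous transitions into the deepest power states:
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash nvme_core.default_ps_max_latency_us=0"

# SATA drives: a more conservative AHCI link power management policy instead:
#   ahci.mobile_lpm_policy=1

# Then regenerate the GRUB config and reboot:
#   sudo update-grub                              (Debian/Ubuntu)
#   sudo grub-mkconfig -o /boot/grub/grub.cfg     (Arch and others)
```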


This is one of the benefits of being "obscure in only one area at a time" - just like you shouldn't break more than one law at a time, heh.

Linux is a relatively obscure OS, but you were on popular hardware with a popular SSD which helped increase the chance that someone else had run into a similar issue.


I had this issue after I built my first new machine in nearly 8 years.

I forget what I googled, but I essentially found a forum post almost immediately that explained the reason for the ibp=off, something to do with an Intel security issue, caching, and the actual likelihood of it being a security risk on a personal machine. The conclusion was basically "it's enabled by default because it's a good idea, but not really needed for most use cases". Away I went disabling it, somewhat understanding why it was there.

But I've also been an Arch user for years, and one thing I'm accustomed to is that when the bleeding edge breaks something, other bleeding-edge users are already discussing it. The overall community documentation on issues is exhaustive. If you have an obscure issue in Windows and search the error, you get Windows support forums with official support asking you to click the "try to fix it pwease" button, and no other solution. Half the time the only fix is a reinstall.

I ended up building a new computer because my old machine's Windows install got itself into a state where it would fail to log in after an update, yet it constantly tried to reapply that update after I'd reverted it. No solutions anywhere online, no way to stop Windows from trying to apply the ill-fated update. Just a broken install. I figured if I was going to spend hours reinstalling and reconfiguring Windows, I might as well get new hardware.


These days, if it only takes me an hour to wade through all the reinstall/switch-distro garbage to fix some obscure nonsense problem, I count it as a win.


For takeaway 2, I've found using NixOS lately to be good at keeping me disciplined about documenting the odd little hacks I've added to fix an issue. In a case like this one, I'd have to add something to my boot.loader.grub.extraConfig setting, comment it, and document it with a hopefully-useful commit message, so that if I encounter something similar in the future I can at least look at those bits of documentation and hopefully connect the two.


+1 on this. I used to consider those Linux evenings a waste of time. But with every bit of NixOS config tracked in git with some history and context, I now think of it as an investment for the next 50 years, which has a much better chance of paying off.


I don't use NixOS but I have a similar documenting approach to all my OS and $HOME config files. Every customized file / override is checked in to a git repo, and there's an `install.sh` that takes each such file / directory and copies / symlinks it from the repo to its target location. Basically the same concept as dotfiles, but also for system files. I've restored a couple of machines from OS reinstall to full readiness this way.
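The core of such an install.sh can be tiny. Here's a minimal sketch of the idea as described (the home/ layout, the function name, and the symlink-only behavior are my assumptions, not the parent's actual script):

```shell
#!/bin/sh
# Sketch: every tracked file lives in the repo under home/, mirroring its
# path relative to $HOME; install_links symlinks each one into place.
install_links() {
  repo=$1 target=$2
  find "$repo/home" -type f | while IFS= read -r f; do
    rel="${f#"$repo"/home/}"                 # path relative to home/
    mkdir -p "$target/$(dirname "$rel")"     # create parent dirs as needed
    ln -sfn "$f" "$target/$rel"              # -n replaces stale links cleanly
  done
}

# usage: install_links /path/to/dotfiles-repo "$HOME"
```

System files (e.g. under /etc) are usually better copied than symlinked, since a symlink into a user-owned repo can be fragile at boot; presumably that's why the parent mentions both copying and symlinking.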


I used to hear "Linux is only free if you don't value your time." - this article seems to validate that this is still the case. Am I getting the wrong impression?


I'd say it's a bit biased.

Personally, I've been riding the Linux struggle bus for years in my personal time. It really hasn't been a struggle, but the weird issues I've dealt with during my "Linux evenings" have made obscure struggles at my day job far easier to debug and deal with, and overall have made me a better engineer, because I know how my tools work and can use them more effectively.

It's a whole lot more fun learning about kernel drivers, systemd, networking drivers, etc. when you're personally invested in tweaking your machine just right. I'd much rather deal with a Linux issue for an hour than spend an hour on LeetCode.


It really depends upon what you are doing with your machine.

If you're dealing with new hardware, situations like the one described in the article may pop up, since vendors focus their testing on Windows (and possibly macOS), so there is a good chance that it will work on the commercial operating systems but not the open source ones. That is going to be especially true for more exotic hardware, since there will also be less testing by the open source community.

On the other hand, I find managing a Linux system significantly more time efficient. Some of it is because of the nature of open source. For example: installing, updating, and removing software is much easier since the licensing model allows distribution maintainers to create a universal software management tool. In other cases, it is simply because of the approaches typically taken by open source developers. For example: it is usually easy to copy configuration files from one system to another.


I've had linux evenings, windows evenings, macOS evenings, hardware evenings, software evenings, house evenings.

Anything you use has a chance of failure and on the whole I feel Linux has taken less time to do what I want to do with it than other similar options.

Though I started using macOS at home because I got tired of dealing with Linux desktop shit, so take that as you will.

> “If you’ve ever read a mystery story you know that a detective never works so hard as when he’s on vacation. He’s like the postman who goes for a long walk on his day off.”

I got tired of taking the long walks.


Yes, every OS has problems. The only difference is that Linux lets you debug them.


100%. A similarly snarky corollary could be that Windows is not free, and Microsoft still shafts you regardless. From what I gather, if you'd like the least sucky UX while not leaving the designated path, Apple is the way to go. Otherwise, you're going to have "X Evenings". And even Apple land, despite the fame, is not without its weird incompatibilities, dongles, random issues and such.

To me, a lot of the frustrations boiled down to familiarity. I used to be touchy around Linux because it was weird to me all the time, switching after a decade-plus of Windows experience. Whatever broke with Linux was a Linux problem, and whatever broke in Windows was a stupid computer problem. Do you see the bias? Now that I have the user experience, Linux issues are familiar, and whenever there's anything out of place, I know where to look and what to search. And Windows, with its penchant for reorganizing the Control Panel every release, seems like a maze to me, where I get more frustrated at every turn.

At the end of the day, it's pick your poison really.


“You get what you pay for.” has always been my reaction to Linux issues.


1 - I wonder what OS do you use.

2 - Have you actually tried Linux?


I have. It's still inferior to Windows and it's trying to be a discount version of MacOS. And it's doing a bad job at it.


I don't know about that. I use xmonad on linux and it completely dominates anything windows could ever try to bring to the table.


I hate to admit it but GNOME is legitimately a macOS ripoff. I love it, and it's great, but it's basically trying to be macOS.


Inferior how?

My KDE Plasma setup is nothing like MacOS.


So it would seem you haven't used Linux recently if you consider it inferior to Windows.


I use it every day. It's beyond inferior. The terminal is still needed for a lot of things that should have a GUI. Gaming support is still inferior. Applications that only exist for Windows and/or macOS. Linux had its chance on the desktop, especially during the Windows 8 trash era. It blew it. It's game over.


I am afraid your comment does not encourage discussion. You could try justifying your assertions; otherwise we are just talking about feelings. I find the open source world gives me hope for humanity, as opposed to the sensation I get from Microsoft or Apple. I find learning the command line rewarding; that is the strength of Linux, IMO.


Been using Arch as my first and only *nix for about 4 years now; most of the pain points have been smoothed out. Usually you can find answers by searching whatever errors you see or checking dmesg. If you're a CS person, it really isn't hard to solve and reason about most issues; it's just normal debugging / docs / forums. In the event that you're doing something weird and niche, or you really can't find your info, you might reach out to the community, and it's likely some Linux wizard will solve your issue, like in this post.

I was much more frustrated on Windows, where when I ran into issues my best bet was some incompetent support, watching blue screens until coming to the conclusion that the system was scrapped, who knows why, and needed to be reinstalled from scratch.

On Arch you can save your dotfiles, make snapshots, chroot to fix things, and so on; there are tons of safeguards you can put in place and tons of options for remediation. You really just need to RTM.

On top of that if you have a gripe about some software not working how you think it should, you can usually hack it or replace it. If you have an idea for some new functionality you want you can usually find/install some existing software for your needs in like 2 seconds, or you can find the repo of your software and submit an issue or PR.

Alternatively you can also just string together existing tools in a script in order to fulfill your needs. I did this recently to create an OCR screenshot to clipboard tool using tesseract and flameshot, something I wouldn't have even considered possible on windows.


If OP used Windows exclusively, he'd have Windows evenings. There are tools that people complain about, and tools that people don't use.


Yeah. Anecdotally I once spent a "Windows Evening" trying to figure out why a game would freeze at the controls remapping menu. Found out that Windows' own game controllers menu was experiencing the same freeze. I went so far as to reinstall Windows entirely, only to find that the issue wasn't resolved. Started disconnecting devices out of desperation, only to find out that the issue was caused by the driver of a USB DAC I had connected...

Computers suck, in general. I have a hard time seeing attempts to lay the blame on Windows or Linux as anything but an excuse to start a flame war.


They really are a mess if you dig a bit deeper. Many OS issues are actually even bigger hardware issues, which the OS fails to compensate for.

My gripe with Windows is usually not with the operating system itself, but with the direction it's headed as a product. I also don't like it when people say that Linux is so tinker-y compared to Windows, which "just works". A system that "just works", in my experience, is a system the user is used to. Familiarity makes the familiar issues seem trivial and the unfamiliar ones insurmountable, which hardly results in a fair comparison.


Windows just works without a hassle far more often than any linux distro I've installed.


My experience is different, my systems usually work, but there have been quite a few head-scratchers with both Linux and Windows.


My experience is precisely the opposite.


This reasoning implies that every OS is equally buggy in ways that impact usability. I think this is likely untrue.

I don’t think it really matters if windows bugs exist, but how often they occur and impact use of the tool.


That implication would be a bad one, but in the specific case of a bland Linux distro (let's say Debian) vs. Windows, I haven't ever seen much difference in the number of problems you run into.

The major difference for me is that Linux problems are always theoretically operator-fixable i.e. the information needed to fix your problem is available to you, even if you're not technical enough to understand it. The means to fix Windows problems are frequently not accessible at all, anywhere.

While I've had plenty of Linux evenings, I've had stuff broken on Windows that stretched into weeks until I finally gave up and reinstalled.


I agree. I don't know if it's down to luck, but I have had a very positive experience with Linux, even on random notebooks, and I have tried a good number of distros too. I also have extensive user experience with Windows. Currently, in my head, they are on par. There's stupid shit in both.


I had a similar problem on Windows, where it suddenly refused to work with any of my USB-C ports. Linux could still detect them, however. I did manage to get it working by doing a BIOS update on my motherboard, since I noticed that the BIOS update notes mentioned "Improved USB compatibility". The weird thing is it worked in the past, but I guess some Windows update changed something somewhere.

I think all OSs can experience these kinds of weird problems from time to time.


> When I emerge from these "Linux evenings", I wonder if the real problem is my attitude.

It isn't. I mostly use Linux and sometimes (work) have to use windows.

The exact same thing happens with windows if you try to do anything remotely techy on it.

The difference though is that "Windows evenings" often end in failure because at some point you hit the wall of a binary blob whose source code isn't online.


I really appreciate the write-up, as I have had many 'Linux evenings' myself, and it always gets on my nerves when some members of the community act as if Linux 'just works' for almost all users. Worth noting I love Linux, and am a happy user, but I have had quite a few instances of relying on a more experienced person having previously run into the issue, or else I would still be stuck with those problems.


It does just work for most users. They don’t get online and post 2000 word blog entries about their successfully completing their work for the day with no hiccups.



Lol beat me to it :-) Glad someone was inspired!


Hey, I'd love to read yours. Please write it!


This has generated a rabbit hole of epic proportions. My personal list of “I’ll get around to that someday…” Here I am installing nginx on my server in my basement…brb. I have to go finish that Symfony build of my personal site. I’ll be a minute or two.


About take away #2:

> I spent several hours fixing a problem and I learned next to nothing in the process. This trick is unlikely to be useful again. By the time I encounter something similar, it is likely I will have forgotten about the solution.

This is why it is important to keep a personal knowledge base/documentation. Mine is full of things I'll probably never encounter again, but I know it may be useful one day or another.


Also, it's not like OP hasn't learned about kernel parameters. Often it's not just the discrete knowledge itself; the way of arriving at it is just as useful.


I would title the article "A thunderbolt evening". I haven't had great experiences with Thunderbolt even on Windows. ^_^


Many people don't realize thunderbolt is an incredibly low-level and complex protocol/setup. It's not like USB or firewire, it's an entire additional PCI bus thing.

It's surprising it works anywhere near as well as it does.
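The PCIe nature of Thunderbolt is easy to see from userspace. A minimal sketch (guarded so it's safe on machines without pciutils; the `pci=hpbussize` value is illustrative, check Documentation/admin-guide/kernel-parameters.txt before using it):

```shell
# A Thunderbolt dock shows up as a PCIe switch: one upstream port plus
# downstream ports, each consuming a bus number. The tree view makes
# the extra bridges visible.
if command -v lspci >/dev/null 2>&1; then
    lspci -tv            # topology tree: bridges and the devices behind them
else
    echo "pciutils not installed"
fi

# To reserve more bus numbers per hotplug bridge at boot (illustrative value):
#   pci=hpbussize=0x10
```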


My last (Arch btw) Linux Evening was my laptop suddenly not booting after an update. I thought it must've been me, so I booted into the install media and reinstalled and reconfigured grub in a way that I knew would fix it...except it didn't.

I then learned that GRUB had a bug introduced in an update that completely borked it for my system. So rather than deal with it I finally used it as the excuse I needed to ditch GRUB for systemd-boot, which if anything is actually simpler.


I have had my share of Linux Evenings. They built my confidence when it comes to troubleshooting system stuff, and they eroded my confidence in Linux for a while. Things are incredibly stable these days compared to even half a decade ago.


The good thing is after an evening you usually find a solution.

While on some proprietary systems the solution is usually "you are fucked, deal with it".

Having said that, it has been literally years since I had this kind of problem. Checking whether people have issues with devices on Linux distros before buying actually works really well. I just don't buy the things that don't work. Kind of like checking the back of the box to see if it is <insert OS of choice> compatible.


Which do you think is more clear? Half a decade ago? Or, 5 years ago?


60 months ago.


This is kind of the reason I switched to Mac. I used Ubuntu throughout my CSE degree and for a year after. But the system bricking at a random upgrade and needing a fix was a no-go for me at work. I understand some people do not face this issue.

But whenever I wanted to try out something new or install anything, some random stuff would go wrong and need hours of searching and fixing. That happens much less often with macOS.


That is quite a vague statement to make. Linux lets you try/change a lot more of its internals than macOS, so you can definitely do more damage to your system on Linux.

I don’t see how trying simple things like “apt install go” would make your PC stop working (unless the hardware is faulty).


Sounds like you had faulty hardware.


I'm generally an Apple hardware guy + console gamer so I don't have to have linux evenings like this. I used to enjoy the tinkering, but now I have Linux days in the Cloud and don't feel the need to be a computer therapist at night.

On the other hand I just bought an Intel NUC to turn into a NAS and am planning to try out NixOS. I think I'm going to have a lot of evenings..


Take away 0:

Good error messages are important pointers. Make sure you include good, unique, descriptive errors in software you write.

When you figure out a solution, post it online with the error message so others can search it to make the connection and get to the solution more easily.


Been there... Especially when I was in high school with older Ubuntu flavors and trying to dual boot Windows. Booting constantly got fucked up.


Nvidia drivers, signed kernel modules, and Nvidia signed kernel modules are my own most common sources of 'Linux evenings.' Secure Boot is quite workable on Linux until you need to reach for these; then it becomes a headache.
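For what it's worth, the usual workaround is enrolling your own Machine Owner Key and signing the out-of-tree modules yourself. A hedged sketch of the Ubuntu/Debian-style flow (the sign-file path and module location are illustrative; mokutil enrollment finishes in MokManager on the next boot):

```shell
# Generate a long-lived key pair for module signing (DER certificate for MOK).
openssl req -new -x509 -newkey rsa:2048 -nodes -days 36500 \
    -subj "/CN=Local module signing/" \
    -keyout MOK.priv -outform DER -out MOK.der

# Enroll the certificate so the kernel trusts it (prompts for a one-time
# password, then completes in MokManager on reboot):
#   sudo mokutil --import MOK.der

# Sign a module with the kernel's own sign-file helper (path varies by distro):
#   SIGN_FILE=/usr/src/linux-headers-$(uname -r)/scripts/sign-file
#   sudo "$SIGN_FILE" sha256 MOK.priv MOK.der /path/to/module.ko
```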


> I spent several hours fixing a problem and I learned next to nothing in the process. This trick is unlikely to be useful again. By the time I encounter something similar, it is likely I will have forgotten about the solution.

So much of modern programming in general is like this, because of the ever-changing languages, libraries, tools, software, hardware, etc. When you're coding, you have to accept it will keep happening occasionally no matter how experienced you get, so you don't get too wound up about it. It's especially discouraging when you're starting out and don't know this yet.


"I have vouched to avoid USB and its undecipherable specs."

OK.


I agree, it's ironic. I haven't had much luck with Thunderbolt, even on Windows. USB has never failed me.


I almost prepped myself up this Friday evening for a solo Debian reinstall party to solve an extremely slow reading/writing problem on my internal HDD/SSD, until I accidentally realized the problem disappeared once I had removed my external disk from the docking station. Went from 2 MB/s to 2 GB/s read and 800 MB/s write.

> I spent several hours fixing a problem and I learned next to nothing in the process. This trick is unlikely to be useful again. By the time I encounter something similar, it is likely I will have forgotten about the solution.

So much this.


I started using Linux with Linux Mint. Then moved to Pop OS NVIDIA edition 2/3 years ago.

I _NEVER HAD_ a Linux Evening.

I only faced problems for my noobness in my earlier years, and help was always a reddit visit away.


> After all, if I elect to use Linux, a niche market by all means, shouldn't I be ready for these kinds of quests?

Yes and no. I think if our only tools are Google (well, now Kagi for me), Stack Overflow et al., and the sites that hold quality, to-the-point content, you'll be spending a lot of time on the info finding and not the fixing.

I think it requires a lot of information-organization tooling to keep up with it. Or sink more time into info finding. Or just memorize everything, if you have a brain for that.


>When I emerge from these "Linux evenings", I wonder if the real problem is my attitude

Put another way, "after I'm done being abused, I wonder if it's my fault".


In my experience Linux is easy.

Easy to install. Easy to upgrade. Never breaks. Never gets "infected". Fast.

The absence of commercial bullshit is extremely nice too.

(I like Debian)

(Is it just me or is there a lot of FUD in this thread?)


> (Is it just me or is there a lot of FUD in this thread?)

Lots and lots of 'this is why I haven't used Linux for N years (and thus don't know what it's like)'.

I have to use Windows at work, and the system is way, way less predictable and reliable than any Linux system I've run for a decade. Shit changes out from underneath me (Windows Update, policy changes) in breaking ways all the time, and there's often no way to inspect or reverse what causes breakage. For every 'evening of Linux' I've had in my life, I have a few 'afternoons of Windows' (mostly hopelessly, blindly turning things off and on again) every week.

When I ask colleagues who are more accustomed to Windows than I am how to solve issues, how to achieve particular configurations, or how Windows does certain things, it becomes clear that actually managing their workstations like comprehensible devices substantially under their own control is something they gave up on long ago.

The reality for Windows users is not a system that actually 'just works', but a deeply internalized helplessness that manifests as conservatism (attempting little, assuming many reasonable configurations are not possible), superstition (attempting and advising reboots, constantly, without being able to articulate a real reason), and blindness to the outrageousness of the situation (e.g., thinking it's perfectly acceptable that their laptop can't handle an uptime of 4 days before it starts falling apart, or 'I always shut my laptop all the way down at the end of the day so it'll run better').

It's no wonder that people conditioned by this kind of usage pattern don't have a sense of what reliability or mastery look like in the Linux world, how frequently issues like this do or don't occur, how easy or hard they are to avoid, etc. They live on a different planet, where mastery in desktop computing is worthless because there's just no hope for a predictable, transparent system that doesn't change out from under your feet. It'll just break tomorrow anyway. And on their planet, it doesn't occur to anyone to research hardware compatibility ahead of time; they never need to. So of course the thought of spending an evening rooting out a hardware incompatibility issue strikes them as some mixture of futile, extremely burdensome, and unavoidable/common for those poor Linux users— even when none of those things are true.


I kind of feel like there’s more replies that can be distilled down to “If you prefer anything over Linux for any reason you’re just dumb”, honestly.


There is always fear, uncertainty, doubt on HN when speaking about the benefits of Linux, Firefox, Brave, or Musk.

My propositions, which aren't in vogue on this site:

My Kubuntu system is rock solid and never crashes. Firefox is still a performant world-class browser. Brave's crypto can be disabled and the browser is world-class. And Elon Musk isn't the devil.


One more thing that has happened with Linux in the last 20 years is that complexity has risen by orders of magnitude.

I remember the Linux 2.x days, and I had similar problems with hardware then, but a lot of them I fixed by following a limited set of tricks/patterns. Sometimes I read the source code of some drivers or maybe applied some patches, but not so often.

But at that time I felt confident that I could fix most problems. Now I don't (and I actually switched to Mac 5 years ago).


Side note: I've been working on creating a new blog now that I've completed my name change (and whatever other aspects of my transition affect my public persona). This blog's minimalism is still distinctively styled and I'd like to do something visually similar. The choice to make it black-on-light-gray is somewhat pleasing. Has a consensus been reached on dark styling affecting readability?


Some like it, some overdo it. Apparently many live in darkened caves and use dark mode all day. And for me too much contrast in dark mode means harder to read because the white bleeds into the background. YMMV.


And digging even more, you'll find the same solution somewhere else two years ago: https://www.reddit.com/r/XMG_gg/comments/ic7vt7/fusion15_lin...


> dkozel and your kind, whoever you are, wherever you are, and whatever you are doing right now, you are legend.

Perhaps dkozel is reading this right now. :P

https://news.ycombinator.com/threads?id=dkozel

https://lobste.rs/threads/dkozel


Last comment was in 2020.


Now 2022. ;)


So, how did you find this fix?


cue the segway


No segway, I've posted a bit of a discussion above. Not all that interesting, unfortunately. The error message was obscure, but the problem space was pretty small, so I did mostly the same thing as Fabien and searched online for terms related to PCIe hotplugging and PCIe address allocation.


The saddest thing of all (I think there's some XKCD referencing this) is when you have a technical problem like this and you find some post from a year or two years ago exactly describing the problem. And there's no reply.

Empathy isn't the word. It's feeling like you're trapped on a deserted island in a vast empty sea - suddenly realizing that there's another S.O.B. who's been right next to you the whole time and also knowing that neither of you has the faintest idea how to get off the island.


https://xkcd.com/979/

I swore that this one was about finding the only other person who had the same issue, but who had then posted 'fixed it' soon after. That's even worse: it's like being stuck on the island and watching the S.O.B. walk on water into the distance.


I started using Linux in 1996. I've had a Mac since 2010 and it just works, while my sound card would randomly stop working (PulseAudio, whatever, I don't care anymore), and I even had to patch the ALSA drivers manually (and write the patch). I wish I could use Linux reliably for anything other than a terminal (and I do use it every day at the office in a terminal), but it's just never worked well for me, in spite of my wanting it to work.

I'm a bit sad about the whole thing: I want it to work, but it doesn't. I don't have the time to waste on yet another Linux evening, yet I can't resent the people who work on it for free, and I understand very well that it works for some people.


> “The author, dkozel, never came back to answer. I imagine they typed the solution on a 40% keyboard featuring unmarked keys and then rolled into the sunset on a Segway for which they had compiled the kernel themselves.”

Haven’t laughed that hard in a while.


Even using Arch, I rarely find any big problem using Linux. There are small inconveniences from time to time but, even so, maintenance takes less time/effort than removing bloatware after each Windows 10 update...


1. If you don't encounter any trouble daily driving GNU/Linux, that is actually a sign of inexperience. That or you're doing nothing interesting.

2. If principles mattered more than convenience, most Linux users would be on FreeBSD or OpenBSD instead of GNU/Linux. Either follow your arguments to their conclusions, or understand that Windows/macOS users are doing the same thing as you—making practical tradeoffs.

3. Unless you are doing tons of system administration on your daily driver, most of your time is spent in applications, and the choice of operating system doesn't matter too much.


I'm gonna disagree on everything here...

> 1. If you don't encounter any trouble daily driving GNU/Linux, that is actually a sign of inexperience. That or you're doing nothing interesting.

I do loads of interesting stuff on linux, I use it for research, numerical computing, biology etc. None of these things cause linux to skip a beat.

> 2. If principles mattered more than convenience, most Linux users would be on FreeBSD or OpenBSD instead of GNU/Linux. Either follow your arguments to their conclusions, or understand that Windows/macOS users are doing the same thing as you—making practical tradeoffs.

The world is not black and white! There are degrees to which people behave, and there is a spectrum of behaviour that does or does not abide by certain principles. Someone said to me recently that they didn't believe there are 'ethical people'; instead, people make use of the opportunities presented to them. This isn't true, and is more a sign of how they've justified their own life choices.

> 3. Unless you are doing tons of system administration on your daily driver, most of your time is spent in applications, and the choice of operating system doesn't matter too much.

I, like many people, tend to use more than one application, so I find the OS does matter. On Windows 10, when I hit the start bar, and start typing the name of an application I want to launch, I really do care when the bar sits there loading up, scraping internet data to show me its suggested apps and adverts.


I don't necessarily agree with your first point.

When I (and many others) first started out using Linux, that was when the most trouble was likely to crop up. Over time, one learns and adapts to certain intricacies, the hardware or methodologies used, and hopefully good practices, such as avoiding utterly rubbish or quaint distributions. Of course no scenario is bound to be 100% trouble-free, but this is a far cry from an inexperienced user.

Even if I were to accept the premise that the other users are simply "doing nothing interesting", what would your idea of interesting be? Is it heavily exotic in nature (which I do agree in this case), or things that fall outside the purview of web browsing, document editing and leisurely activities? These things can also cause trouble outside the fault of a user for any reason, but this doesn't necessarily mean that they are inexperienced or do nothing interesting.

If you can elaborate more, I'd be interested to gauge if I agree in a new context.


> 1. If you don't encounter any trouble daily driving GNU/Linux, that is actually a sign of inexperience. That or you're doing nothing interesting.

There are three types of people who know a lot about cars:

1. Car mechanics
2. People who love cars and tinker with them constantly
3. People who have a shitty car on which something always breaks

Personally, I'm not a car guy; I treat cars as tools. I can do this because I've always had a reliable car. I have a friend who once mocked me for my inexperience. I had to remind him why he knew so much about cars: he was a type 3 guy. Not a shitty car, though, but something always broke on it anyway.

So getting back to you: can we just have something that works? Is that a high bar?


My point here is that if you use Linux enough, then you will encounter issues, period, regardless of your technical level.


And if you use Windows enough, particularly 10 or 11, you will encounter issues as well.


Just less frequently. Also, we should agree on what those "issues" really are. Besides the lack of popular commercial applications on Linux, the existing open source alternatives are often inferior, if not frustrating.


We disagree strongly then.


The same thing is true if you use a pencil enough. The question is whether you run into issues significantly more often than the commercial alternatives.


Your conclusion for your second point seems to include a lot of unstated assumptions.

Which principles do you think are at play?


> If principles mattered more than convenience

What principles? Is this another BSD vs GPL licensing turf war you're trying to start here? I think there are obviously coherent sets of principles on which it makes sense to prefer the latter.


> If principles mattered more than convenience, most Linux users would be on FreeBSD or OpenBSD instead of GNU/Linux. Either follow your arguments to their conclusions, or understand that Windows/macOS users are doing the same thing as you—making practical tradeoffs.

I'm chiming in with everybody else here, but why do you think that my principles are better served by licensing that is less supportive of my principles?


Point 2: What makes you say that, and what principles specifically are you assuming every gnu/linux user has that would lead them to Free/OpenBSD (and not dragonfly or net)?


1) Yes for quite some time I only do fairly basic/uninteresting things. That being said, gnu/linux has become far easier to use than it was, say 20 years ago.


This is an important list.


I didn't know PCIe hotplug worked at all!


You'd be surprised. Here's an article I wrote on the modernization of PCIe hotplug in Linux:

https://lwn.net/Articles/767885/
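For anyone curious, the hotplug machinery is also poke-able from userspace via sysfs. A small sketch (the write needs root, so it is shown commented out; the check itself is non-destructive):

```shell
# Force the kernel to re-enumerate the PCI bus, e.g. after a hotplug
# event that was missed (needs root):
#   echo 1 | sudo tee /sys/bus/pci/rescan

# Non-destructive check: does this kernel expose the rescan interface?
if [ -e /sys/bus/pci/rescan ]; then
    echo "PCI rescan interface available"
else
    echo "no /sys/bus/pci/rescan (sysfs not mounted, or not a PCI system)"
fi
```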


Not entirely sure we are going the right way, tho. In the last 3 months or so (aka 2 major kernel versions), I had to add stuff to my command line to make my perfectly working workstation work again. X99 motherboard; it worked for years.

GRUB_CMDLINE_LINUX_DEFAULT="mitigations=off acpi_enforce_resources=lax pci=noaer intel_iommu=off"

All of them due to various problems where I had to 'google, cut and paste bits, and see if it works'. I'm considered a bit of an expert, and I fully understand only one of the added parameters and why it's needed.
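For reference, a quick way to check whether such parameters actually took effect, plus the Debian/Ubuntu spelling of making them persistent (other distros use grub2-mkconfig instead):

```shell
# The parameters the running kernel was actually booted with:
if [ -r /proc/cmdline ]; then
    cat /proc/cmdline
fi

# After editing GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub, the change
# only takes effect once the config is regenerated and the machine rebooted:
#   sudo update-grub
```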


I must admit, I love 'Linux evenings'. I learn so so much in them.


I don't see the drive connection type as being as important as making sure your backup data stays encrypted.

Now doing this across different operating systems is clunky and quite challenging.

You could use something like TrueCrypt or VeraCrypt, but mounting your drive then won't typically be as simple as plugging it in.

For that reason it's probably better to have a separate computer for backups connected to the network. That way you don't need to care about encryption on every system you connect it to. It just works (tm).


I mean, it's obviously a kernel bug. Now I wonder whether it's nice to have a way to work around such bugs, or whether these workarounds are detrimental to their ever getting fixed properly.


I had a stint of a couple of years with Windows, and I had a couple of these "evenings" as well. Maybe macOS is immune, but Windows never was and isn't now.


Occasional issues like these are why I continue to run Windows, even though Microsoft has been pulling all sorts of shenanigans with telemetry, a broken file manager / start menu, ads, etc.

I really want to move to Linux, but unless my laptop manufacturer provides support, it's never going to happen. I don't want to think about my OS when buying a hardware accessory or a software product.

A laptop's a tool and Windows still provides the most support.


What kind of windows support do you use?


(Presuming there are some linux experts here) Slightly off-topic question but does anyone have any strong opinions about d-bus as an IPC mechanism? Is it actually used much or do most people prefer other lower level Linux/Posix IPC mechanisms (shm, pipes etc)? Does it have any glaring disadvantages compared to e.g. Android's Binder, or is that only more widely used because Google enforces it?


This occurs every time I decide to dabble in Linux. Usually I get the answer from a random search on Stack Overflow and never manage to understand what happened.

But I think it's OK. I have always had an interest in systems programming, and that's why I wanted to dabble in Linux. But systems programming is not the shiny, smooth thing one may imagine; it is quite messy all the time. I simply got what I wanted.


I hate Linux for this. How the hell can a modern OS be frozen completely shut by some runaway Python script running in a VS Code debugger, to the point that you have to kill your PC with the power button? And then there are the non-obvious issues that just "appear" out of nowhere and steal a perfectly fine evening when you had planned to work on your favorite side project.


> How the hell a modern OS can be frozen completely shut by some runaway python script running in a VS code

That isn't just a Linux thing. You can easily fork-bomb (or the equivalent) Windows/iOS/others and so forth, especially (though not only) if running as a privileged user.

> And then non-obvious issues that just "appear" out of nowhere

That is definitely not something that I've never seen when using Windows! Where it has happened to me under Linux it has been eventually traced to a hardware issue or a bad update (the latter I've experienced on both those OSs over the years and the former will affect everything).


> > And then non-obvious issues that just "appear" out of nowhere

>That is definitely not something that I've never seen when using Windows!

Case in point: my laptop (running the Windows 10 it came with) recently stopped hibernating properly. Well, it hibernates OK, but half the time when it wakes up the screen does not turn on, and this evening it has decided that it doesn't want to stay asleep while plugged in. If I tell it to sleep (through the power key or start menu) while plugged in, it does so very temporarily and then returns to the login screen. If I tell it to sleep when running on battery, it does, but it immediately wakes up once power is connected.

While Linux distros are far from free from random problems at times, in my experience Windows exhibits them more and tends to be harder to diagnose because more is hidden.


> That isn't a just Linux thing. You can easily fork-bomb (or equivalent) Windows/iOS/other and so forth, especially (though not only) if running as a privileged user.

I've crashed DWM with WSL before, by doing something completely unrelated to Windows.

What happens is you start getting the solitaire effect, because suddenly no windows have graphics buffers anymore.


Linux will free memory that's backed by files, and load it again when it's needed. Under heavy memory pressure this has the same effect as swap thrashing, even if you have swap disabled. The kernel's out-of-memory killer does not help, because it's designed to work only as a last resort, and the system is still technically making progress despite being unusable in practice. AFAIK no other mainstream OS has this problem.

The solution is running a userspace out-of-memory killer, e.g. earlyoom.
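On Debian/Ubuntu the setup is small; a sketch (package and unit names follow the Debian spelling, and the thresholds are illustrative, see earlyoom's documentation for the flags):

```shell
# Install and enable the daemon:
#   sudo apt install earlyoom
#   sudo systemctl enable --now earlyoom

# Tune when it intervenes via /etc/default/earlyoom: here, kill the
# biggest offender once less than 5% RAM and 10% swap remain.
EARLYOOM_ARGS="-m 5 -s 10"
echo "EARLYOOM_ARGS=\"$EARLYOOM_ARGS\""
```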


This seems like a random, unrelated yet incredibly specific rant, and not really a unique property of Linux.


Thunderbolt isn't that well supported, and they decided specifically not to use USB 3, which is well supported, for bogus reasons. The #1 rule with Linux is to pick well-supported hardware.


How is your comment related to the article? The only connection I can see is bad quality of some hardware and drivers.


It’s a reappearing issue with Linux that you run into problems other operating systems don’t have to this degree. Which is sad, because seeing the direction Windows 11 is heading towards, I’d like for Linux to achieve widespread adoption.


This is false.

My story: I bought a new widget, a gaming wheel, a few years back, and it did not work on my Windows machine. I even reinstalled Windows and installed the drivers from the disc; it did not work. I sent the thing back and reported it broken, and the guys called me to say that it works perfectly. In the end I bought a different model that worked on both Windows and Linux.

So a gadget with official Windows support did not work on my machine, even after I wasted my time reinstalling Windows just in case, but worked fine for the support people.

I had a different experience with a USB WiFi antenna. It came with a mini CD with drivers, but my PC had no CD drive, and there was no online source for that no-name brand.

I plugged it into a Linux machine and it worked. Then I did an lsusb and found the real chip powering the device, Googled for that, found a compatible Windows driver on a drivers website, and fixed it for the Windows machine.
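The lsusb trick works because every USB device reports a vendor:product ID pair regardless of what the box says; searching for that ID is how you find the real chipset. A sketch (guarded so it's safe on systems without usbutils; the example ID is illustrative):

```shell
# List USB devices with their vendor:product IDs, e.g.
#   Bus 001 Device 004: ID 0bda:8176 Realtek Semiconductor Corp. ...
# Search the web for the ID (here 0bda:8176), not the brand on the box.
if command -v lsusb >/dev/null 2>&1; then
    lsusb
else
    echo "usbutils not installed"
fi
```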

Linux has problems, but install an LTS distro on compatible hardware and you should not have issues for a few years.

To balance things: my latest bad experience in KDE was when KWin crashed and refused to enable compositing again, printing only a vague reason in the logs. Using Google, I found that KWin had put an isOpenGLSafe=false in some config file and refused to start compositing, but for some reason (maybe a valid one) it was incapable of also printing "I refuse to start compositing because of this setting". Lesson for us devs: when we put stuff in log messages, put in as much info as possible.


Oh, don’t worry: as a daily user of Windows 11, I can assure you that Microsoft is doubling down on random mystifying problems, paired with no useful forum for users to collectively figure things out.


Not as fast as they are breaking links to previously-proven solutions on their own forums.

And following up by silently escalating recently-solved problems beyond the reach of posted workarounds.

Just in case the doubling down does not proceed as expected.


> It’s a reappearing issue with Linux that you run into problems other operating systems don’t have to this degree.

I've supported both professionally, and no, you don't. I blame the claim on a lot of people using commercial operating systems feeling a bit guilty, like they're supposed to be on Linux to be a real developer. They respond by criticizing an OS that they don't really use for the problems that they imagine happen all the time. Instead, what's happening is that they, personally, always run into problems trying to figure out why something won't work in prod, or trying to get their VMs to work like the tutorial, and they assume that people who use Linux as a daily driver run into problems at that rate. They never pay attention to Linux unless something critical has already broken.

Also, whenever they decide that they're going to try Linux again to see if it's ready for them yet, they always choose Arch (it used to be Gentoo) or the latest trendy distro that has a MacOS aping desktop. Just install Debian, it's easy.


If you want that, then use it and contribute. That's the only way widespread adoption will happen.

Contribution does not necessarily mean code. It can be documentation, design, UX, sharing information, or just working getting people familiar with Linux.


That’s where Linux burned all the bridges on being my main machine.

Nowadays, I cannot afford to take a day off to fix obscure incompatibilities like this.

The last straw for me was when I stored my closed-lid XPS in a bag and 3 hours later everything smelled like burning plastic, because suspend had suddenly stopped working. To make things worse, it happened during a long-haul flight.


Suspend on my MacBook Pro also didn't work for >1 year, but seems to be magically fixed now. All OSes have annoying bugs in my experience.


I seem to recall Dell saying that you should never store a suspended laptop in a laptop bag, because there are situations where it might wake up and overheat.



> I seem to recall Dell saying that you should never store a suspended laptop in a laptop bag, because there are situations where it might wake up and overheat.

Which is the stupidest thing ever. I want to see Apple use this to shame Dell in an ad.


Windows + WSL2. After 20 years on Fedora and Ubuntu.

It's that good.


I recently had a Linux evening trying to enable a swap file for Pop!_OS, which was inexplicably lacking one by default, leading to hard locks when I ran out of RAM. I eventually fucked up the install badly enough that I needed to wipe the boot drive and start fresh. Really disappointing, because until then everything was Just Working.
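For anyone hitting the same thing, adding a swap file after the fact is only a few commands. A sketch of the standard recipe (the destructive steps need root and are shown commented out; the 4G size is illustrative):

```shell
# Create, protect, format, and enable a 4 GiB swap file:
#   sudo fallocate -l 4G /swapfile   # or: dd if=/dev/zero of=/swapfile bs=1M count=4096
#   sudo chmod 600 /swapfile
#   sudo mkswap /swapfile
#   sudo swapon /swapfile
#
# Make it survive reboots with a line in /etc/fstab:
#   /swapfile none swap sw 0 0

# Currently active swap areas (the header line is always present):
cat /proc/swaps
```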


This is exactly why I stopped using Linux as my personal computer many, many years ago. I could do it, yes, but just got tired of spending hours fixing random weirdness. I have work to do! Love, love love it on servers, and have many. But I use macOS on my Mac and everything runs smoothly for me…


The best linux evenings are the ones which happen and the StackExchange network is down for maintenance. Got burned once screwing up GNOME on my Ubuntu install, and what do you know, Ask Ubuntu wasn't available.


Instead of searching the web for hours, I think it's more effective to ask for help on a mailing list or the project's IRC channel if a few minutes of web searching doesn't get you anywhere.


I've realized the real competitor of macOS is not Windows; it's Linux.


Just googling "No bus number available for hot-added bridge" gives hundreds of results.


> I spent several hours fixing a problem and I learned next to nothing in the process.

This seems like the wrong attitude to me. It's the perfect opportunity to dig into what those kernel params do! Understand _why_ that fixed the problem.


I am getting a 500 status code.


Buckle up everyone I hear 2023 is the year of the Linux desktop!


I said Linux was fragile the other day and someone replied "in what way"; this post is a good answer. In the past I've gotten similar pushback to my own criticisms of Linux.

Who knows how many total days and weeks I've spent on random things like this and all the obstacles I had to overcome. Just to get Secure Boot working on Debian, I had to get around their wiki blocking my VPN's IP, and then hunt down the sign tool, which is nearly impossible to find; after a lot of googling I found an Ubuntu post suggesting a weird kernel-version-specific path that worked. But the built-in VirtualBox kernel module signing can't find it during apt update, so at each kernel update I have to manually sign the vbox modules (which means keeping the private key readily available and defeating the whole point, but whatever); at least I made a small script that only needs my Secure Boot PIN/passphrase now. And then I decided to use AppArmor, which is nicer than SELinux, but it took weeks to lock down two apps and have them work, and I'm still not confident I did it right.
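For what it's worth, that per-kernel-update signing dance can be sketched like this (a sketch only: the module names, key paths, and the kernel-version-specific sign-file location are assumptions that vary by distro and setup):

```shell
# Re-sign out-of-tree VirtualBox modules after a kernel update so that
# a Secure Boot kernel with module signature enforcement will load them.
KVER="$(uname -r)"
# sign-file ships inside the kernel headers; this path is the usual
# Debian/Ubuntu location but is not guaranteed.
SIGN="/usr/src/linux-headers-$KVER/scripts/sign-file"
MOK_KEY=/root/mok/MOK.priv   # assumed: your enrolled MOK private key
MOK_CRT=/root/mok/MOK.der    # assumed: matching DER certificate

for mod in vboxdrv vboxnetflt vboxnetadp; do
  # modinfo -n resolves the installed .ko path regardless of dkms layout
  sudo "$SIGN" sha256 "$MOK_KEY" "$MOK_CRT" "$(modinfo -n "$mod")"
done
```

The key would need to be enrolled once with mokutil beforehand; keeping the private key on disk is exactly the trade-off the comment complains about.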

Ok, ok... let me pause there, because I could go on. In my current day job, I attribute my success to two things: 1) using Linux for many years and fighting these "stupid" battles, and 2) learning C, which made learning a lot of other stuff a lot easier. I work in security now, so when there is a Linux or container/k8s-related incident, it is easier for me than for my colleagues. I also don't shy away from difficult, technically complex incidents or problems, because I am battle-hardened in the art of figuring out seemingly random problems in topics I know nothing about with the sheer power of search engines, docs, and random internet posts.

When I worked my first k8s incident, I knew nothing about it. I was watching talks by Google people in one window while reading docs in another and poking around in the GKE node, scratching my head over why the hell a k8s node was running ChromeOS and everything was read-only. But all of that is a piece of cake compared to random browser crashes caused by weird grsec/PaX patches (before they closed it off) messing with the JIT, or untangling a dependency mess on a Gentoo install neglected for years that doesn't even have a supported Python version anymore. The brutality and fragility of Linux has built a lot of mental muscle that has helped me immensely, and it has forced me to be distrustful of software in general and to learn a lot of stuff, from programming languages to permission models, OS architecture, and GNU tools like sed, grep, awk, and the like.

But yeah, I love Linux so much, but let's not pretend it is easy to use or user friendly. But hey, 2023 will be the year of the Linux desktop, so we'll see.

A lot of *nix philosophy and mentality, like "RTFM" and treating defaults as no big deal, comes I think from an era when the ecosystem was much less complex and the userbase was much smaller. I attribute many of these pain points to that undying mindset.


Windows and Mac have their fragile points as well.

For starters, Windows is Ubuntu bug #1. (I kid).

The nice thing about Linux is it can evolve without being pulled into advertising-and-data-monetization as a business plan. In other words, society is better off with it.


Are you still using Linux as daily driver? I'm just curious.


xkcd: Wisdom of the Ancients https://xkcd.com/979/


it is strange for a guy who rants "I spent several hours fixing a problem and I learned next to nothing in the process" not to know to run dmesg first thing when he has a HW problem

if you wanna learn, you should've known at least about dmesg


> if you wanna learn, you should've known

This is a catch-22



