I think this is a mostly fair criticism of NixOS. NixOS has a lot of powerful tools, but if you don't need them, they can get in the way. Some assorted notes:
> the constant cycle of rebuild → fix → rebuild → fix → rebuild
I've found this useful to eliminate the rebuild loop:
https://kokada.dev/blog/quick-bits-realise-nix-symlinks/
It lets you make the config of the program you choose a regular mutable file instead of a symlink so you can quickly iterate and test changes.
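For anyone curious, the trick boils down to something like this (a minimal sketch based on the linked post; the config path at the bottom is just an example, substitute your own):

```shell
#!/bin/sh
# Replace a Nix-managed symlink with a mutable copy of its target,
# so the file can be edited in place while iterating on a config.
realise() {
  f=$1
  if [ -L "$f" ]; then
    target=$(readlink -f "$f")  # resolve the symlink into the store
    rm "$f"                     # remove the symlink itself
    cp "$target" "$f"           # copy the real file into its place
    chmod u+w "$f"              # store files are read-only; make it writable
  fi
}

realise "$HOME/.config/helix/config.toml"  # example path
```

Once you're happy with the changes, copy them back into your Nix config and rebuild; the next switch puts the symlink back.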
> In contrast, Arch Linux simply downloads prebuilt binaries via pacman or an AUR helper
If a binary exists. A lot of AUR packages I used to rely on didn't have a binary package (or the binary package was out of date) and would have to be built from source. On NixOS, my machines are set up to use distributed builds (https://wiki.nixos.org/wiki/Distributed_build). Packages that do need to be built from source get built on my server downstairs. The server also runs a reverse proxy cache so I only need to download packages once per version.
Distributed AUR builds are possible on Arch, but they require a lot of setup and are still fragile like regular AUR builds: your only choice of dependencies is what's currently available in the repos.
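The distributed-build side is mostly declarative; a sketch of the relevant NixOS options on a client machine (hostname, user and key path are placeholders, not my actual setup):

```nix
{
  # Offload builds to a machine on the LAN
  nix.distributedBuilds = true;
  nix.buildMachines = [{
    hostName = "buildserver.lan";       # example hostname
    sshUser = "builder";
    sshKey = "/root/.ssh/id_builder";
    system = "x86_64-linux";
    maxJobs = 4;
    speedFactor = 2;
  }];
  # Let the remote builder fetch dependencies from caches itself,
  # instead of the client downloading and uploading them
  nix.settings.builders-use-substitutes = true;
}
```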
> On my machine, regular maintenance updates without proper caching easily take 4–5+ hours
It sounds like the author may be running the unstable release channel and/or using some heavy unstable packages, which might explain a lot of the other problems the author is having too.
Back when I used Arch, I found that as time went on, my system would sort of accumulate packages. I would install $package, then in the next version of $package a dependency on $dep would be added. When I updated, $dep would be installed; eventually $package would drop the dependency on $dep, but $dep would remain installed. I would periodically have to run pacman -R $(pacman -Qtqd | tr '\n' ' ') to clear out packages that were no longer required.
When using your server as a proxy cache, do you just include the server as a Nix cache substituter, or use a MITM approach with something like Squid?
If the former, via substituter (or if also using a remote builder), how do you manage moving portable clients outside your LAN, e.g. traveling with your laptop? Do you tunnel back home, or have a toggle to change substituter priorities?
I find the default timeout for unresponsive substituters excessively long, and the repeated retries for each requested derivation annoying; I'd rather it remembered unresponsive substituters and skipped them for subsequent derivations in the same nix switch/build invocation.
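The timeout itself is at least tunable; something like this in nix.conf (the values are just examples) shortens the wait, though as far as I know it doesn't stop the per-derivation retries:

```
# /etc/nix/nix.conf
connect-timeout = 3   # seconds before giving up on an unreachable substituter
fallback = true       # build locally instead of failing when a cache is down
```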
Can you expand on how you use Zellij? I tried it, and I understand you can use it for splits and tabs, similar to tmux, but I might revisit it if it allows an IDE-like workflow with Helix.
I think it's because there are no grills on the outside. If the fans were sucking air out of the box, dust would build up on the outside, and bumping it would dislodge dust back into the environment.
With the fans blowing in, all the dust is on the inside of the box (and on the fans).
Something to note: Certain service providers (e.g. Twitch) will not allow you to sign up using an '@mailbox.org' email address. I do not know if this ban extends to custom domain addresses.
SQLx and F# type providers are probably the best developer experience for writing database access code. I wish more languages had something equivalent.
I think this sort of stuff only comes after a LOT of experience with building SQL db backed systems - it resonated with me immediately. (I'm the OP but not affiliated with this Rust project at all).
> unless you run nix-collect-garbage periodically