I love Kagi. I understand the niche this fills. I even understand not open sourcing it yet.
But what I really miss is a self-hosted sync server. I don't want to use a browser without sync, but I also don't want to trust any third party other than myself with this data.
It's one of the main reasons I'm using Firefox, since that is the only browser that even vaguely supports this - albeit not well.
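For what it's worth, the way Firefox supports it is via a configurable sync endpoint; a hedged sketch, assuming you run Mozilla's self-hostable sync storage (syncstorage-rs) somewhere - the pref name is real, the URL is a placeholder for your own instance:

    // in <profile>/user.js (or set via about:config)
    // "identity.sync.tokenserver.uri" is the real pref; the URL is a placeholder
    user_pref("identity.sync.tokenserver.uri", "https://sync.example.com/token/1.0/sync/1.5");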
I find horizontal scaling with many smaller cores and lots of memory more impactful for virtualization workloads than heavy single core performance (which, fwiw, is pretty decent on these Xeon Golds).
The biggest bottleneck is I/O performance: running full VMs carries a lot of disk overhead, and I rely on SAS drives rather than SSDs. I can't justify the expense of upgrading to SSDs, let alone NVMe.
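To put a rough number on that bottleneck, a quick fio run against the VM storage makes the gap visible; a sketch only - the file path and sizing are illustrative, the flags are standard fio:

    # random 4k read/write mix, roughly what a pile of VMs looks like to the disks
    fio --name=vmstore-test --filename=/var/lib/vz/fio.test \
        --rw=randrw --bs=4k --size=2G --ioengine=libaio \
        --iodepth=32 --numjobs=4 --runtime=60 --time_based --group_reporting

Random 4k I/O is exactly where spinning SAS drives fall furthest behind even cheap SATA SSDs.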
> Those setups are always pure "home-lab" because it's too small or macgyvered together for anything but the smallest businesses...where it will be overkill.
That is a core part of the hobby. You do some things very enterprise-y and over-engineered (such as redundant PSUs and UPSs) while simultaneously using old hard drives whose continued operation relies on SMART monitoring and pure chance (to pick two random examples).
I also constantly re-use old hardware that piles up around the house, such as the Pi. I commented elsewhere that I just slapped an old gaming PC into a 4U case because I want to play with, tinker with, and learn from GPU passthrough. I would not do this for a business, but I'm happy to spend $200 on a case and rails and stomach an additional ~60W idle power draw to do so. I don't even know what exactly I'll be running on it yet. But I _do_ know that I know embarrassingly little about how GPUs, X11, VNC, ... actually work, and that I have an unused GTX 1080.
Some of this is simply a build-vs-buy thing (where I get actual usage out of it and have something subjectively better than an off-the-shelf product); the rest is pure tinkering. Hacking, if you will. I know a website that usually likes stuff like that.
> You're not going to learn much about k8s from that
It's possible you and I learn things very differently then (and I mean this a lot less snarky than it sounds). I built Raft from scratch in Scala 3 and that told me a lot about Raft and Scala 3, despite being utterly pointless as a product (it's on my website if you care to read it). I have the same experience with everything home lab / Linux / networking - I always learn something new. And I work for a networking company...
> It's possible you and I learn things very differently then (and I mean this a lot less snarky than it sounds). I built Raft from scratch in Scala 3 and that told me a lot about Raft and Scala 3, despite being utterly pointless as a product (it's on my website if you care to read it). I have the same experience with everything home lab / Linux / networking - I always learn something new. And I work for a networking company...
Building k8s from scratch, you're going to learn how to build k8s from scratch, not how to operate and/or use k8s. Maybe you will learn some configuration management tool along the way, unless your plan is to just copy-paste commands from some website into a terminal.
> I find horizontal scaling with many smaller cores and lots of memory more impactful for virtualization workloads than heavy single core performance (which, fwiw, is pretty decent on these Xeon Golds).
Yeah, if you run a VM for everything that should be a systemd service, it scales well that way.
"should be" according to your goals. "should not be" according to mine:
1. Run untrusted code in a reasonably secure way. I don't care how many GitHub stars it has; I'm not rawdogging it. Nor is my threat model Mossad, so it doesn't have to be perfect. But systemd's default security posture is weak, hardening it is highly manual (vs. "run a VM"), and it must be done per service to do properly (allowlisting syscalls, caps, directory accesses, etc.) - see the sketch after this list.
2. Minimize cost. For most people with the skills to run a homelab, it's orders of magnitude less costly to:
a) spend 50% more on compute & storage than to spend even a single hour hand-writing and tuning reasonably secure systemd services, wiring up dependencies, etc.
b) build a backup and migration strategy once and reuse it for everything. This is technically possible with systemd too, of course, but far more costly to set up.
c) have one single universal solution: practically everything will run in a VM. That is not true for systemd, especially isolated systemd services.
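To make point 1 concrete, here's a minimal sketch of the per-service hardening I mean, for a hypothetical service "someapp" (the directives are standard systemd options; the service name and paths are made up):

    # /etc/systemd/system/someapp.service.d/hardening.conf
    [Service]
    DynamicUser=yes
    NoNewPrivileges=yes
    ProtectSystem=strict
    ProtectHome=yes
    PrivateTmp=yes
    PrivateDevices=yes
    CapabilityBoundingSet=
    RestrictAddressFamilies=AF_INET AF_INET6 AF_UNIX
    SystemCallFilter=@system-service
    StateDirectory=someapp

    # then:
    systemctl daemon-reload && systemctl restart someapp
    systemd-analyze security someapp.service   # prints an exposure score for the unit

And that's one file like this per service, researched per service. The VM gives you an (arguably stronger) boundary with zero per-service work.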
If you want to optimize for "learn how to configure systemd", "learn how to hyper-optimize CPU usage", or whatever it may be, then great. If other people aren't, they're not necessarily wrong; they may be choosing different tradeoffs. Understanding this is an essential step in maturing as an engineer and as a human being. I truly mean this as encouragement, not rebuke - otherwise I wouldn't have paid the relatively high cost in time to write it, after all :)
OP is right (not surprising, given he's a platform engineer with decades of experience, unlike you, a nobody). You are projecting your extremely flawed and naive approach to learning k8s. If all you can think of is "copy-paste configs", then sure, you are definitely NOT gonna learn anything. You're just trying to act smart, with absolute hatred towards people running k8s outside of businesses.
That's good to know, thank you. It's using the official charger in the rack, but I was using the charger I've had on my desk while setting it up. I added a note to the article.
I personally enjoy the big machines (I've also always enjoyed meaninglessly large benchmark numbers on gaming hardware) and enterprise features, redundancy etc. (in other words, over-engineering).
I know others really enjoy playing with K8s, which is its own rabbit hole.
My main goal - apart from the truly useful core services - is to learn something new. Sometimes it's applicable to work (I am indeed an SWE larping as a sysadmin, as another commenter called out :-) ), sometimes it's not.
I just commented on this above, but I actually got the Pi for free, and it's a very capable device. I wouldn't buy one for this use case (nor do I really recommend it, but it _does_ work).
You'd be delighted (or terrified) to know that I just added an old gaming computer in a 4U case to the cluster, so I can play with PCI/GPU passthrough.
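For the curious, the rough shape of it on Proxmox, as I understand it - the VM ID, PCI address, and the Intel assumption are placeholders; qm and its hostpci syntax are stock Proxmox:

    # 1. enable the IOMMU: add intel_iommu=on to GRUB_CMDLINE_LINUX_DEFAULT
    #    in /etc/default/grub, run update-grub, reboot
    # 2. hand the whole GPU (here at PCI address 01:00) to VM 101:
    qm set 101 -hostpci0 0000:01:00,pcie=1,x-vga=1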
The Dell is essentially the main machine that runs everything we actually use - the other hardware is either used as redundancy or for experiments (or both). I got the Pi from a work thing and this has been a fun use case. Not that I necessarily recommend it...
Proxmox is essentially a clustered hypervisor (a KVM wrapper, really). It has some overlap with K8s (via containers), but is simpler for what I do and has some very nice features, namely around backups, redundancy/HA, hardware passthrough, and the fact that it has a usable GUI.
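The backup story in particular is close to a one-liner; a hedged example - the VM ID and storage name are placeholders, vzdump and these flags are stock Proxmox:

    vzdump 101 --storage tank-backups --mode snapshot --compress zstd

The same thing can be scheduled cluster-wide as a backup job from the GUI, which is half the appeal.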
I also use K8s at work, so it's a nice contrast to use something else for my home lab experiments. And tbh, I often find that when I want something done (or something breaks), muscle-memory Linux things come back to me a lot quicker than some obscure K8s incantation, but I suppose that's just my personal bias.
Several of my VMs (which are very different from containers, obviously - even though I believe VMs on K8s _can_ be a thing...) run (multiple) Docker containers.
I usually agree (and enjoy reading angry threads years later), but wasting screen real estate and getting measurably worse accessibility is simply not a good design decision.
I adore QGIS. Just last weekend I built a map (and a corresponding GeoPDF file for offline use) for two local wildlife management areas, and then used them while out there.