Hacker News

It seems like you are supposed to create a service for each container and then manage service dependencies, networking, and other aspects — all trivial with docker compose — some other way. Instead of having a single, simple docker-compose.yml file which you can add to the git repo, you end up with multiple services, network definitions, and firewall rules. You then have to symlink those services to the right place, enable them, and do god knows what else.

I am not scared of systemd and I use it extensively but this does not sound like a convenient way to manage multiple containers.



Well, I use Ansible's podman module to set up and configure my containers, and then use it to generate systemd units from that.
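For illustration, a task using the containers.podman collection can run a container and have podman emit a unit file in one go; this is a sketch, with the image, ports, and path as placeholder examples:

```yaml
# Hypothetical playbook task: run a container via the containers.podman
# collection and ask podman to generate a systemd unit for it.
- name: Run web container and generate a systemd unit
  containers.podman.podman_container:
    name: web
    image: docker.io/library/nginx:1.25
    ports:
      - "8080:80"
    generate_systemd:
      path: /etc/systemd/system
      restart_policy: always
      new: true        # unit recreates the container on each start
```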

Before that I used Makefiles.

You're supposed to have podman generate the systemd units for you. With the recent update, your systemd unit will have a [Container] section, which will be easy to configure manually if you want.
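That new format (Quadlet) looks roughly like this — a sketch with a placeholder image and port, dropped into e.g. ~/.config/containers/systemd/web.container, which podman's generator turns into a regular service at daemon-reload time:

```ini
# web.container — a Quadlet unit; the [Container] section replaces a
# hand-written ExecStart=podman run ... line.
[Unit]
Description=Web container

[Container]
Image=docker.io/library/nginx:1.25
PublishPort=8080:80

[Service]
Restart=always

[Install]
WantedBy=default.target
```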

https://github.com/containers/podman/blob/db505ed5dce7a868b8...


Podman’s unofficial answer to compose is “podman compose”. The official answer is Kubernetes.

Podman supports generating systemd units for “Pods” (groups of containers).
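Roughly like this — a sketch, with the pod name, container, and port made up:

```shell
# Create a pod with a container in it, then have podman write unit
# files for the pod and each container (--new recreates them on start,
# --name uses names rather than IDs in the unit file names).
podman pod create --name mypod -p 8080:80
podman create --pod mypod --name web docker.io/library/nginx:1.25
podman generate systemd --new --files --name mypod
```

This writes pod-mypod.service and container-web.service into the current directory, ready to be version controlled and installed.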

The unit files podman generates are portable. Rather than version controlling your compose.yml, you would version control your unit defining your pod.

Using a makefile you could handle installing/updating/etc your unit file.
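A minimal sketch of such a Makefile — the unit name and install path are assumptions:

```make
# Hypothetical targets to install and activate a generated unit file.
UNIT := mypod.service

install: $(UNIT)
	install -Dm644 $(UNIT) /etc/systemd/system/$(UNIT)
	systemctl daemon-reload
	systemctl enable --now $(UNIT)

uninstall:
	systemctl disable --now $(UNIT)
	rm -f /etc/systemd/system/$(UNIT)
	systemctl daemon-reload
```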

It is an equally convenient way to manage your containers, but it does require learning and understanding how everything fits together. Compose is nice because it abstracts a lot of that away.


> somehow else manage service dependencies

`Requires=`? Systemd unit files are configuration files, not object code. While podman(1) can generate them for you, that doesn't mean that you should be treating the results as opaque artifacts. They're more like a project skeleton. You're meant to commit those, and hand-maintain them; and you're meant to read them in order to understand "how things currently are." Unit files are to systemd as YAML resource manifest files are to Kubernetes: the canonical description of the resource, that you can "do GitOps to."
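For example, a hand-maintained unit for an app container that needs its database container up first might declare (a sketch, with made-up unit names and image):

```ini
# app.service — expresses the dependency compose would infer.
[Unit]
Description=App container
Requires=db.service     # hard dependency: start db.service too; fail if it fails
After=db.service        # ordering: don't start until db.service has started

[Service]
ExecStart=/usr/bin/podman run --rm --name app docker.io/example/app:latest
Restart=on-failure

[Install]
WantedBy=multi-user.target
```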

> You then have to symlink those services to the right place, enable them

Those are the same thing. `systemctl enable` creates symlinks "to the right place". Also, it's not necessary to `systemctl enable` a service in order to merely `systemctl start` it; `systemctl enable`-ing a service specifically binds it to start automatically on startup (or on whatever other .target units it has specified), by putting an entry for that service in a directory named after that target. Just like how SysV runlevels work with rcN.d directories, but not limited to the system being at exactly one runlevel at a time.
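Concretely, for a unit whose [Install] section says `WantedBy=multi-user.target`, enabling it just drops a symlink into that target's `.wants` directory. You can simulate the on-disk effect in a scratch directory:

```shell
# Simulate what `systemctl enable myapp.service` does on disk for a
# unit installed under /etc/systemd/system with
# WantedBy=multi-user.target (paths rooted in a temp dir for safety).
root=$(mktemp -d)
mkdir -p "$root/etc/systemd/system/multi-user.target.wants"
touch "$root/etc/systemd/system/myapp.service"
ln -s "$root/etc/systemd/system/myapp.service" \
      "$root/etc/systemd/system/multi-user.target.wants/myapp.service"
ls -l "$root/etc/systemd/system/multi-user.target.wants/"
```

`systemctl disable` removes that symlink again; the unit file itself stays put either way, which is why `systemctl start` works on a disabled unit.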

> Instead of having single simple docker-compose.yml file which you can add to the git repo

Why wouldn't you still have a docker-compose.yml file in the repo? I think you're misunderstanding the whole point here.

Docker-compose files are a development and testing technology. You run them to stand up an ephemeral deployment of a complex set of components, on your own workstation or maybe in CI, so you (or the E2E test harness) can poke at it. Docker-compose files are maintained by developers, to serve the needs of developers; and never reach the hands of ops staff, let alone customers.

Systemd unit files are a production deployment technology. They're something you create to include in a Deb or RPM package. These are maintained by release engineers, with bug reports on how they work submitted by your own ops staff, or even by the ops staff of customers who have either on-prem deployments of your software, or who have shell access to hosted dedicated enterprise deployments of your software. (Yes, that's a thing that happens.)

Importantly, systemd unit files are often created by someone external to the project, who was handed an opaque Docker image (or other kind of opaque binary), and must now make that software run robustly on their system, without being able to change it. Systemd is full of ways to accomplish that. Docker-compose, meanwhile, assumes that those sorts of things — e.g. tweaking sysctls, capturing PID after pre-forking, touching log files and setting their ownership at start, running as root for a bootstrap phase and then as non-root for steady-state, etc — are the domain of the contained software, and if the developer doesn't like them, they should just change their software. Which, if you're some SRE deploying `foocorp/foo-core:latest`, you really can't.
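As a sketch of the kind of knobs systemd hands that SRE (the specific directive values here are illustrative, not from the thread):

```ini
# Fragment of a hand-written service wrapping an opaque image: the
# deployer adjusts the environment without changing the software.
[Service]
# Tweak a sysctl the contained software needs; the leading "+" runs
# this step with full privileges even though the service drops them.
ExecStartPre=+/usr/sbin/sysctl -w net.core.somaxconn=1024
# Pre-create the log file with the right ownership before start.
ExecStartPre=+/usr/bin/install -o appuser -g appuser /dev/null /var/log/foo/foo.log
# Steady state runs as non-root.
User=appuser
ExecStart=/usr/bin/podman run --rm foocorp/foo-core:latest
Restart=always
```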

There are concerns that are only relevant to production, and there are concerns that are only relevant to development. Usually these form two pretty orthogonal classes, where it doesn't even make sense to have a single file format that can express both. In fact, often the only concern shared by the two is the set of image names and tags referenced by the files. Everything else is environment-type specific. So it makes a lot more sense to have two entirely distinct sets of config files — one to drive development tooling, and one to drive production tooling — than to try to bring them together. This means you need one more line in your bump-release-version script to write the new image tag to one more file. Other than that, you'll likely never find yourself updating both sets of files for a single concern. (At least, I personally never do.)
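That bump-the-tag step can be a one-liner per file, since both a compose file and a unit file mention the image as plain text; a sketch, with the file contents and tag pattern as assumptions:

```python
import re

def bump_image_tag(text: str, image: str, new_tag: str) -> str:
    """Rewrite every `image:<oldtag>` reference in text to `image:<new_tag>`.

    Works identically on a compose file and a systemd unit file.
    """
    return re.sub(re.escape(image) + r":[\w.\-]+", f"{image}:{new_tag}", text)

# A release script would run this over both config sets:
compose = "services:\n  app:\n    image: foocorp/foo-core:1.2.3\n"
unit = "ExecStart=/usr/bin/podman run --rm foocorp/foo-core:1.2.3\n"
print(bump_image_tag(compose, "foocorp/foo-core", "1.2.4"))
print(bump_image_tag(unit, "foocorp/foo-core", "1.2.4"))
```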



