I got somewhat excited about podman a couple months ago, and then learned that there are painful shenanigans between the version some images require and the (ancient?) one available in the Ubuntu 22.10 package manager [1].
It seemed great until I tried to launch an existing Postgres container that required a more recent Podman version.
The Podman installation page [2] makes some effort to explain alternate installation steps, but doesn't make it clear why, or how far back you'll be.
Not complaining here, just giving folks a heads up.
Despite these issues, it does look like a great project.
It's very nice to have the option for auto-update, but in reality, would anyone want their containers to auto-deploy newer versions? Aside from the security exposure (especially if you were on a Scandinavian holiday), there is the risk of backward incompatibility: changes around config, DB migrations, CLI args, or anything else the container might consume but that sits outside the container!
Using it with `:latest` is going to have the problem you described and is not the expected usage. It's more feasible to use it with a release line tag, like `postgres:15`, where you expect updated images to be backward-compatible.
Alternatively, one place where `:latest` would make sense is if they're your images, and you're relying on podman-auto-update to update a fleet of servers to start running the latest version of a service as soon as you push the new image.
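A minimal sketch of that setup (image, container name, and paths here are just examples): start the container from a fully qualified release-line tag with the auto-update label, then let podman generate the unit:

```
# track the registry tag for this container
podman run -d --name pg \
  --label io.containers.autoupdate=registry \
  -e POSTGRES_PASSWORD=secret \
  docker.io/library/postgres:15

# write a unit that recreates the container on every restart
podman generate systemd --new --files --name pg
```

From there, enabling the generated unit (e.g. `systemctl --user enable --now container-pg.service`) plus the podman-auto-update timer will pull newer `postgres:15` images and restart the service.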
How do people automate this kind of thing, like rolling out a specific image via CI?
Generating the systemd files, copying via scp and systemd reload via scripted ssh?
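One low-tech sketch of exactly that (host, user, and unit names are placeholders): have CI render or pick the unit for the new image tag, then push it out and reload over ssh:

```
# copy the rendered unit into place, then reload and restart
scp myapp.service deploy@host:/tmp/
ssh deploy@host 'sudo install -m 0644 /tmp/myapp.service /etc/systemd/system/ \
  && sudo systemctl daemon-reload \
  && sudo systemctl restart myapp.service'
```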
For my home servers which just run personal things (like a kanban board as a todo list) I just use watchtower[0]. This requires mounting the docker socket into this container, which is not ideal.
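For reference, the usual invocation is roughly the following; the socket mount is the not-ideal part:

```
docker run -d --name watchtower \
  -v /var/run/docker.sock:/var/run/docker.sock \
  containrrr/watchtower
```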
In a production environment, I'd expect pinning the Docker image SHA and setting Docker tags as immutable. Some software projects exist to scan for updates and draft PRs automatically for changes (I can't remember the name of the software, but it begins with R).
I use it for my home server and I love it because it takes care of Dockerfiles too and version changes are saved in git, which means that a rollback is just a matter of switching back to a previous commit and rebuilding your containers (in addition to restoring a backup of your Docker volumes).
I guess in "Make it as simple as possible, but not simpler", scp is too simple for you? But you also mentioned five different beasts to master, so I'm not so sure ;)
If all you need is to update some files, with minimal error handling, scp is fine! (Well, rsync is probably a better option now, but scp would still work.)
As you get progressively bigger, you can consider other options.
It varies a lot on what you're using. Postgres in particular is a bugbear because updating dockerized Postgres is hell at best and requires several backups in case you screw up.
OTOH a bog standard python server backend can probably keep running for quite a while on alpine:latest without issue because it's not very reliant on much in terms of strange package availability (maybe old python versions that get deprecated every once in a while).
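For the Postgres case, a minimal pre-update safety net might look like the following (container names and user are placeholders); a major-version jump still needs a proper dump/restore or pg_upgrade:

```
# logical backup of all databases before touching the image tag
docker exec -t my-postgres pg_dumpall -U postgres > backup.sql

# after starting the new major version against a fresh volume
docker exec -i my-postgres-new psql -U postgres < backup.sql
```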
That is a problem if you do not have strong controls on the publishing side. Ultimately the point of CI/CD is that changes get deployed mostly automatically; it doesn't matter whether that's accomplished by servers using the `:latest` tag, by some automation going and reconfiguring servers, or by something else.
Nice post, but I really dislike the idea of creating unit files for each container.
docker compose was a step forward; this is certainly a step back, and I will probably never use it no matter how many times someone says that this is the canonical way to do things, because it isn't: no one does that.
It seems almost as if someone is deliberately crippling open source container tooling just to increase sales of cloud provider container management tools. It most likely isn't true but I still wonder about it because I haven't seen actually good contributions and innovation in this area for some years now.
This is certainly a canonical way of doing things. Podman allows you to treat a containerised process much more like any other process. In this world systemd manages the interdependencies (including on things like the network).
You may think that other workflows are better, but there is a chunk of the engineering population that is happy with the tools they have for managing a set of bare processes on a machine.
The key difference is that systemd is system-wide orchestration, while docker-compose controls a specific set of containers connected in a specific way, and likely otherwise isolated from the system. The latter lets you "ship" entire configurations that work together, as one unit.
I think that existing tooling on *nix environments can work much better than docker compose to aggregate a set of containers and resources.
Setting resource limits using cgroups, isolating users/paths/networks using the capabilities of systemd and all together used from a common shell script seems more expressive and more versatile than using docker/docker compose and requiring an extra daemon to deal with everything.
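As a rough illustration (binary path and limits are invented), a plain service unit can cover most of what people reach for docker compose for:

```
[Service]
ExecStart=/usr/local/bin/myapp
DynamicUser=yes          # run as an ephemeral, unprivileged user
MemoryMax=512M           # cgroup memory limit
CPUQuota=50%             # cgroup CPU limit
PrivateTmp=yes           # isolated /tmp
ProtectSystem=strict     # read-only view of most of the filesystem
ReadWritePaths=/var/lib/myapp
RestrictAddressFamilies=AF_INET AF_INET6 AF_UNIX
```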
The thing with docker-compose.yml is that it is well known, well documented and relatively easy to learn and get running even for more complex configurations.
I would not want to go through that common shell script for every new project because it would probably look very different from project to project.
It seems like you are supposed to create a service for each container and then manage service dependencies, networking and other aspects some other way, all of which is trivial when using docker compose. Instead of having a single, simple docker-compose.yml file which you can add to the git repo, you end up with multiple services, network definitions and firewall rules. You then have to symlink those services to the right place, enable them and do god knows what else.
I am not scared of systemd and I use it extensively, but this does not sound like a convenient way to manage multiple containers.
Well, I use Ansible's podman module to set up and configure my containers, and then use it to generate the systemd units from that.
Before that I used Makefiles.
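Roughly like this, assuming the containers.podman collection (name, registry, and path are placeholders):

```
- name: Run the app container and emit a systemd unit for it
  containers.podman.podman_container:
    name: myapp
    image: registry.example.com/myapp:1.2.3
    state: started
    generate_systemd:
      path: /etc/systemd/system
      restart_policy: always
      new: true
```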
You're supposed to have podman generate the systemd units for you. With the recent update, your systemd unit will have a [Container] section which will be easy to configure manually if you want.
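That [Container] section is the Quadlet-style format; a hand-maintained `.container` file looks roughly like this (image, port, and path are examples):

```
# ~/.config/containers/systemd/myapp.container
[Container]
Image=docker.io/library/nginx:1.25
PublishPort=8080:80
AutoUpdate=registry

[Install]
WantedBy=default.target
```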
Podman’s unofficial answer to compose is “podman compose”. The official answer is kubernetes.
Podman supports generating systemd units for “Pods” (groups of containers).
The unit files podman generates are portable. Rather than version controlling your compose.yml, you would version control your unit defining your pod.
Using a makefile you could handle installing/updating/etc your unit file.
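That makefile can stay tiny; a sketch with made-up names and root-level paths:

```
UNIT := myapp-pod.service

install:
	install -m 0644 $(UNIT) /etc/systemd/system/$(UNIT)
	systemctl daemon-reload
	systemctl enable --now $(UNIT)

update: install
	systemctl restart $(UNIT)
```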
It is an equally convenient way to manage your containers, but it does require learning and understanding how everything fits together, compose is nice because it abstracts a lot.
`Requires=`? Systemd unit files are configuration files, not object code. While podman(1) can generate them for you, that doesn't mean that you should be treating the results as opaque artifacts. They're more like a project skeleton. You're meant to commit those, and hand-maintain them; and you're meant to read them in order to understand "how things currently are." Unit files are to systemd as YAML resource manifest files are to Kubernetes: the canonical description of the resource, that you can "do GitOps to."
> You then have to symlink those services to the right place, enable them
Those are the same thing. `systemctl enable` creates symlinks "to the right place". Also, it's not necessary to `systemctl enable` a service in order to merely `systemctl start` it; `systemctl enable`-ing a service specifically binds it to start automatically on startup (or on whatever other .target units it has specified), by putting an entry for that service in a directory named after that target. Just like how SysV runlevels work with rcN.d directories, but not limited to the system being at exactly one runlevel at a time.
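Concretely, for a unit with `WantedBy=multi-user.target`, enabling is roughly equivalent to creating the symlink yourself:

```
systemctl enable myapp.service
# ~ ln -s /etc/systemd/system/myapp.service \
#     /etc/systemd/system/multi-user.target.wants/myapp.service
```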
> Instead of having single simple docker-compose.yml file which you can add to the git repo
Why wouldn't you still have a docker-compose.yml file in the repo? I think you're misunderstanding the whole point here.
Docker-compose files are a development and testing technology. You run them to stand up an ephemeral deployment of a complex set of components, on your own workstation or maybe in CI, so you (or the E2E test harness) can poke at it. Docker-compose files are maintained by developers, to serve the needs of developers; and never reach the hands of ops staff, let alone customers.
Systemd unit files are a production deployment technology. They're something you create to include in a Deb or RPM package. These are maintained by release engineers, with bug reports on how they work submitted by your own ops staff, or even by the ops staff of customers who have either on-prem deployments of your software, or who have shell access to hosted dedicated enterprise deployments of your software. (Yes, that's a thing that happens.)
Importantly, systemd unit files are often created by someone external to the project, who was handed an opaque Docker image (or other kind of opaque binary), and must now make that software run robustly on their system, without being able to change it. Systemd is full of ways to accomplish that. Docker-compose, meanwhile, assumes that those sorts of things — e.g. tweaking sysctls, capturing PID after pre-forking, touching log files and setting their ownership at start, running as root for a bootstrap phase and then as non-root for steady-state, etc — are the domain of the contained software, and if the developer doesn't like them, they should just change their software. Which, if you're some SRE deploying `foocorp/foo-core:latest`, you really can't.
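A rough sketch of the kind of knobs that external packaging leans on (unit contents and paths invented for illustration): privileged setup like sysctl tweaks in `ExecStartPre=` steps prefixed with `+`, log directories created and owned via `LogsDirectory=`, and the steady-state process dropped to an unprivileged user:

```
[Service]
User=foo
Group=foo
LogsDirectory=foo                                          # creates /var/log/foo owned by foo:foo
ExecStartPre=+/usr/sbin/sysctl -w net.core.somaxconn=1024  # '+' runs this step as root
ExecStart=/usr/bin/podman run --rm --name foo-core docker.io/foocorp/foo-core:latest
```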
There are concerns that are only relevant to production. And there are concerns that are only relevant to development. Usually these concerns form two pretty orthogonal classes, where it doesn't even make sense to have a single file format that can express both. In fact, often the only concern shared by the two, is the set of image names and tags referenced by the files. Everything else is environment-type specific. So it makes a lot more sense to have two entirely distinct sets of config files — one to drive development tooling, and one to drive production tooling — than to try to bring them together. This means you need one more line in your bump-release-version script to write the new image tag to one more file. Other than that, you'll likely never find yourself updating both sets of files for a single concern. (At least, I personally never do.)
User-space container unit files kind of suck to use and edit; I have to keep notes to always add the socket too, else I can't see what's running. Just use k8s, it's easier in the long run.
So one thing that I didn't read here is how it works with environment variables.
Imagine you launch a container with FOOD=fries and DRINK=beer. The container doesn't set defaults for those variables, it does have one for DESSERT=ice-cream.
The container runs with FOOD=fries, DRINK=beer & DESSERT=ice-cream. An update comes along, and the container now has a default for DRINK=wine and switches to DESSERT=creme-brulee.
You update the container. What do you expect to happen with DRINK & DESSERT? DRINK remains beer, fine we chose that. But DESSERT also remains ice-cream, even though we didn't explicitly say that. The problem is that Docker (well, the surrounding tooling) cannot distinguish between your input and the container's default. They all get set in the `env` section of the container.
So you update the container and end up with FOOD=fries, DRINK=beer & DESSERT=ice-cream.
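You can see why once the container exists: inspecting it shows your `-e` values and the image defaults merged into one list, with nothing marking which is which (container name and output here are illustrative):

```
docker inspect -f '{{.Config.Env}}' mycontainer
# [FOOD=fries DRINK=beer DESSERT=ice-cream PATH=...]
```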
I get around this by using Ansible with the community Docker collection [0]. It takes the environment variables as a separate input, and when it recreates the container it only passes in the ones I set, so the others fall back to the image's defaults.
Many containers have things like PYTHON_VERSION=3.11.1 PYTHON_PIP_VERSION=22.3.1 PYTHON_SETUPTOOLS_VERSION=65.5.1 PYTHON_GET_PIP_URL=https://github.com/pypa/get-pip/raw/66030fa03382b4914d4c4d08... PYTHON_GET_PIP_SHA256=1e501cf004eac1b7eb1f97266d28f995ae835d30250bec7f8850562703067dc6 in there.
The container service unit will do `podman run -e FOOD=fries -e DRINK=beer whatever-image:latest`
podman-auto-update.service will pull the new `whatever-image:latest` image, restart the container service (which `stop`s and `rm -f`s the container), and `image prune -f` to remove the previous `whatever-image:latest`.
The restarted container service will continue to run `podman run -e FOOD=fries -e DRINK=beer whatever-image:latest`.
So you end up with `FOOD=fries DRINK=beer DESSERT=creme-brulee`
[1] https://github.com/containers/podman/issues/14065
[2] https://podman.io/getting-started/installation