This is certainly a canonical way of doing things. Podman allows you to treat a containerised process much more like any other process. In this world systemd manages the interdependencies (including on things like the network).
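For the sake of illustration, here is a rough sketch of what that looks like: a plain systemd unit that runs a container via podman, so systemd handles restarts and ordering against the network like any other service. The unit name, container name, and image are placeholders.

```ini
# Hypothetical /etc/systemd/system/myapp.service
[Unit]
Description=My containerised app
Wants=network-online.target
After=network-online.target

[Service]
# --rm so a stale container does not block a restart
ExecStart=/usr/bin/podman run --rm --name myapp docker.io/library/nginx:alpine
ExecStop=/usr/bin/podman stop myapp
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

(Newer podman versions also offer Quadlet, which generates units like this from declarative `.container` files.)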
You may think that other workflows are better, but there is a chunk of the engineering population that is happy with the tools they have for managing a set of bare processes on a machine.
The key difference is that systemd does system-wide orchestration, while docker-compose controls a specific set of containers connected in a specific way, and likely otherwise isolated from the system. The latter lets you "ship" an entire configuration that works together, as one unit.
I think that existing tooling on nix environments can work much better than docker compose to aggregate a set of containers and resources.
Setting resource limits with cgroups, isolating users, paths, and networks through systemd's capabilities, and driving it all from a common shell script seems more expressive and more versatile than using docker/docker-compose, which requires an extra daemon to deal with everything.
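To make that concrete, a sketch of the systemd-native approach: the directives below are standard systemd service options (cgroup limits plus sandboxing), though the binary path is a placeholder.

```ini
# Fragment of a hypothetical service unit
[Service]
ExecStart=/usr/local/bin/myservice
# cgroup-backed resource limits
MemoryMax=512M
CPUQuota=50%
TasksMax=64
# user/path/network isolation without a container runtime
DynamicUser=yes
ProtectSystem=strict
PrivateTmp=yes
PrivateNetwork=yes
```

All of this runs under the init system you already have, with no extra daemon.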
The thing with docker-compose.yml is that it is well known, well documented, and relatively easy to learn and get running, even for more complex configurations.
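That familiarity is easy to see in a minimal example (service names and images here are illustrative): a web server plus its database, wired together and started with one `docker compose up`.

```yaml
# Minimal hypothetical docker-compose.yml
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"
    depends_on:
      - db
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: example
```

Anyone who has seen one compose file can read the next project's compose file.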
I would not want to go through that common shell script for every new project because it would probably look very different from project to project.