> I’m pretty shocked that the Bazel workflow involves downloading Docker base images from external URLs. That seems very un-Bazel-like! That belongs in the monorepo for sure.
Not every dependency in Bazel requires you to "first invent the universe" locally. There are lots of examples of this: toolchains, git_repository, http_archive rules, and so on. As long as they're checksummed (as they are in this case) so that you can still output a reproducible artifact, I don't see the problem.
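For anyone who hasn't seen one, here's roughly what a checksummed external dependency looks like; the name, URL, and digest below are placeholders:

```starlark
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")

http_archive(
    name = "some_lib",  # hypothetical dependency
    urls = ["https://example.com/some_lib-1.2.3.tar.gz"],
    strip_prefix = "some_lib-1.2.3",
    # Placeholder digest: Bazel verifies the download against this and
    # fails the build if the archive doesn't match, which is what keeps
    # the resulting artifact reproducible.
    sha256 = "0000000000000000000000000000000000000000000000000000000000000000",
)
```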
Everything belongs in version control imho. You should be able to clone the repo, yank the network cable, and build.
I suppose a URL with a checksum is kinda sorta equivalent. But the article adds a bunch of new layers and complexity to avoid “downloading Cuda for the 4th time this week”. A whole lot of problems simply don’t exist if the binary blobs live directly in the monorepo and a local blob store.
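For what it's worth, Bazel already ships something close to that local blob store: the repository cache. A sketch of wiring it up in a `.bazelrc` (the path is arbitrary, and the `common` directive assumes a reasonably recent Bazel):

```
# Share one download cache across every workspace on the machine, so an
# archive with a given checksum (CUDA included) is fetched at most once.
common --repository_cache=~/.cache/bazel/repo-cache
```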
It’s hard to describe the magic of a version control system that actually controls the version of all your dependencies.
Webdev is notorious for old projects being hard to compile. It should be trivial to build and run a 10+ year old project.
If you did that, Bazel would work a lot better. Most of Bazel's complexity exists because it was originally an export of Google's internal tool, Blaze, and the roughest pain points in its ergonomics have always been pulling in external dependencies, because that just wasn't something Google ever did. All their dependencies were vendored into the Google3 source tree.
WORKSPACE files came into being so that you wouldn't have to do that, and now we're moving to MODULE.bazel files (Bzlmod) instead because they do the same things much more cleanly.
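A minimal MODULE.bazel for comparison; the module name and version numbers here are illustrative, not pinned recommendations:

```starlark
module(name = "my_project", version = "0.1.0")

# Dependencies are resolved from a registry (the Bazel Central Registry
# by default) and recorded in an accompanying lockfile.
bazel_dep(name = "platforms", version = "0.0.10")
bazel_dep(name = "rules_go", version = "0.50.1")
```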
That being said, Bazel will absolutely build fully offline if you add one step between cloning the repo and yanking the cable: run `bazel fetch //...` to pre-download every external dependency. There are some caveats depending on how your toolchains are set up, and of course the possibility that every mirror of a remote dependency has been deleted.
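Concretely, something like this (the repo URL is hypothetical):

```sh
git clone https://github.com/example/monorepo.git  # hypothetical repo
cd monorepo
bazel fetch //...   # pre-downloads every external dependency
# ...yank the network cable...
bazel build --nofetch //...   # --nofetch makes Bazel error out rather than touch the network
```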
Making heavy use of remote caching and remote execution was one of the original design goals of Blaze (Google's internal version), IIRC, first and foremost to reduce build times. So it's kind of the opposite of what you're suggesting. That said, fully air-gapped builds can still be achieved if you host all of those cache blobs locally.
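A sketch of what hosting those blobs yourself looks like in a `.bazelrc`; the endpoint is made up:

```
# Remote cache you host inside the air gap (hypothetical endpoint).
build --remote_cache=grpc://build-cache.internal:9092
# Or keep it purely local with a disk cache.
build --disk_cache=~/.cache/bazel/disk-cache
```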
> So kind of the opposite of what you're suggesting.
I don’t think they’re opposites. It seems orthogonal to me.
If you have a bunch of remote execution workers, then ideally they sit idle holding a full checkout (via a shallow clone) of the repo. There should be no reason to reset them between jobs, and definitely no reason to constantly refetch content.