That seems correct, but it also doesn’t account for managed languages with runtimes like JavaScript or Java or .NET, which probably have a lot of interesting runtime info they could use to influence caching behavior. There’s an amount of “who caches the cacher” if you go down this path (who manages cache lines for the V8 native code that is in turn managing cache lines for jitted JavaScript code), but it still seems like there is opportunity there?
Sure, but this is based on a fundamental trust in the government's ability to spend money effectively. The ineffective spending has been in the news way more than the effective spending, so some people take this to mean all of the spending is ineffective.
I don’t know how to square this skepticism of government against the very vocal “patriotism” coming from the Trump camp, but humans can contain multitudes, I guess?
It's a simple question of economics and observation.
In a free marketplace, when a product, service or company is no longer useful...it dies. This creates a natural incentive to constantly improve, operate more efficiently or expand into new areas where it can create value.
With government spending, this doesn't happen because there's no incentive for it to happen. Programs are created and then they grow, perpetually, forever.
My goodness, I still remember Bill Clinton proudly showing a balanced budget. I remember George Bush Jr running with one of his biggest campaign points around fixing Social Security.
How we got from that era of energy for fiscal responsibility to $39 trillion in debt is...maddening.
In C#, you would normally implement rules like this with a custom Roslyn Analyzer or with https://github.com/dotnet/roslyn/blob/main/src/RoslynAnalyze.... It’s fair to say C# projects tend to have smaller denylists than mature C++ projects, but banned APIs definitely exist in mature C# projects.
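For anyone who hasn't seen one: if the analyzer in question is the banned-API style of analyzer, it is typically configured with a plain text file of documentation-comment IDs plus an optional message. The entries below are illustrative, not from any real project's denylist:

```
T:System.Console;Use the injected ILogger abstraction instead
P:System.DateTime.Now;Use DateTimeOffset.UtcNow for unambiguous timestamps
M:System.String.ToLower;Use ToLowerInvariant to avoid culture-dependent bugs
```

Each line is `{DocumentationCommentId};{optional message}`, and the analyzer flags any use of the listed type, property, or method at build time.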
I didn’t know this, but there are also security downsides to being ahead of chrome — namely, all chrome releases take dependencies on “known good” v8 release versions which have at least passed normal tests and minimal fuzzing, but also v8 releases go through much more public review and fuzzing by the time they reach chrome stable channel. I expect if you want to be as secure as possible, you’d want to stay aligned with “whatever v8 is in chrome stable.”
Cloudflare Workers often rolls out V8 security patches to production before Chrome itself does. That's different from beta vs. stable channel. When there is a security patch, generally all branches receive the patch at about the same time.
As for beta vs. stable, Cloudflare Workers is generally somewhere in between. Every 6 weeks, Chrome and V8's dev branch is promoted to beta, the beta branch to stable, and the old stable becomes obsolete. Somewhere during the six weeks between versions, Cloudflare Workers moves from stable to beta. This has to happen before the stable version becomes obsolete, otherwise Workers would stop receiving security updates. Generally there is some work involved in doing the upgrade, so it's not good to leave it to the last moment. Typically Workers will update from stable to beta somewhere mid-to-late in the cycle, and then that beta version subsequently becomes stable shortly thereafter.
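The promotion cadence described above can be sketched as a simple rotation. The version numbers here are made up for illustration; only the dev → beta → stable flow reflects the actual process:

```python
# Hypothetical model of the ~6-week Chrome/V8 channel promotion cycle.
# Version numbers are illustrative placeholders, not real release data.

def promote(channels: dict) -> dict:
    """One release cycle: dev moves to beta, beta to stable,
    the old stable is retired, and a new dev branch is cut."""
    return {
        "stable": channels["beta"],
        "beta": channels["dev"],
        "dev": channels["dev"] + 1,
    }

channels = {"dev": 122, "beta": 121, "stable": 120}
channels = promote(channels)
print(channels)  # {'stable': 121, 'beta': 122, 'dev': 123}
```

A deployment tracking "somewhere between stable and beta" has to jump to the incoming beta before its current version falls off the right-hand side of this rotation.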
Thanks for the clarification on CF's V8 patching strategy, that 24h turnaround is impressive and exactly why I point people to Cloudflare when they need production-grade multi-tenant security.
OpenWorkers is really aimed at a different use case: running your own code on your own infra, where the threat model is simpler. Think internal tools, compliance-constrained environments, or developers who just want the Workers DX without the vendor dependency.
Appreciate the work you and the team have done on Workers, it's been the inspiration for this project for years.
If people aren't getting their work done, then they should be having discussions with their manager that eventually lead to a PIP or firing if it's not resolved. If they are getting their work done... who cares if I do a "non work thing" at a "work time"?
In an agile world with an infinite backlog there's no such thing as being done with work. If you could be working on more work things during work time, they probably want that. Maybe you don't like that but c'mon now. It's clearly what they're after.
Then maybe it doesn't need to be done on a strict work/non-work schedule everyday? If one is an hourly employee, then sure, they should be doing work things when on the clock... but if they are salaried, part of that is not having to clock in and out to switch between work and non-work tasks, and not being a strict work/non-work schedule.
And also, a decent chunk of alcohol consumption must be solo? I'd bet alcohol is broadly more social, but I would also wonder if that would change if more public gathering places served weed in some form.
There are a ton of tools and custom logic used by/with/for the GN ecosystem in Chromium that I imagine would be difficult to port.
This tool is substantially less complex than Bazel, and it is not a reimplementation of Bazel. Ninja's whole goal in life is to be a very fast local executor of the command DAG described by a ninja file, and Siso's only goal is to be a remote executor of that DAG.
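For context, the "command DAG" in a ninja file is just rules plus build statements whose inputs and outputs form the graph edges. A minimal hand-written example (GN would normally generate something like this; file names and flags here are made up):

```
rule cc
  command = cc -c $in -o $out
rule link
  command = cc $in -o $out

build foo.o: cc foo.c
build bar.o: cc bar.c
build app: link foo.o bar.o
```

Ninja runs these commands locally in dependency order and in parallel; a remote executor like Siso can take the same graph and ship each command to a remote backend instead.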
This is overall less complex than their first stabs at remote execution, which involved standing up a proxy server locally and wrapping all ninja commands in a "run a command locally which forwards it to the proxy server which forwards it to the remote backend" script.
A handful of other areas are configured using Starlark in chromium. This particular use is in a very different capacity than Bazel - the Bazel equivalent in chromium is GN, and I have not seen any signs that GN will be replaced any time soon.
GN at least used to generate Ninja files. So I suppose now it will be generating Siso files?
edit: asked and answered, Siso is a "drop-in" replacement for Ninja, so presumably it can read .ninja files, and GN probably didn't need to change much to accommodate it.
Saying that Microsoft is "rewriting Windows in Rust" suggests you might not be as informed as you think... Very specific components with a history of performance or security issues are getting ported, in a very uncoordinated effort. Windows will be primarily C, C++, and C# for a very long time to come.
Also, those are two different skill sets. Writing critical sections of an OS is not the same thing as writing a compiler. And I completely agree: Windows, from what I have read, is being deliberate and isn't doing a total rewrite at all. Thanks for chiming in so I could type less.