Only semi-relevant, but there's also the fact that compilers for lower-level languages can auto-optimize more deeply -- though that's mostly my intuition (would love to get learnt if I'm wrong).
For example, I'd expect that Rust (or rustc I guess) can auto-vectorize more than Node/Deno/etc.
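To make that intuition concrete, here's a minimal TypeScript sketch of the kind of tight numeric kernel I have in mind (names are just illustrative): an AOT compiler like rustc, via LLVM, will typically emit SIMD for the equivalent Rust loop at build time, while a JS engine first has to observe at runtime that it's really dealing with Float64Arrays, and even then does little in the way of SIMD auto-vectorization.

```typescript
// Illustrative only: a*x + y over typed arrays, a classic auto-vectorization target.
// rustc/clang will usually vectorize the equivalent loop ahead of time;
// V8/JSC will JIT a decent scalar loop once the element types are known,
// but generally won't turn it into SIMD.
function saxpy(a: number, x: Float64Array, y: Float64Array): Float64Array {
  const out = new Float64Array(x.length);
  for (let i = 0; i < x.length; i++) {
    out[i] = a * x[i] + y[i];
  }
  return out;
}
```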
Ahead of Time, perhaps. (Of course the benefit of AOT is that you can take all the time in the world and only slow down the developer cycle without impacting users. In theory you can always build a slower AOT compiler with more optimizations, even/especially for higher-level languages like JS. You can almost always trade more build time and executable size for more runtime optimization. High-level languages can almost always use Profile-Guided Optimization (PGO) to recover most of what low-level languages get from explicit low-level data types.)
A benefit of a good JIT, though, is that you can converge on such optimizations over time based on practical usage information. You trade less-optimized startup paths for Profile-Guided Optimization of the live, running application, in real time, based on the real data structures flowing through it.
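As a rough sketch of what "real data structures" means in practice (names here are just illustrative): a JS JIT watches the shapes of the objects actually flowing through a hot function and speculates on them, which is effectively PGO happening live.

```typescript
// Sketch: after enough calls with the same object shape ({ x, y } of numbers),
// the JIT records that type feedback and compiles a specialized fast path.
// Pass a differently shaped object later and it deoptimizes back to the
// generic, slower path.
interface Point { x: number; y: number }

function lengthSquared(p: Point): number {
  return p.x * p.x + p.y * p.y;
}

let total = 0;
for (let i = 0; i < 1_000_000; i++) {
  total += lengthSquared({ x: i, y: i + 1 }); // consistent shape -> optimized code
}
console.log(total);
```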
JS has some incredible JITs, very well optimized for browser tab life-cycles. They can eventually optimize things at a low level far further than you might expect. The "eventually" of a JIT is of course the rough edge of that trade-off, but it too is well matched to the browser tab life-cycle: you generally have an interesting balance of short-lived tabs where performance isn't critical and minimizing download size is worth it, versus short-lived tabs you return to often, where compiled output can be cached so each new visit is slightly faster than the last, versus a few long-lived tabs where performance matters and which generally have plenty of time to run and optimize.
This is why Node/Deno/et al excel at long-running server applications/services (including `--watch` modes), while "one-off"/"single run" build tools can be a bit of a worst case: they may not give the JIT enough time or warning to fully optimize things, especially when they start with no previous compilation cache every time. (The article points out that such a cache is something you can now turn on in Node.)
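For reference, and assuming Node 22.1+ (check the docs for your version), the persistent compile cache can be turned on either from code or via an environment variable; a minimal sketch:

```typescript
// Enable Node's on-disk module compile cache so repeated short-lived runs
// (build tools, one-off scripts) can reuse V8's compiled output instead of
// starting cold every time. Available since Node 22.1.
import { enableCompileCache } from "node:module";

enableCompileCache(); // optionally pass a cache directory

// Or, without changing any code:
//   NODE_COMPILE_CACHE=/tmp/node-cache node build-tool.js
```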