
WASM JIT implementations tend to be quite a bit different from JavaScript JIT, so that's not really where the perf difference comes from.

First, WASM gets all the heavy AOT optimizations from the middle end of the compiler producing it. At runtime, WASM JIT doesn't start from program source, but from something that's already been through inlining, constant propagation, common subexpression elimination, loop optimizations, dead code elimination, etc. And WASM is already typed, so the JIT doesn't have to bother with inline caching, collecting type feedback, or supporting deoptimization.
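To make "already been through constant propagation, dead code elimination, etc." concrete, here's a toy constant-folding pass of the kind a WASM producer's middle end runs long before the browser ever sees the module. This is an illustrative sketch in Python, not any real toolchain's code; the tuple-based expression format is invented for the example.

```python
import operator

# Expressions are nested tuples like ("add", ("mul", 2, 3), "x"),
# where bare strings are variables and ints are literals.
OPS = {"add": operator.add, "sub": operator.sub, "mul": operator.mul}

def fold(expr):
    """Recursively fold constant subexpressions at compile time."""
    if not isinstance(expr, tuple):
        return expr  # leaf: a literal or a variable name
    op, lhs, rhs = expr
    lhs, rhs = fold(lhs), fold(rhs)
    if isinstance(lhs, int) and isinstance(rhs, int):
        return OPS[op](lhs, rhs)  # both sides known: compute now
    return (op, lhs, rhs)

# ("add", ("mul", 2, 3), "x") folds to ("add", 6, "x"):
print(fold(("add", ("mul", 2, 3), "x")))
```

The point is that by the time the JIT sees the module, work like this is already done, so the runtime compiler can skip it entirely.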

Because of that, the only really beneficial work left to do is the back end (i.e. arch-specific) part of the compiler: basically, register allocation and instruction selection. WASM JIT compilers don't bother trying to find hot loops or functions before optimizing. Instead, they do a quick "streaming" or "baseline" codegen pass for fast startup, then eagerly run a smarter tier over the whole module and hot-swap it in as soon as possible. (See e.g. https://hacks.mozilla.org/2018/01/making-webassembly-even-fa...)
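The tiering strategy above can be sketched in miniature: every function starts on a cheap baseline version, and an optimized version is swapped in later without the caller noticing. This is a toy model in Python (real engines do the swap atomically from a background compilation thread; the class and names here are invented for illustration).

```python
class TieredFunction:
    """Toy model of tiered execution with hot-swapping."""

    def __init__(self, baseline, make_optimized):
        self.impl = baseline                # cheap code, available immediately
        self._make_optimized = make_optimized

    def promote(self):
        # A real engine compiles the optimized tier concurrently and
        # swaps it in when ready; here we trigger the swap by hand.
        self.impl = self._make_optimized()

    def __call__(self, *args):
        return self.impl(*args)

fn = TieredFunction(lambda x: x + x,            # baseline tier
                    lambda: (lambda x: x * 2))  # optimized tier, built lazily
print(fn(21))   # baseline tier answers right away -> 42
fn.promote()    # smarter code hot-swapped in
print(fn(21))   # same observable behavior -> 42
```

The key property is that both tiers compute the same result, so the swap is invisible to callers; only startup latency and steady-state speed differ.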

The perf difference vs native comes instead from the sandboxing itself: memory access is bounds-checked, support for threads and SIMD is limited (for now), talking to the browser has some overhead from crossing the boundary into JavaScript (though this overhead will go down over time as WASM evolves), etc.
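To show what the bounds-checking cost looks like semantically, here's a toy model of WASM linear memory in Python. Real engines usually avoid the explicit branch in the common case (e.g. via guard pages), but the required behavior is a check per access that traps on out-of-bounds; the class name is invented for the example, while the 64 KiB page size is WASM's actual page size.

```python
class LinearMemory:
    """Toy model of WASM linear memory with per-access bounds checks."""

    PAGE_SIZE = 65536  # WASM pages are 64 KiB

    def __init__(self, pages):
        self.data = bytearray(pages * self.PAGE_SIZE)

    def load_u8(self, addr):
        if addr >= len(self.data):  # the check native code doesn't need
            raise RuntimeError("out of bounds memory access")  # trap
        return self.data[addr]

mem = LinearMemory(pages=1)
mem.data[10] = 99
print(mem.load_u8(10))     # in bounds -> 99
try:
    mem.load_u8(70000)     # past the single 64 KiB page: traps
except RuntimeError as e:
    print(e)
```

Native code just dereferences the pointer; the sandbox has to guarantee the trap, one way or another, and that guarantee is part of the remaining gap.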


