
Sure, if you know how much you allocate per minute (and don’t exceed your budget), you just buy enough RAM and it’s fine.


This will decrease performance because of reduced locality, and possibly increase jitter because of TLB misses.


Compared to what, running a garbage collector?


No, compared to not doing so many allocations that freeing them is time-consuming or expensive in the first place. If allocation is slowing a program down, there are way too many allocations, probably because they are too granular and sit in a hot loop. On top of that, it means everything is behind a pointer, and that lack of locality slows things down even further. The difference between allocating many millions of objects and chasing their pointers versus doing a single allocation of a vector and running through it can easily be 100x.
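A minimal sketch of that contrast in OCaml (the sizes and names are made up for illustration): the first function sums a million boxed records reached through list cells, the second sums the same values from a single contiguous float array, which OCaml stores unboxed.

    (* Many small allocations vs. one flat array. *)
    type boxed = { value : float }

    let n = 1_000_000

    (* Many small allocations: each record is a separate heap block,
       and the list cells add another layer of pointers to chase. *)
    let sum_boxed () =
      let items = List.init n (fun i -> { value = float_of_int i }) in
      List.fold_left (fun acc r -> acc +. r.value) 0.0 items

    (* One allocation: floats stored unboxed and contiguous in a float array. *)
    let sum_flat () =
      let items = Array.init n float_of_int in
      Array.fold_left ( +. ) 0.0 items

    let () =
      Printf.printf "boxed sum: %f\nflat sum:  %f\n" (sum_boxed ()) (sum_flat ())

Both compute the same result; the flat version does one allocation and walks memory linearly, which is where the large speedup comes from.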


Probably? Locality becomes fairly important at scale. That’s why there’s a strong preference for array-based data structures in high-performance code.

If I were them, I’d be using OCaml to build up functional “kernels” that can be run with zero allocation. Then you dispatch requests to those kernels and let the fast, modern generational GC clean up the minor cost of dispatching: most of the work happens in the zero-allocation kernels.
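A minimal sketch of that structure, with hypothetical names (scale_in_place, request, run_requests): the kernel mutates a pre-allocated float array in place and allocates essentially nothing in native code, while the dispatch layer is free to allocate small, short-lived values that the minor GC reclaims cheaply.

    (* Zero-allocation kernel: scales a float array in place. Float array
       reads/writes are unboxed, so the loop body allocates essentially
       nothing on the OCaml heap in native code. *)
    let scale_in_place buf factor =
      for i = 0 to Array.length buf - 1 do
        buf.(i) <- buf.(i) *. factor
      done

    type request =
      | Scale of float
      | Negate

    (* Dispatch layer: the request variants and list cells are cheap,
       short-lived minor-heap allocations; the bulk of the time is spent
       inside the allocation-free kernel loops. *)
    let run_requests buf requests =
      List.iter
        (fun req ->
          match req with
          | Scale f -> scale_in_place buf f
          | Negate -> scale_in_place buf (-1.0))
        requests

    let () =
      let buf = Array.make 1_000_000 1.0 in
      run_requests buf [ Scale 2.0; Negate; Scale 0.5 ];
      Printf.printf "buf.(0) = %f\n" buf.(0)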


(this comment was off topic, sorry)


Is this relevant to OCaml?


ha ha oops I got confused



