Cache misses, TLB flushes, and pipeline stalls would beg to differ with you.


Not really, no. For plenty of problems, the most efficient known algorithms will easily outperform a naive algorithm, even if running on drastically inferior hardware. 'Drastically' in this context can mean the slower approach taking billions of times longer.

Wikipedia gives an example. [0] Many courses on algorithms and/or complexity theory emphasise such examples in their introductions.

That said, I don't know why jhoechtl thought that was the moral of this blog post, which doesn't illustrate that point at all.

[0] https://en.wikipedia.org/wiki/Computational_complexity#Use_i...


> 'Drastically' in this context can mean the slower approach taking billions of times longer.

Standupmaths - "Someone improved my code by 40,832,277,770%": https://www.youtube.com/watch?v=c33AZBnRHks (OK, 41 billion percent is only 0.41 billion times better, but still.)


Mea culpa.

All I was trying to say is that beyond a point it's not _just_ the algorithm: you also need to pay attention to mechanical sympathy.

Do well-designed algorithms outperform naive algorithms? Almost always. Is designing the algorithm well sufficient? I'd think not.



