Hacker News

> They are in the unique position to enable local AI tools

Nothing special about Apple with regards to AI. The M2 beats x86 in power efficiency, but it isn't significantly better than other ARM processors.



Some workloads on M1 absolutely smash other ARM processors, in part because of the M1's special-purpose hardware. In particular, the undocumented AMX coprocessor is really nice for distance matrix calculations, vector search, embeddings, etc.
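To make the distance-matrix point concrete, here's a minimal sketch (my own illustration, not from whisper.cpp) of the standard trick of computing pairwise squared Euclidean distances via ||x-y||^2 = ||x||^2 + ||y||^2 - 2 x.y. The x @ y.T term is a plain GEMM, which is exactly the kind of call Accelerate can route to the AMX units on M1:

```python
import numpy as np

def sq_dist_matrix(x, y):
    # Pairwise squared Euclidean distances between rows of x and rows of y,
    # expressed so that the heavy lifting is one matrix multiply (a GEMM),
    # which the linked BLAS (Accelerate on macOS arm64) can accelerate.
    xx = (x * x).sum(axis=1)[:, None]   # ||x_i||^2, column vector
    yy = (y * y).sum(axis=1)[None, :]   # ||y_j||^2, row vector
    # Clamp at zero: floating-point cancellation can make entries slightly negative.
    return np.maximum(xx + yy - 2.0 * (x @ y.T), 0.0)

x = np.random.rand(4, 8).astype(np.float32)
d = sq_dist_matrix(x, x)
```

The same shape of computation shows up in vector search and embedding lookups, which is why a fast GEMM path matters for those workloads.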

Non-scientific example: for inference, whisper.cpp links against Accelerate.framework to do fast matrix multiplies. On M1, one configuration gets ~6x realtime speed, but on a very beefy AWS Graviton processor, the same configuration only achieves 0.5x realtime, even after choosing an optimal thread count and even when linking against a NEON-optimized BLAS. (Maybe I'm doing something wrong, though.)
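A quick way to sanity-check this kind of gap without whisper.cpp at all is to probe raw GEMM throughput through whatever BLAS is linked. This is my own rough sketch, not the benchmark above: NumPy dispatches matmul to the linked BLAS (Accelerate on macOS arm64, typically OpenBLAS or MKL elsewhere), and the 1536x1536 size is an arbitrary stand-in, not whisper's actual shapes:

```python
import time
import numpy as np

# Rough single-precision GEMM throughput probe. Achieved GFLOP/s is a
# decent proxy for the matrix-multiply speed whisper.cpp would see,
# since its hot loop is dominated by sgemm-style calls.
n = 1536
a = np.random.rand(n, n).astype(np.float32)
b = np.random.rand(n, n).astype(np.float32)

a @ b  # warm-up so the first timed run doesn't include one-time setup

reps = 10
t0 = time.perf_counter()
for _ in range(reps):
    a @ b
dt = (time.perf_counter() - t0) / reps

# A dense n x n GEMM costs ~2*n^3 floating-point operations.
gflops = 2 * n**3 / dt / 1e9
print(f"{gflops:.1f} GFLOP/s per {n}x{n} sgemm")
```

Running this on both machines would show whether the M1-vs-Graviton gap is really in the BLAS path or somewhere else in the pipeline.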


I think the parent is referring to the Apple Neural Engine in Apple Silicon, which isn't widely used today (as far as I know).





