
1) Yes, Scala and the JVM are fast. If we could just use that to clean up a feed on a single box, that would be great. The problem is that calling the Spark API creates a lot of complexity for developers, and the runtime platform is super slow. 2) Yes, for the few feeds that are a TB, we need Spark. The platform really just loads from Hadoop, transforms, then saves back again.


a) You can easily run Spark jobs on a single box. Just set executors = 1.
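As a configuration sketch (not runnable without a Spark install; the memory setting and `clean_feed.py` job name are made up for illustration), a single-box submission looks something like:

```
# local[*] runs the driver and executors in one JVM using all cores,
# so no cluster manager is needed.
spark-submit \
  --master "local[*]" \
  --driver-memory 8g \
  clean_feed.py
```

The same job script can later be pointed at a YARN or standalone cluster by changing only the `--master` flag.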

b) The reason centralised clusters exist is that you can't have dozens or hundreds of data engineers/scientists all copying company data onto their laptops, causing support headaches when they can't install X library, and making productionising impossible. There are bigger concerns than your personal productivity.


> a) You can easily run Spark jobs on a single box. Just set executors = 1.

Sure, but why would you do this? Just using pandas or duckdb or even bash scripts makes your life much easier than having to deal with Spark.
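For the single-box case, a minimal pandas sketch of the load-transform-save pattern the thread describes (the feed contents, column names, and output path are hypothetical):

```python
import pandas as pd
from io import StringIO

# Hypothetical feed; in practice this would be a CSV on disk or in HDFS.
raw = StringIO("id,price\n1,10.0\n2,\n2,\n3,5.0\n")

df = pd.read_csv(raw)
cleaned = (
    df.dropna(subset=["price"])      # drop rows with no price
      .drop_duplicates(subset="id")  # keep one row per id
)
# cleaned.to_csv("feed_clean.csv", index=False)  # save back out
```

For feeds that fit in memory, this is the whole job: no cluster, no executors, no serialisation overhead.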


For when you need more executors without rewriting your logic.


Using a Python solution like Dask might actually be better, because you can work with all of the Python data frameworks and tools, but you can also easily scale it if you need it without having to step into the Spark world.


But Dask is orders of magnitude slower than Spark.

And you can still use Python data frameworks with Spark, so I'm not sure what you're gaining.


Re: b. This is a place where remote standard dev environments are a boon. I'm not going to give each dev a terabyte of RAM, but a terabyte to share with a reservation mechanism understanding that contention for the full resource is low? Yes, please.




