Discussion about this post

Akshay Baura:

I don't fully follow when you say: "Unlike systems like Spark or Daft that can distribute work at the query execution level (breaking down individual operations like joins or aggregations), smallpond operates at a higher level. It distributes entire partitions to workers, and each worker processes its entire partition using DuckDB."

In Spark too, a partition is processed by a core, right?
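
(To make the quoted distinction concrete, here is a minimal sketch of smallpond's partition-level model, loosely following the examples in its README; the exact method names and signatures are assumptions and may differ. The point is that the whole SQL statement runs per partition inside a local DuckDB instance, rather than being broken into distributed operators by a query planner as Spark would do.)

```python
# Minimal sketch of smallpond's partition-level execution model
# (method names follow smallpond's published examples; details may differ).
import smallpond

sp = smallpond.init()

# The dataset is explicitly split into 3 hash partitions up front.
df = sp.read_parquet("prices.parquet")
df = df.repartition(3, hash_by="ticker")

# Each worker receives one whole partition and runs this SQL on it with its
# own local DuckDB instance; the query is not decomposed into distributed
# joins/aggregations the way Spark's planner would decompose it.
df = df.partial_sql(
    "SELECT ticker, min(price) AS lo, max(price) AS hi FROM {0} GROUP BY ticker"
)

df.write_parquet("output/")
```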
