Second Hurdle of My Spark Adventure: Spark
1) It is important to correctly visualize the anatomy of a Spark cluster. The key components are: the driver (which runs your application and schedules tasks), the cluster manager (called the master in standalone mode, and a separate process from the driver), and the workers (which spawn executors to run tasks).
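One way to actually see this anatomy is to ask the standalone master for its view of the cluster. A minimal sketch, assuming the master's web UI is on the default port 8080 and using a hypothetical hostname (`my-mac.local`); the `/json` endpoint of the standalone master reports its status and every registered worker:

```python
import json
from urllib.request import urlopen

# The standalone master's web UI (default port 8080) exposes cluster state
# as JSON at /json: the master's status plus each registered worker.
# "my-mac.local" is a hypothetical hostname for illustration.
with urlopen("http://my-mac.local:8080/json") as resp:
    state = json.load(resp)

print("master status:", state["status"])
for worker in state["workers"]:
    # Each worker advertises the resources it can offer to executors.
    print(f'{worker["host"]}: {worker["cores"]} cores, {worker["memory"]} MB, {worker["state"]}')
```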

There are multiple ways to configure a cluster. I run a standalone cluster, which means I don't use a separate cluster manager such as Hadoop's YARN. In my case, by making my Mac the master node I've made it the cluster manager; the executors spin up inside Docker containers on the Raspberry Pis.
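To tie it together, here's a minimal PySpark sketch of what that setup looks like from the driver's side. Assumptions: `pyspark` is installed on the Mac, the master is reachable at a hypothetical `spark://my-mac.local:7077` (7077 is the standalone master's default port), and the executor memory setting is just a placeholder sized for a Pi:

```python
from pyspark.sql import SparkSession

# The driver runs wherever this script runs (here, the Mac). The standalone
# master at spark://... then schedules executors on the Pi workers.
spark = (
    SparkSession.builder
    .appName("pi-cluster-smoke-test")
    .master("spark://my-mac.local:7077")      # hypothetical standalone master URL
    .config("spark.executor.memory", "512m")  # keep executors small enough for a Raspberry Pi
    .getOrCreate()
)

# A trivial job: if this prints a sum, executors on the Pis ran real tasks.
print(spark.sparkContext.parallelize(range(1000)).sum())
spark.stop()
```

Note the division of labor this makes concrete: the driver process lives on the Mac alongside the master, while the actual task execution happens inside the Docker containers on the Pis.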