Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources

* You have likely started too many long-running Spark apps such as SparkShell, PySparkShell, PySpark (Jupyter/IPython notebook), Zeppelin notebook, or Spark Streaming. On a standalone cluster, each of these apps holds its cores for its entire lifetime, so new jobs find no free resources (see the core-capping sketch after this list).

* Below is a sample screenshot of the Spark Admin UI when the cluster is in this state

* You need to kill some of the long-running Spark apps through the Spark Admin UI, usually one of the PySpark (Jupyter/IPython) apps. If the app is your own notebook, you can also release its resources from inside the notebook, as shown in the sketch below.
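
If the app you want to stop is your own notebook, a minimal sketch like the one below can free its resources without touching the UI. It assumes a running PySpark notebook whose active SparkContext is reachable via `SparkContext.getOrCreate()`; stopping the context returns the app's cores to the cluster.

```python
# A minimal sketch: release a notebook app's cluster resources by stopping
# its SparkContext instead of killing the app in the Spark Admin UI.
# Assumes a running PySpark notebook; getOrCreate() returns the context
# the notebook is already using (it would create a fresh one otherwise).
from pyspark import SparkContext

sc = SparkContext.getOrCreate()  # grab the notebook's existing context
sc.stop()                        # frees this app's cores and executors
```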

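To keep the cluster from getting into this state again, each long-running app can be launched with a core cap so no single shell or notebook claims every core. Below is a sketch, assuming a standalone cluster; the master URL and app name are placeholders, and `spark.cores.max` is the standard Spark setting that limits the total cores one application may take.

```python
# A sketch: cap the cores a long-running app may claim on a standalone
# cluster so several shells/notebooks can coexist. The master URL and
# app name are placeholders for your own cluster and application.
from pyspark import SparkConf, SparkContext

conf = (SparkConf()
        .setMaster("spark://master-host:7077")  # placeholder master URL
        .setAppName("capped-notebook")          # placeholder app name
        .set("spark.cores.max", "2"))           # claim at most 2 cores
sc = SparkContext(conf=conf)
```

With every long-running app capped this way, new jobs can still find free cores even while the notebooks stay up.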