According to the Spark documentation (http://spark.apache.org/docs/latest/running-on-mesos.html#cluster-mode), Spark 2.1.0 can write recovery state to Zookeeper, which can then be read by a new Spark dispatcher during a dispatcher failover or when the service is restarted in Mesosphere.
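Per that documentation, the MesosClusterDispatcher picks up these settings from `SPARK_DAEMON_JAVA_OPTS`. As a rough sketch of what enabling it by hand would look like (the Zookeeper URL and dir below are placeholders, not values the package ships with):

```sh
# Sketch: enable Zookeeper-backed recovery for the dispatcher.
# The URL/dir values are placeholders for whatever the cluster exposes.
export SPARK_DAEMON_JAVA_OPTS="${SPARK_DAEMON_JAVA_OPTS} \
  -Dspark.deploy.recoveryMode=ZOOKEEPER \
  -Dspark.deploy.zookeeper.url=master.mesos:2181 \
  -Dspark.deploy.zookeeper.dir=/spark_mesos_dispatcher"
```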
Currently, if you restart a Spark dispatcher service in Mesosphere, the previous dispatcher's job history is lost. It would be nice to persist this data so that the dispatcher behaves like the History Server (which persists its data in HDFS or elsewhere) across a restart.
Since the Spark package already does some Zookeeper configuration (spark-build/docker/runit/service/spark/run, line 10 at a199353), this may only be a matter of exposing the spark.deploy.recoveryMode conf value through an env variable in the image and a config.json field.
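A hypothetical sketch of how the run script could pass that through, assuming made-up env variable names (SPARK_DEPLOY_RECOVERY_MODE, SPARK_DEPLOY_ZOOKEEPER_URL) that a new config.json field would map onto; the actual wiring in the image may differ:

```sh
# Hypothetical: only append recovery options when the (made-up) env vars
# are set by the scheduler from config.json.
if [ -n "${SPARK_DEPLOY_RECOVERY_MODE}" ]; then
    export SPARK_DAEMON_JAVA_OPTS="${SPARK_DAEMON_JAVA_OPTS} \
      -Dspark.deploy.recoveryMode=${SPARK_DEPLOY_RECOVERY_MODE} \
      -Dspark.deploy.zookeeper.url=${SPARK_DEPLOY_ZOOKEEPER_URL}"
fi
```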