
How to set the resources of the Spark cluster when submitting a Spark job? #27

Open
guoyuhaoaaa opened this issue Feb 19, 2019 · 2 comments

@guoyuhaoaaa

When I submit a Spark job like this:

```
spark-submit --class org.apache.spark.examples.SparkPi \
  --driver-memory 1g --executor-memory 1g --executor-cores 1 \
  --queue thequeue examples/target/scala-2.11/jars/spark-examples*.jar 10
```

what is the API in spark-jobs-rest-client for the `--executor-memory 1g --executor-cores 1` settings? Can you give me an example?

@ywilkof
Owner

ywilkof commented Feb 20, 2019

Hi, pass the following parameters in the environment map that the client accepts at creation time: spark.executor.memory and spark.executor.cores.
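A minimal sketch of that suggestion. The builder and submit calls follow the project README; the `environmentVariables(...)` method name, the import path, and all host/path values are assumptions based on the maintainer's comment, not a confirmed API:

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

import com.github.ywilkof.sparkrestclient.SparkRestClient;

public class SubmitWithResources {
    public static void main(String[] args) throws Exception {
        // Spark properties equivalent to "--executor-memory 1g --executor-cores 1".
        final Map<String, String> environment = new HashMap<>();
        environment.put("spark.executor.memory", "1g");
        environment.put("spark.executor.cores", "1");

        // Per the maintainer, the map is supplied when the client is created;
        // environmentVariables(...) is an assumed builder method name.
        final SparkRestClient client = SparkRestClient.builder()
                .masterHost("localhost")
                .sparkVersion("2.4.0")
                .environmentVariables(environment)
                .build();

        // Submit the SparkPi example, mirroring the spark-submit command above.
        final String submissionId = client.prepareJobSubmit()
                .appName("SparkPi")
                .appResource("file:/path/on/worker/spark-examples.jar")
                .mainClass("org.apache.spark.examples.SparkPi")
                .appArgs(Arrays.asList("10"))
                .submit();

        System.out.println("Submitted: " + submissionId);
    }
}
```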

@guoyuhaoaaa
Author

Thanks for your response, I have solved that problem. Now I have a new question: how do I get the println output of the driver when running a Spark job? Is there something like this?

```
JobStatusResponse jobStatus = client
    .checkJobStatus()
    .withSubmissionIdFullResponse(submissionId);
```

Looking forward to your response.
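(This question went unanswered in the thread. For context, the full-response status call the asker refers to would look roughly as below. Note that Spark's standalone REST API status endpoint reports the driver's state and a server message, not the driver's stdout; println output lands in the driver's stdout file under the worker's work directory, also linked from the worker web UI, and is not fetched through this client. The import path and wrapper here are a sketch, not a confirmed API.)

```java
import com.github.ywilkof.sparkrestclient.JobStatusResponse;
import com.github.ywilkof.sparkrestclient.SparkRestClient;

public class DriverStatus {
    // Fetches the raw status response for a submission. The response carries
    // fields such as the driver state and a message; it does not carry the
    // driver's stdout, which must be read on the worker that ran the driver.
    static JobStatusResponse fullStatus(SparkRestClient client, String submissionId)
            throws Exception {
        return client.checkJobStatus()
                .withSubmissionIdFullResponse(submissionId);
    }
}
```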
