Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
jar_params | `Option<Vec<String>>` | A list of parameters for jobs with Spark JAR tasks, for example `"jar_params": ["john doe", "35"]`. The parameters are used to invoke the main function of the main class specified in the Spark JAR task. If not specified upon `run-now`, it defaults to an empty list. jar_params cannot be specified in conjunction with notebook_params. The JSON representation of this field (for example `{"jar_params":["john doe","35"]}`) cannot exceed 10,000 bytes. Use Task parameter variables to set parameters containing information about job runs. | [optional]
notebook_params | `Option<::std::collections::HashMap<String, serde_json::Value>>` | A map from keys to values for jobs with notebook tasks, for example `"notebook_params": {"name": "john doe", "age": "35"}`. The map is passed to the notebook and is accessible through the `dbutils.widgets.get` function. If not specified upon `run-now`, the triggered run uses the job's base parameters. notebook_params cannot be specified in conjunction with jar_params. Use Task parameter variables to set parameters containing information about job runs. The JSON representation of this field (for example `{"notebook_params":{"name":"john doe","age":"35"}}`) cannot exceed 10,000 bytes. See the payload sketch below the table. | [optional]
python_params | `Option<Vec<String>>` | A list of parameters for jobs with Python tasks, for example `"python_params": ["john doe", "35"]`. The parameters are passed to the Python file as command-line parameters. If specified upon `run-now`, they overwrite the parameters specified in the job setting. The JSON representation of this field (for example `{"python_params":["john doe","35"]}`) cannot exceed 10,000 bytes. Use Task parameter variables to set parameters containing information about job runs. Important: these parameters accept only Latin characters (the ASCII character set); using non-ASCII characters returns an error. Examples of invalid, non-ASCII characters are Chinese characters, Japanese kanji, and emojis. See the ASCII check sketch below the table. | [optional]
spark_submit_params | `Option<Vec<String>>` | A list of parameters for jobs with a spark-submit task, for example `"spark_submit_params": ["--class", "org.apache.spark.examples.SparkPi"]`. The parameters are passed to the spark-submit script as command-line parameters. If specified upon `run-now`, they overwrite the parameters specified in the job setting. The JSON representation of this field (for example `{"spark_submit_params":["--class","org.apache.spark.examples.SparkPi"]}`) cannot exceed 10,000 bytes. Use Task parameter variables to set parameters containing information about job runs. Important: these parameters accept only Latin characters (the ASCII character set); using non-ASCII characters returns an error. Examples of invalid, non-ASCII characters are Chinese characters, Japanese kanji, and emojis. | [optional]
python_named_params | `Option<::std::collections::HashMap<String, serde_json::Value>>` | A map from keys to values for jobs with a Python wheel task, for example `"python_named_params": {"name": "task", "data": "dbfs:/path/to/data.json"}`. | [optional]
pipeline_params | `Option<crate::models::RunParametersPipelineParams>` |  | [optional]
sql_params | `Option<::std::collections::HashMap<String, serde_json::Value>>` | A map from keys to values for SQL tasks, for example `"sql_params": {"name": "john doe", "age": "35"}`. The SQL alert task does not support custom parameters. | [optional]
dbt_commands | `Option<Vec<String>>` | An array of commands to execute for jobs with a dbt task, for example `"dbt_commands": ["dbt deps", "dbt seed", "dbt run"]`. | [optional]
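
As a rough illustration of the shapes above, the sketch below assembles a run-now parameter payload with `serde_json` and checks the documented 10,000-byte limit. It builds raw JSON rather than the generated model type, so only the field names are taken verbatim from the table; the values and the choice of fields are illustrative.

```rust
use serde_json::json;

fn main() {
    // Hypothetical run-now parameters; the field shapes follow the table above.
    // notebook_params cannot be combined with jar_params, so only one is set.
    let params = json!({
        "notebook_params": { "name": "john doe", "age": "35" },
        "python_named_params": { "name": "task", "data": "dbfs:/path/to/data.json" },
        "dbt_commands": ["dbt deps", "dbt seed", "dbt run"]
    });

    // The JSON representation of each parameter field is capped at 10,000 bytes.
    let encoded = serde_json::to_string(&params["notebook_params"]).expect("serializable");
    assert!(encoded.len() <= 10_000, "notebook_params exceeds the 10,000-byte limit");

    println!("{params}");
}
```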
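
Because python_params and spark_submit_params reject non-ASCII input server-side, it can be worth failing fast on the client. The helper below is hypothetical (not part of this crate) and only demonstrates the check:

```rust
/// Hypothetical client-side guard: the API accepts only ASCII in
/// python_params and spark_submit_params, so reject anything else early.
fn ensure_ascii_params(params: &[String]) -> Result<(), String> {
    match params.iter().find(|p| !p.is_ascii()) {
        Some(bad) => Err(format!("non-ASCII parameter rejected by the API: {bad:?}")),
        None => Ok(()),
    }
}

fn main() {
    let ok = vec!["--class".to_string(), "org.apache.spark.examples.SparkPi".to_string()];
    assert!(ensure_ascii_params(&ok).is_ok());

    let bad = vec!["名前".to_string()]; // non-ASCII: would be rejected
    assert!(ensure_ascii_params(&bad).is_err());
}
```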