Trigger A New Job Run
Arguments
- job_id
The canonical identifier of the job.
- jar_params
Named list. Parameters are used to invoke the main function of the main class specified in the Spark JAR task. If not specified upon run-now, it defaults to an empty list. jar_params cannot be specified in conjunction with notebook_params.
- notebook_params
Named list. Parameters are passed to the notebook and are accessible through the dbutils.widgets.get function. If not specified upon run-now, the triggered run uses the job's base parameters.
- python_params
Named list. Parameters are passed to the Python file as command-line parameters. If specified upon run-now, they overwrite the parameters specified in the job settings.
- spark_submit_params
Named list. Parameters are passed to the spark-submit script as command-line parameters. If specified upon run-now, they overwrite the parameters specified in the job settings.
- host
Databricks workspace URL, defaults to calling db_host().
- token
Databricks workspace token, defaults to calling db_token().
- perform_request
If TRUE (default) the request is performed; if FALSE the httr2 request is returned without being performed.
Details
*_params parameters cannot exceed 10,000 bytes when serialized to JSON. jar_params and notebook_params are mutually exclusive.
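As a sketch of how these arguments fit together, assuming this page documents a function such as db_jobs_run_now() from the same Jobs API family (the job ID and parameter names below are illustrative, not real values):

```r
library(brickster)

# Trigger a run of a notebook job; notebook_params values become
# available inside the notebook via dbutils.widgets.get().
run <- db_jobs_run_now(
  job_id = 123,
  notebook_params = list(env = "staging", run_date = "2024-01-01")
)

# With perform_request = FALSE the httr2 request object is returned
# without being sent, which is useful for inspecting the payload.
req <- db_jobs_run_now(
  job_id = 123,
  notebook_params = list(env = "staging"),
  perform_request = FALSE
)
```

Note that jar_params would not be supplied alongside notebook_params here, since the two are mutually exclusive.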
See also
Other Jobs API: db_jobs_create(), db_jobs_delete(), db_jobs_get(), db_jobs_list(), db_jobs_reset(), db_jobs_runs_cancel(), db_jobs_runs_delete(), db_jobs_runs_export(), db_jobs_runs_get(), db_jobs_runs_get_output(), db_jobs_runs_list(), db_jobs_runs_submit(), db_jobs_update()