Trigger A New Job Run

Usage

db_jobs_run_now(
  job_id,
  jar_params = list(),
  notebook_params = list(),
  python_params = list(),
  spark_submit_params = list(),
  host = db_host(),
  token = db_token(),
  perform_request = TRUE
)

Arguments

job_id

The canonical identifier of the job.

jar_params

Named list. Parameters are used to invoke the main function of the main class specified in the Spark JAR task. If not specified upon run-now, it defaults to an empty list. jar_params cannot be specified in conjunction with notebook_params.

notebook_params

Named list. Parameters are passed to the notebook and are accessible through the dbutils.widgets.get function. If not specified upon run-now, the triggered run uses the job's base parameters.

python_params

Named list. Parameters are passed to the Python file as command-line parameters. If specified upon run-now, they overwrite the parameters specified in the job setting.

spark_submit_params

Named list. Parameters are passed to the spark-submit script as command-line parameters. If specified upon run-now, they overwrite the parameters specified in the job setting.

host

Databricks workspace URL, defaults to calling db_host().

token

Databricks workspace token, defaults to calling db_token().

perform_request

If TRUE (default), the request is performed; if FALSE, the httr2 request is returned without being performed.

Details

  • *_params parameters cannot exceed 10,000 bytes when serialized to JSON.

  • jar_params and notebook_params are mutually exclusive.
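Examples

A minimal sketch of triggering a run. The job ID and the "date" parameter name are illustrative values, not part of any real workspace; a notebook task would read the value via dbutils.widgets.get("date"):

# Trigger a run of a job, passing a parameter to its notebook task
# (job_id 123 and the "date" widget name are hypothetical)
db_jobs_run_now(
  job_id = 123,
  notebook_params = list(date = "2023-01-01")
)

# Build the httr2 request without sending it, e.g. for inspection
req <- db_jobs_run_now(
  job_id = 123,
  notebook_params = list(date = "2023-01-01"),
  perform_request = FALSE
)

Setting perform_request = FALSE is useful for debugging or for composing requests to perform later, since the returned httr2 request object can be examined before anything is sent to the workspace.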