Run a job and return the run_id of the triggered run.

Usage

run_job_now(
  client,
  job_id,
  dbt_commands = NULL,
  idempotency_token = NULL,
  jar_params = NULL,
  job_parameters = NULL,
  notebook_params = NULL,
  pipeline_params = NULL,
  python_named_params = NULL,
  python_params = NULL,
  queue = NULL,
  spark_submit_params = NULL,
  sql_params = NULL
)

jobsRunNow(
  client,
  job_id,
  dbt_commands = NULL,
  idempotency_token = NULL,
  jar_params = NULL,
  job_parameters = NULL,
  notebook_params = NULL,
  pipeline_params = NULL,
  python_named_params = NULL,
  python_params = NULL,
  queue = NULL,
  spark_submit_params = NULL,
  sql_params = NULL
)
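
A minimal sketch of triggering a run, assuming the SDK is attached via library(databricks), credentials are picked up from the environment (e.g. DATABRICKS_HOST and DATABRICKS_TOKEN), and the job ID is hypothetical:

library(databricks)

client <- DatabricksClient()

# Trigger the job; the response is assumed to be a list carrying
# the run_id of the new run, per the description above.
resp <- run_job_now(client, job_id = 123456)
resp$run_id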

Arguments

client

Required. An instance of DatabricksClient().

job_id

Required. The ID of the job to be executed.

dbt_commands

An array of commands to execute for jobs with the dbt task, for example 'dbt_commands': ['dbt deps', 'dbt seed', 'dbt run'].
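
In R, an array argument like this maps to an unnamed list of strings; the same style applies to jar_params, python_params, and spark_submit_params. A sketch with a hypothetical job ID:

run_job_now(
  client,
  job_id = 123456,  # hypothetical
  dbt_commands = list("dbt deps", "dbt seed", "dbt run")
)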

idempotency_token

An optional token to guarantee the idempotency of job run requests. If a run with the provided token already exists, the request does not create a new run but returns the ID of the existing run instead.
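
A sketch of a retry-safe trigger; generating the token with the uuid package is an assumption, any unique string works:

token <- uuid::UUIDgenerate()

# A retry that reuses the same token will not launch a duplicate run.
resp <- run_job_now(client, job_id = 123456, idempotency_token = token)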

jar_params

A list of parameters for jobs with Spark JAR tasks, for example 'jar_params': ['john doe', '35'].

job_parameters

Job-level parameters used in the run.

notebook_params

A map from keys to values for jobs with a notebook task, for example 'notebook_params': {'name': 'john doe', 'age': '35'}.
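
Map arguments like this are written as named lists in R; the same style applies to python_named_params and sql_params. A sketch with a hypothetical job ID:

run_job_now(
  client,
  job_id = 123456,
  notebook_params = list(name = "john doe", age = "35")
)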

pipeline_params

Parameters for jobs with a pipeline task; only 'full_refresh' is supported, for example 'pipeline_params': {'full_refresh': true}.

python_named_params

A map from keys to values for jobs with a Python wheel task, for example 'python_named_params': {'name': 'task', 'data': 'dbfs:/path/to/data.json'}.

python_params

A list of parameters for jobs with Python tasks, for example 'python_params': ['john doe', '35'].

queue

The queue settings of the run, for example 'queue': {'enabled': true}.

spark_submit_params

A list of parameters for jobs with a Spark submit task, for example 'spark_submit_params': ['--class', 'org.apache.spark.examples.SparkPi'].

sql_params

A map from keys to values for jobs with a SQL task, for example 'sql_params': {'name': 'john doe', 'age': '35'}.
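
To close, a sketch combining a map-style parameter with the queue setting documented above (job ID hypothetical):

run_job_now(
  client,
  job_id = 123456,
  sql_params = list(name = "john doe", age = "35"),
  queue = list(enabled = FALSE)  # skip queueing; launch immediately
)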