Creates a new data processing pipeline based on the requested configuration. If successful, this method returns the ID of the new pipeline.

Usage

create_pipeline(
  client,
  allow_duplicate_names = NULL,
  catalog = NULL,
  channel = NULL,
  clusters = NULL,
  configuration = NULL,
  continuous = NULL,
  development = NULL,
  dry_run = NULL,
  edition = NULL,
  filters = NULL,
  id = NULL,
  libraries = NULL,
  name = NULL,
  notifications = NULL,
  photon = NULL,
  serverless = NULL,
  storage = NULL,
  target = NULL,
  trigger = NULL
)

pipelinesCreate(
  client,
  allow_duplicate_names = NULL,
  catalog = NULL,
  channel = NULL,
  clusters = NULL,
  configuration = NULL,
  continuous = NULL,
  development = NULL,
  dry_run = NULL,
  edition = NULL,
  filters = NULL,
  id = NULL,
  libraries = NULL,
  name = NULL,
  notifications = NULL,
  photon = NULL,
  serverless = NULL,
  storage = NULL,
  target = NULL,
  trigger = NULL
)
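
For example, a minimal call might look like the sketch below. The pipeline name, storage root, and notebook path are illustrative assumptions, not values documented on this page, and the response is assumed to carry the new pipeline's ID under pipeline_id, mirroring the REST API response.

library(databricks)

# Assumes workspace credentials are already configured
# (e.g. via a .databrickscfg profile or the
# DATABRICKS_HOST / DATABRICKS_TOKEN environment variables).
client <- DatabricksClient()

resp <- create_pipeline(
  client,
  name = "example-pipeline",                    # hypothetical name
  storage = "dbfs:/pipelines/example-pipeline", # hypothetical DBFS root
  libraries = list(
    # hypothetical notebook defining the pipeline's datasets
    list(notebook = list(path = "/Repos/team/dlt/example"))
  ),
  development = TRUE
)

# ID of the newly created pipeline (assumed response field).
resp$pipeline_id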

Arguments

client

Required. Instance of DatabricksClient()

allow_duplicate_names

If false, deployment will fail if the name conflicts with that of another pipeline.

catalog

A catalog in Unity Catalog to publish data from this pipeline to.

channel

DLT release channel that specifies which version of the DLT runtime to use.

clusters

Cluster settings for this pipeline deployment.

configuration

String-to-string map of configuration settings for this pipeline execution.

continuous

Whether the pipeline is continuous or triggered.

development

Whether the pipeline is in Development mode.

dry_run

This field has no description yet.

edition

Pipeline product edition.

filters

Filters on which pipeline packages to include in the deployed graph.

id

Unique identifier for this pipeline.

libraries

Libraries or code needed by this deployment.

name

Friendly identifier for this pipeline.

notifications

List of notification settings for this pipeline.

photon

Whether Photon is enabled for this pipeline.

serverless

Whether serverless compute is enabled for this pipeline.

storage

DBFS root directory for storing checkpoints and tables.

target

Target schema (database) to which tables defined in this pipeline are added.

trigger

Which pipeline trigger to use.
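
Examples

The fuller sketch below exercises more of the arguments. Every name, path, recipient, and cluster size is a hypothetical assumption, and the nested-list shapes mirror the underlying REST payload rather than anything documented on this page.

client <- DatabricksClient()

create_pipeline(
  client,
  name = "sales-ingest",        # hypothetical pipeline name
  catalog = "main",             # hypothetical Unity Catalog catalog
  target = "sales",             # hypothetical target schema
  continuous = FALSE,           # run on triggers rather than continuously
  photon = TRUE,
  clusters = list(
    # assumed cluster-spec shape: one fixed-size default cluster
    list(label = "default", num_workers = 2)
  ),
  configuration = list(
    # string-to-string settings read by the pipeline code (hypothetical key)
    source_path = "/mnt/raw/sales"
  ),
  libraries = list(
    list(notebook = list(path = "/Repos/team/dlt/sales")) # hypothetical
  ),
  notifications = list(
    list(
      email_recipients = list("data-team@example.com"),   # hypothetical
      alerts = list("on-update-failure")                  # assumed alert name
    )
  )
)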