Create Databricks SQL Connector Client

Usage

db_sql_client(
  id,
  catalog = NULL,
  schema = NULL,
  compute_type = c("warehouse", "cluster"),
  use_cloud_fetch = FALSE,
  session_configuration = list(),
  host = db_host(),
  token = db_token(),
  workspace_id = db_current_workspace_id(),
  ...
)

Arguments

id

String, ID of either the SQL warehouse or all-purpose cluster. compute_type must be set to the type that matches id.

catalog

Initial catalog to use for the connection. Defaults to NULL in which case the default catalog will be used.

schema

Initial schema to use for the connection. Defaults to NULL, in which case the default schema will be used.

compute_type

One of "warehouse" (default) or "cluster", corresponding to the compute type of the resource specified in id (see the examples below).

use_cloud_fetch

Boolean (default is FALSE). When TRUE, fetch requests are sent directly to the cloud object store to download chunks of data; when FALSE, fetch requests are sent to Databricks.

If use_cloud_fetch is set to TRUE but network access to the object store is blocked, the fetch requests will fail.

session_configuration

An optional named list of Spark session configuration parameters. Setting a configuration is equivalent to using the SET key=val SQL command. Run the SQL command SET -v to get a full list of available configurations (see the examples below).

host

Databricks workspace URL, defaults to calling db_host().

token

Databricks workspace token, defaults to calling db_token().

workspace_id

String, workspace ID used to build the HTTP path for the connection. Defaults to db_current_workspace_id(), which reads the DATABRICKS_WSID environment variable. Not required if compute_type is "cluster".

...

Passed on to DatabricksSqlClient().

Details

Creates a client using the Databricks SQL Connector.

Examples

if (FALSE) { # \dontrun{
  client <- db_sql_client(id = "<warehouse_id>", use_cloud_fetch = TRUE)
} # }
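
Connecting to an all-purpose cluster rather than a SQL warehouse requires setting compute_type to match the resource id refers to. A minimal sketch, with "<cluster_id>" as a placeholder and "main"/"default" as assumed catalog and schema names:

if (FALSE) { # \dontrun{
  # compute_type must match the resource that id refers to
  client <- db_sql_client(
    id = "<cluster_id>",
    compute_type = "cluster",
    catalog = "main",
    schema = "default"
  )
} # }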
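
Session configuration parameters can be supplied as a named list; each entry behaves like a SET key=val SQL command. A sketch using STATEMENT_TIMEOUT as one illustrative parameter (run SET -v for the full list of available configurations):

if (FALSE) { # \dontrun{
  # equivalent to running: SET STATEMENT_TIMEOUT=3600
  client <- db_sql_client(
    id = "<warehouse_id>",
    session_configuration = list(STATEMENT_TIMEOUT = "3600")
  )
} # }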