Installation

Pre-requisites

  1. Install the Databricks CLI - Ensure that the Databricks command-line interface (CLI) is installed on your machine. Installation instructions for Linux, macOS, and Windows are available here.

[Screenshot: installing the Databricks CLI on different operating systems]
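On macOS, for example, one common approach is to install the CLI with Homebrew (a minimal sketch using Databricks' Homebrew tap; see the linked instructions for other platforms):

brew tap databricks/tap
brew install databricks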
  2. Configure the Databricks CLI - Details can be found here. Additionally, Lakebridge requires the profile used for the Databricks CLI to specify a cluster_id. To set one, you can either:
  • Edit your ~/.databrickscfg file directly and enter a cluster_id for the profile you're using (see the sample profile after this list), or
  • Pass the --configure-cluster flag, which prompts you to select a cluster_id from the clusters available in the workspace of the selected profile:
databricks configure --host <host> --configure-cluster --profile <profile_name>
  • Alternatively, set the environment variable DATABRICKS_CLUSTER_ID to the cluster id you want the profile to use before running the databricks configure command.
export DATABRICKS_CLUSTER_ID=<cluster_id>
databricks configure --host <host> --profile <profile_name>
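With the cluster id set directly, the relevant entry in ~/.databrickscfg would look something like this (the profile name, host, and cluster id are placeholders):

[<profile_name>]
host = <your-workspace-url>
cluster_id = <cluster_id>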
  3. Python - Verify that Python 3.10 or above is installed.
  • Windows - Install Python from here. Your Windows machine will also need a shell environment (Git Bash or WSL).
  • macOS/Unix - Use brew to install Python on macOS/Unix machines.

[Screenshot: checking the Python version on Windows, macOS, and Unix]
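From any shell, a quick way to confirm the version (on some systems the interpreter is named python rather than python3):

python3 --version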
  4. Java - Verify that Java 11 or above is installed; this is required for the Morpheus transpiler.
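You can confirm the installed Java version with the following command (note that java prints its version information to stderr):

java -version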

[back to top]


Install Lakebridge

Upon completing the environment setup, install Lakebridge by executing the following command:

databricks labs install lakebridge

This will install Lakebridge using the workspace details set in the DEFAULT profile. If you want to install it using a different profile, you can specify the profile name using the --profile flag.

databricks labs install lakebridge --profile <profile_name>

To view all available profiles, run the following command:

databricks auth profiles

Verify Installation

Verify the installation by running the command below; a successful installation produces output matching the example:

Command:

databricks labs lakebridge --help

Should output:

Code Transpiler and Data Reconciliation tool for Accelerating Data onboarding to Databricks from EDW, CDW and other ETL sources.

Usage:
databricks labs lakebridge [command]

Available Commands:
aggregates-reconcile Reconcile source and target data residing on Databricks using aggregated metrics
analyze Analyze existing non-Databricks database or ETL sources
configure-database-profiler Configure database profiler
configure-reconcile Configure 'reconcile' dependencies
describe-transpile Describe installed transpilers
install-transpile Install & optionally configure 'transpile' dependencies
reconcile Reconcile source and target data residing on Databricks
transpile Transpile SQL/ETL sources to Databricks-compatible code

Flags:
-h, --help help for lakebridge

Global Flags:
--debug enable debug logging
-o, --output type output type: text or json (default text)
-p, --profile string ~/.databrickscfg profile
-t, --target string bundle target to use (if applicable)

Use "databricks labs lakebridge [command] --help" for more information about a command.

[back to top]


Install Transpile

Upon completing the environment setup, you can install the out-of-the-box transpilers by executing the following command. The command also prompts for the required configuration elements, so you don't need to include them on the command line every time.

databricks labs lakebridge install-transpile
Tip:

Override the default [Bladebridge] config:

You have the option to override the default config file that Lakebridge uses when converting source code from dialects such as DataStage, Synapse, and Oracle. During installation you may supply your own custom config file, and Lakebridge will replace the default config with the one you provide. This override can only be set up during installation.

During installation, specify the config file that overrides the default [Bladebridge] config:

Specify the config file to override the default[Bladebridge] config - press <enter> for none (default: <none>): <local_full_path>/custom_<source>2databricks.json

Verify Installation

Verify the installation by running the command below; a successful installation produces output matching the example:

Command:

databricks labs lakebridge transpile --help

Should output:

Transpile SQL/ETL sources to Databricks-compatible code

Usage:
databricks labs lakebridge transpile [flags]

Flags:
--catalog-name name (Optional) Catalog name, only used when validating converted code
--error-file-path path (Optional) Local path where a log of conversion errors (if any) will be written
-h, --help help for transpile
--input-source path (Optional) Local path of the sources to be converted
--output-folder path (Optional) Local path where converted code will be written
--overrides-file path (Optional) Local path of a file containing transpiler overrides, if supported by the transpiler in use
--schema-name name (Optional) Schema name, only used when validating converted code
--skip-validation string (Optional) Whether to skip validating the output ('true') after conversion or not ('false')
--source-dialect string (Optional) The source dialect to use when performing conversion
--target-technology string (Optional) Target technology to use for code generation, if supported by the transpiler in use
--transpiler-config-path path (Optional) Local path to the configuration file of the transpiler to use for conversion

Global Flags:
--debug enable debug logging
-o, --output type output type: text or json (default text)
-p, --profile string ~/.databrickscfg profile
-t, --target string bundle target to use (if applicable)
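
With the install-time configuration in place, a typical conversion run might look like this (a hypothetical invocation; the paths and dialect are placeholders, and omitted flags fall back to the configuration stored during installation):

databricks labs lakebridge transpile \
  --input-source <local_path_to_sources> \
  --output-folder <local_output_path> \
  --source-dialect <source_dialect>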

[back to top]


Configure Reconcile

Once you're ready to reconcile your data, you need to configure the reconcile module.

databricks labs lakebridge configure-reconcile

SQL Warehouse for Reconcile

While configuring the reconcile properties, Lakebridge creates a SQL warehouse by default. Lakebridge authenticates to Databricks resources using the user's profile, so if the user running this command lacks permission to create a SQL warehouse, configure-reconcile will fail. In that case, you can provide the warehouse_id of an existing SQL warehouse, on which you have at least CAN_USE permission, in the Databricks profile (~/.databrickscfg) used to run the Lakebridge commands; Lakebridge will then use that warehouse to complete the reconcile configuration instead of trying to create a new one.

This is how the profile would look:

[profile-name]
host = <your-workspace-url>
...
warehouse_id = <your-warehouse-id>
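If you need to look up the id of an existing warehouse, the Databricks CLI can list the SQL warehouses available to your profile (the id field in the output is the value to use for warehouse_id):

databricks warehouses list --profile <profile_name>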

Verify Configuration

Verify the configuration by running the command below; a successful configuration produces output matching the example:

Command:

databricks labs lakebridge reconcile --help

Should output:

Reconcile source and target data residing on Databricks

Usage:
databricks labs lakebridge reconcile [flags]

Flags:
-h, --help help for reconcile

Global Flags:
--debug enable debug logging
-o, --output type output type: text or json (default text)
-p, --profile string ~/.databrickscfg profile
-t, --target string bundle target to use (if applicable)

[back to top]