General

Q. What is DLT-META?

DLT-META is a solution/framework built on Databricks Delta Live Tables (DLT) that helps you automate bronze and silver layer pipelines using CI/CD.

Q. What are the benefits of using DLT-META?

  • With DLT-META, customers only need to maintain metadata such as onboarding.json, data quality rules, and silver transformations; the framework takes care of execution (see the illustrative snippet after this list).
  • When input/output, data quality rules, or silver transformation logic change, only the metadata changes through the onboarding interface; there is no need to re-deploy pipelines.
  • If you have hundreds or thousands of tables, DLT-META speeds up overall turnaround time to production because customers only need to produce metadata.
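For illustration only, a single onboarding-style entry might look like the sketch below. The field names and values here are hypothetical and meant to convey the idea of metadata-driven pipelines; refer to the onboarding documentation for the exact onboarding.json schema.

```json
[
  {
    "data_flow_id": "100",
    "source_format": "cloudFiles",
    "source_details": {"path": "s3://my-bucket/raw/customers/"},
    "bronze_table": "customers_bronze",
    "bronze_data_quality_expectations": {"valid_id": "id IS NOT NULL"},
    "silver_table": "customers_silver",
    "silver_transformation_sql": "SELECT id, upper(name) AS name FROM customers_bronze"
  }
]
```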

Q. What types of readers does DLT-META support?

DLT-META uses Databricks Auto Loader, DELTA, KAFKA, and EVENTHUB readers to ingest data from sources such as S3/ADLS/Blob storage.
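As a minimal sketch of the kind of Auto Loader read that DLT-META drives from metadata (the table name, path, and file format below are illustrative, not DLT-META's generated code):

```python
import dlt  # available inside a Delta Live Tables pipeline

@dlt.table(name="customers_bronze")  # hypothetical bronze table name
def customers_bronze():
    return (
        spark.readStream.format("cloudFiles")      # Databricks Auto Loader
        .option("cloudFiles.format", "json")       # source file format
        .load("s3://my-bucket/raw/customers/")     # hypothetical landing path
    )
```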

Q. Can DLT-META support any other readers?

DLT-META can support any Spark Structured Streaming reader. You can override the read_bronze() API inside DataflowPipeline.py to plug in a custom reader, as sketched below.
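A minimal sketch, assuming DataflowPipeline exposes a read_bronze() method and a Spark session handle; the import path, attribute names, and the rate source used here are illustrative and may differ in your version.

```python
from pyspark.sql import DataFrame

from src.dataflow_pipeline import DataflowPipeline  # hypothetical import path


class CustomReaderDataflowPipeline(DataflowPipeline):
    """Example of plugging in a custom Spark streaming reader."""

    def read_bronze(self) -> DataFrame:
        # Return any Spark streaming DataFrame; the built-in "rate" source
        # is used here only as a stand-in for a custom connector.
        # self.spark is assumed to be the pipeline's SparkSession.
        return (
            self.spark.readStream.format("rate")
            .option("rowsPerSecond", 1)
            .load()
        )
```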

Q. Who should use this framework?

Data engineering teams that want to automate data migrations at scale using CI/CD.