Serving and consumption
Databricks provides multiple patterns for serving data to downstream consumers, from BI dashboards to ML inference to real-time streaming. This page covers SQL warehouses, model serving, data sharing, and streaming output patterns.
SQL warehouses for BI and analytics
Use Databricks SQL Warehouses as the dedicated serving layer for BI and analytics workloads.
- Serverless SQL Warehouses (recommended) - Instant, on-demand compute with automatic scaling and no infrastructure to manage
- Classic SQL Warehouses - For workloads that require specific configurations or dedicated compute resources
Connect via JDBC/ODBC drivers, the SQL Statement Execution API, or Databricks Connect.
Documentation: SQL Warehouses | Integration Patterns
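As a sketch of the SQL Statement Execution API path, a client POSTs a JSON body to the workspace's /api/2.0/sql/statements endpoint. The warehouse ID and query below are hypothetical, for illustration only:

```python
import json

def build_statement_request(statement: str, warehouse_id: str,
                            wait_timeout: str = "30s") -> dict:
    """Build the JSON body for POST /api/2.0/sql/statements.

    warehouse_id identifies the SQL warehouse that runs the query;
    wait_timeout controls how long the call blocks waiting for a result.
    """
    return {
        "statement": statement,
        "warehouse_id": warehouse_id,
        "wait_timeout": wait_timeout,
    }

# Hypothetical warehouse ID; a real one comes from the warehouse's
# connection details page.
body = build_statement_request(
    "SELECT * FROM samples.nyctaxi.trips LIMIT 10", "abc123")
print(json.dumps(body))
```

A real call would send this body with any HTTP client to the workspace URL, authenticated with a personal access token or OAuth.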
Model serving
For ML model inference, use Mosaic AI Model Serving:
- Deploy models registered in Unity Catalog
- Use Foundation Model APIs for LLM inference
- Create custom model endpoints for specialized workloads
Documentation: Model Serving
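Serving endpoints accept JSON scoring requests; one common shape is the dataframe_records format. A minimal sketch, with hypothetical feature names:

```python
def build_scoring_payload(records: list) -> dict:
    """Wrap feature rows in the dataframe_records format accepted by
    Mosaic AI Model Serving endpoints (other accepted formats include
    dataframe_split and inputs)."""
    return {"dataframe_records": records}

payload = build_scoring_payload([{"feature_a": 1.0, "feature_b": 2.0}])
# A real request would POST this payload to the endpoint's invocations
# URL: https://<workspace-url>/serving-endpoints/<endpoint-name>/invocations
```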
Data sharing
Delta Sharing
Use Delta Sharing to securely share data with external consumers:
- Databricks-to-Databricks sharing for full feature support
- Open sharing protocol for non-Databricks consumers
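With the open sharing protocol, a recipient addresses a shared table by combining their credential profile file with the share, schema, and table names. A sketch of building that coordinate (profile path and names here are hypothetical):

```python
def table_url(profile_path: str, share: str, schema: str, table: str) -> str:
    """Build the coordinate the open-source delta-sharing client uses to
    address a shared table: "<profile-file>#<share>.<schema>.<table>"."""
    return f"{profile_path}#{share}.{schema}.{table}"

url = table_url("config.share", "my_share", "default", "trips")
# With the delta-sharing package installed, a recipient could then call:
#   delta_sharing.load_as_pandas(url)
```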
Lakehouse Federation
Use Lakehouse Federation to query data in external systems without copying.
Documentation: Delta Sharing | Lakehouse Federation
Streaming output
For real-time data delivery, use Structured Streaming or Lakeflow SDP sinks to write to external destinations.
Structured Streaming - Near real-time processing, with exactly-once guarantees for supported sinks such as Delta Lake. Common sinks:
- Delta Lake tables
- Message buses and queues (Kafka, Event Hubs)
- Key-value databases
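For the message-bus case, Structured Streaming's Kafka sink expects the DataFrame to carry a value column (and optionally key), plus a few required options. A sketch of assembling those options, with hypothetical broker, topic, and checkpoint path:

```python
def kafka_sink_options(bootstrap_servers: str, topic: str,
                       checkpoint_path: str) -> dict:
    """Options for df.writeStream.format("kafka"): broker addresses, the
    target topic, and a checkpoint location for failure recovery."""
    return {
        "kafka.bootstrap.servers": bootstrap_servers,
        "topic": topic,
        "checkpointLocation": checkpoint_path,
    }

opts = kafka_sink_options("broker1:9092", "events", "/tmp/checkpoints/events")
# In a Spark session this would be used as:
#   df.writeStream.format("kafka").options(**opts).start()
```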
SDP sinks - Declarative output from Lakeflow pipelines:
- Built-in SDP sinks for Unity Catalog tables, Kafka, and Event Hubs
- Python custom sinks for arbitrary data stores
- foreachBatch for writing to multiple targets or applying custom transformations
Documentation: Structured Streaming | SDP Sinks
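The foreachBatch pattern above can be sketched as a handler function: inside it, each micro-batch arrives as a plain (non-streaming) DataFrame, so ordinary batch writers can fan it out to several targets. The table names here are hypothetical:

```python
def split_batch(batch_df, batch_id: int) -> None:
    """foreachBatch handler: write the same micro-batch to two targets.
    batch_df behaves as a regular batch DataFrame inside the handler."""
    batch_df.write.mode("append").saveAsTable("main.gold.events")
    batch_df.write.mode("append").saveAsTable("main.audit.events_log")

# Attached to a stream as:
#   df.writeStream.foreachBatch(split_batch).start()
```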
What's next
- Learn about AI capabilities for model serving
- Review data ingestion patterns
- Explore OLTP with Lakebase for transactional workloads