Databricks and dbt have recently joined forces. In this blog, we provide an overview of each tool, explain what synergies the partnership creates and what it means for your data engineering pipeline, and show how you can get started using the tools together today.

Is collaboration across your data teams difficult? Is data drift an issue for your organization? Are you looking for a solution that will allow your entire data team to work in a single platform—whether it be to transform your data or to build, test, and deploy code?

The Databricks and dbt partnership provides the solution you need to bring your analytics team and their tooling to the same platform trusted by their data science and data engineering counterparts. Each tool is unique on its own—but together, they help to simplify your modern data stack.

Databricks: A Cloud-Based Data Engineering and Machine Learning Platform

If you haven’t heard of Databricks by now, you are missing out. Databricks is an end-to-end, unified data platform that stores, transforms, and even displays data. It implements a data lakehouse architecture, which bridges the gap between a data lake (think large amounts of data in any format—structured or unstructured) and a data warehouse (highly governed, curated data optimized for reporting).

Databricks is unique in that it allows for the entire data lifecycle to exist within one ecosystem, which differs from competitors that often require multiple platforms and tools when building out a data pipeline.

What Are Databricks’ Strengths?

Databricks is one of the most widely used advanced analytics platforms in the world. Not only does it bridge the gap between a data warehouse and a data lake, but Databricks is also:

  • An all-in-one platform: There are no requirements for separate environments and tools for the initial storage of raw data, data engineering, storage of the final curated layer, and advanced machine learning. Databricks does it all.
  • Cost-effective: Databricks is built around a data lake centric design. Data lake storage is cheap, and having fewer environments to hand data between brings simplicity and cost savings.
  • Fast: Under the covers, Databricks runs on top of an Apache Spark engine. Apache Spark is an open-source engine focused on parallelism (splitting jobs into lots of small pieces and running them at the same time).
  • Cloud-agnostic: Databricks is easy to set up and works in any of the big three cloud providers: AWS, GCP, and Azure.
  • Open source: Since Apache Spark is open-source, Databricks benefits from the power of the community. It also means you won’t be stuck writing code in some proprietary language that would limit your ability to integrate with other tools in the future.

What Are Databricks’ Limitations?

Despite its many strengths, data transformation is one area where clients have struggled when using Databricks. The platform provides myriad ways to transform data, which is great in terms of flexibility, but it means development teams need a clear, defined framework in place, which can take precious time and resources to implement properly.

dbt (Data Build Tool): A Transformation Framework for Your Data Pipelines

In the relatively new era of the modern cloud data warehouse, dbt was one of the first tools in the market to micro-focus on transforming data. Thus, dbt rose in popularity as tools like Redshift, BigQuery, and Snowflake accelerated as market leaders for data management in the cloud. dbt gives teams a structured approach to developing like engineers: leveraging git (source and version control) and writing SQL for transformations on top of a data warehouse.

A significant reason dbt has gained so much popularity recently is that it embraces data democratization, writing transformations in SQL, and the value of the open-source project. Every cloud data warehouse uses SQL, which data analysts and data engineers mutually understand. This opened possibilities for teams to work in tandem, while allowing a full community to help drive product development and innovation.
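To give a flavor of what this looks like in practice, below is a minimal sketch of a dbt model: a plain SQL SELECT statement in which dbt's ref() function resolves the dependency on an upstream model. The model and column names here are illustrative, not from any real project.

    -- models/fct_daily_orders.sql (illustrative names)
    -- dbt compiles ref() to the fully qualified object name and
    -- builds the dependency graph between models from it.
    select
        order_date,
        count(*)    as order_count,
        sum(amount) as total_revenue
    from {{ ref('stg_orders') }}
    group by order_date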

What Are dbt’s Strengths?

The dbt framework makes it simpler to develop, test, document, and deploy your transformations in the cloud, while applying proven engineering best practices and workflows. Data teams using dbt find unmatched collaboration, faster time to value, and a cohesive understanding of the data warehouse throughout an organization. Some specific advantages of dbt Cloud to highlight are its version control integration, centralized documentation hub, jobs, and testing capabilities.

Git Integration

dbt Cloud gives anyone without version control experience a guided process to create branches, commit development work, and open pull requests. This is critical for many organizations looking to enforce version control best practices.


dbt Cloud’s guided Git interface provides structure to commit changes, pull down changes, open pull requests, and make new branches.
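For readers more comfortable at the command line, the guided workflow maps onto the usual git operations. A rough sketch, with an illustrative branch name:

    # Create a feature branch, commit development work, and push it for review
    git checkout -b feature/daily-orders-model
    git add models/
    git commit -m "Add daily orders model and tests"
    git push origin feature/daily-orders-model
    # The pull request itself is then opened in your git provider
    # (GitHub, GitLab, etc.) or through dbt Cloud's guided interface.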

Centralized Documentation

dbt offers a centralized documentation hub for definitions and metadata on all your fields within tables/views. By simply writing down definitions in yaml files, you can have an entire data warehouse documented in one spot. Additionally, dbt automatically tracks lineage for models, which is extremely beneficial for understanding the flow of your environment.
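A minimal sketch of such a yaml file might look like the following; the model and column names are illustrative:

    # models/schema.yml (illustrative names)
    version: 2

    models:
      - name: fct_daily_orders
        description: "One row per day, aggregated from raw orders."
        columns:
          - name: order_date
            description: "Calendar date the orders were placed."
          - name: total_revenue
            description: "Sum of order amounts for the day."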


dbt’s documentation hub for an example table including all columns, data types, definitions, and tests applied.


Example of dbt’s lineage for models in a tiered architecture environment.

Jobs

dbt Cloud makes it simple to orchestrate your environment. dbt has excellent CI/CD and production scheduling abilities for your pipelines. This has saved teams numerous hours of development and maintenance work on their pipelines.
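Under the hood, a dbt Cloud job is essentially a scheduled sequence of dbt commands. A typical production job might run something like this sketch:

    # Illustrative command sequence for a scheduled dbt Cloud job
    dbt deps    # install package dependencies
    dbt seed    # load seed files, if any
    dbt run     # build the models
    dbt test    # run schema and data tests against the results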


An example of a job orchestrating your dbt models; dbt Cloud keeps logs for the commands you specify in a given job.

Testing

dbt comes out of the box with advanced testing capabilities. There are two primary types of tests in dbt: schema tests and data tests. Schema tests are often used to catch common errors around nulls, grain, referential integrity, and the like; you can use dbt’s predefined tests or even write your own. Data tests, on the other hand, apply to a single table and can be beneficial for tracking specific use cases. This functionality has proven essential for organizations to uncover problems early.
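As a sketch, applying a few of dbt's predefined schema tests takes only a handful of yaml lines; the model, column, and relationship names below are illustrative:

    # models/schema.yml (illustrative names)
    version: 2

    models:
      - name: stg_orders
        columns:
          - name: order_id
            tests:
              - not_null
              - unique               # enforces the table's grain
          - name: customer_id
            tests:
              - relationships:       # referential integrity check
                  to: ref('dim_customers')
                  field: customer_id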


An example yaml file showing how simple it is to apply tests using dbt.

These are just a few of dbt’s many impressive features, which together make transforming your data a simple process.

What Are dbt’s Limitations?

dbt has typically operated on top of cloud data warehouses such as Redshift, BigQuery, and Snowflake. Those platforms are typically used to solve business intelligence (BI) problems, which means data engineers and data analysts have collaborated there, while other teams have separately attacked problems better suited to a data lake. Thus, data science teams, for example, have often been siloed from data analytics teams in many organizations.

Recently, Databricks has impressively accelerated its SQL capabilities and evolved the data lakehouse concept, offering a unique opportunity to further unite data teams at an entirely new level.

Only a few years ago, it was difficult to imagine a world where an entire data team could use a single platform to query both their data warehouse and data lake. dbt and Databricks have made this an achievable reality.

Databricks and dbt Partnership: Welcoming Analytics Engineers to the Data Lakehouse

Databricks and dbt have partnered together to simplify the data lakehouse. Although Databricks is a fantastic platform for data teams to get the most out of their data, it can be cumbersome to use without a defined framework for building, testing, and deploying code. The solution here is to use dbt on top of Databricks.

From a high-level architectural perspective, dbt works on top of Databricks once the raw data has already been loaded into the Delta Lake. From there, developers write their code within a dbt environment; however, the code is pushed down and executed using the compute from Databricks and Apache Spark. It’s important to note that the data is not stored or processed by dbt; everything is contained within the Databricks environment.
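As a sketch, pointing dbt at Databricks comes down to a small connection profile for the dbt-databricks adapter. All connection values below are placeholders for your own workspace details:

    # profiles.yml (all connection values are placeholders)
    my_project:
      target: dev
      outputs:
        dev:
          type: databricks
          schema: analytics
          host: <your-workspace>.cloud.databricks.com
          http_path: /sql/1.0/warehouses/<warehouse-id>
          token: "{{ env_var('DATABRICKS_TOKEN') }}"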


dbt transforms your raw data, while applying testing, handling orchestration, and more on top of the data lakehouse. The cleaned data then can be used for both data science and business intelligence solutions. Photo: Databricks



The Combined Strengths of a Databricks and dbt Partnership

The new partnership between Databricks and dbt is an exciting prospect for companies that are currently using Databricks without any sort of development framework, and even for companies that may be using dbt with another platform but are considering a switch to Databricks. Here are examples of why:

  • Architecture is unified. The architecture within an organization can now be hosted on one centralized platform, allowing data teams to collaborate using a single source of truth and to integrate with each other more naturally. Data science teams that depend on data cleansing from analytics engineers will have minimal wait before they can access new development work, and consolidating also controls costs by reducing redundant infrastructure platforms.
  • Faster time to value. With this partnership, data teams can develop and release their code quickly (up to three times faster than with traditional methodologies) using SQL.
  • You get the best features of dbt Cloud to simplify your development process, including (but not limited to) a browser-based IDE, job scheduling and orchestration, logging and alerting, a simplified git workflow, and integrated documentation.
  • Hiring new team members will be simpler. Databricks traditionally has been viewed as a tool that requires an advanced technical skillset, with many users fluent in Scala, Python, and R. dbt provides a structured way for individuals who know only SQL to excel at writing transformations using Spark SQL.
  • You can maximize your innovation and flexibility through open source. dbt and Databricks are already fantastic products as they exist today. dbt and Apache Spark (on which Databricks is built) are also both open source with massive supporting communities. Since both projects are still growing rapidly, you will be able to take advantage of endless new features in the coming years. Additionally, you avoid vendor lock-in because you are free to use other platforms to ingest, transform, and store your data. If you decide to move to another platform years down the road, the fact that the codebase is written in SQL or Python makes it easier to migrate that code.

Talk to an expert about your dbt or Databricks needs.

Contact us today to get started

How to Get Started with Databricks and dbt to Enable Analytics Engineering

Now that we’ve convinced you a partnership between Databricks and dbt is a good thing, let’s talk about how you can dive in and get started.

Step 1: Determine if Databricks + dbt is the right fit for you.

Before jumping right in and screaming “take my money, Databricks and dbt!”, it’s first worth asking if it even makes sense for your business. While we think this is a great combination, you should consider a few questions before diving in headfirst:

  1. Do you already have an existing Databricks environment?
  2. Is your data engineering team skilled with, or does it prefer to use, SQL?
  3. Does your team struggle with rapid deployment, documentation, and/or quality control?
  4. Does your team already use dbt with another platform, and are you looking to migrate to a new one?

If you’ve answered yes to any of these questions, then adding dbt to your current stack could be a great investment! If, however, you find yourself in the following situations, then it could be worth reconsidering:

  • You already have an existing data warehouse that works well and is performant, and you don’t have the time or budget to migrate to a new platform.
  • Your data engineering team primarily uses Python (or any language other than SQL).

Step 2: Understand the ins and outs of both Databricks and dbt.

If you are interested in learning more about Databricks or dbt and getting hands-on experience, there are several resources available to you.

Talk With a Data Analytics Expert

Simon Collis

Simon is a Managing Consultant based out of Portland, Oregon. He is highly involved in our Data Management service line and has experience leading data engineering and data architecture projects for our clients. Outside of work, Simon enjoys playing tennis and taking advantage of all the hiking and outdoor activities the Pacific Northwest has to offer.