Different ways to diff data
[Guest blog by Kenny Ning, Freelance Analytics Engineer]
Data diffing is the process of comparing two datasets. It’s an important part of any data transformation pipeline and can help you identify the impact of your code changes on the shape and content of your datasets.
"Datasets" can be CSV files stored on your local computer, tables in a database, or even tables across many databases. In this post, we’ll walk through different approaches to diffing your datasets depending on what format they are in:
- Local files: git diff
- Tables in a database: dbt test, Datafold’s data diff
- Tables across databases: Datafold's open source data-diff
Yillow: a fictional real estate listings company
Let’s say you’re a data engineer at a real estate listings company called Yillow. The company ingests raw listings data from various Multiple Listing Services (MLSs) and unifies it into a standardized schema so that potential homebuyers can easily browse listings in a slick web UI.
It’s your first day, and you’re pointed to an internal Python library that normalizes addresses. Different MLSs have different ways they represent addresses, and this function takes in an MLS source and a raw address and transforms the address into a standard format:
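The original function isn’t reproduced in this excerpt; a minimal sketch might look like the following (the MLS source names, delimiters, and output format are all assumptions for illustration):

```python
def normalize_address(mls_source: str, raw_address: str) -> str:
    """Hypothetical sketch: split a raw address on the source's delimiter
    and reassemble it as 'street, city, state zip'."""
    if mls_source == "ca_mls":
        # California MLS delimits address fields with commas
        street, city, state, zip_code = [p.strip() for p in raw_address.split(",")]
    elif mls_source == "ny_mls":
        # New York MLS delimits address fields with pipes (made up for this sketch)
        street, city, state, zip_code = [p.strip() for p in raw_address.split("|")]
    else:
        raise ValueError(f"Unknown MLS source: {mls_source}")
    return f"{street}, {city}, {state} {zip_code}"
```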
You get word that the California MLS source is going to change their address delimiter to semicolons instead of commas. You adjust the ca_mls clause in the function to the following:
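Sketched as a standalone helper (the name and field layout are hypothetical), the updated clause simply splits on semicolons instead of commas:

```python
def normalize_ca_address(raw_address: str) -> str:
    # Hypothetical helper: California MLS now sends semicolon-delimited fields
    street, city, state, zip_code = [p.strip() for p in raw_address.split(";")]
    return f"{street}, {city}, {state} {zip_code}"
```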
A day after you merge your changes, someone notices that some California listings are not showing up in the app. You find that <span class="code">ca_addresses.csv</span> is piped to <span class="code">normalize_address</span>, which in turn writes to an intermediate dataset called <span class="code">cleaned_listings.csv</span>.
Local files: git diff
You take the following steps to debug:
- Check out main branch
- Run full pipeline
- Compare <span class="code">ca_addresses.csv</span> and <span class="code">cleaned_listings.csv</span>, which should contain an equal number of rows
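Concretely, the comparison step might look like this (the file contents below are made up to stand in for the real pipeline input and output):

```shell
# Stand-in files for the pipeline's input and output (contents are made up)
printf '123 Main St, Yountville, CA, 94599\n456 Oak Ave, Fresno, CA, 93650\n' > ca_addresses.csv
printf '123 Main St, Yountville, CA 94599\n' > cleaned_listings.csv

# Quick sanity check: do both files have the same number of rows?
wc -l ca_addresses.csv cleaned_listings.csv

# git diff --no-index compares any two files, even ones not tracked in a repo.
# It exits 1 when the files differ, hence the || true.
git diff --no-index ca_addresses.csv cleaned_listings.csv || true
```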
It looks like not all of the addresses in <span class="code">ca_addresses</span> were migrated to semicolon delimiters after all. In the example above, the first three addresses still have comma delimiters and were subsequently dropped by the pipeline. You submit a small patch and shoot the California MLS an email. In this example, a local data diffing solution like <span class="code">git diff</span> helped us identify a regression introduced by the code refactor.
You can specify the <span class="code">--stat</span> option to get a summary instead:
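With two stand-in files (contents made up for illustration):

```shell
printf 'a\nb\nc\n' > cleaned_listings_main.csv
printf 'a\nb\n' > cleaned_listings.csv

# --stat prints a per-file summary of insertions/deletions instead of the
# full line-by-line diff; || true absorbs the exit status of 1 on differences
git diff --no-index --stat cleaned_listings_main.csv cleaned_listings.csv || true
```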
If you can’t assume your data is sorted, then you’ll need to first sort your datasets like <span class="code">cat cleaned_listings.csv | sort -s</span> and then pass the output as arguments using process substitution <span class="code"><(commands)</span> syntax. Unfortunately <span class="code">git diff</span> doesn’t work with process substitution so you’ll have to use the more minimal <span class="code">diff</span> command like so:
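A sketch with made-up files; note that process substitution requires bash or zsh, not plain sh:

```shell
# Two files with overlapping rows in different orders (contents made up)
printf 'b\na\n' > listings_main.csv
printf 'a\nc\nb\n' > listings_branch.csv

# Sort each file on the fly and diff the results via process substitution.
# diff exits 1 when the inputs differ, hence the || true.
diff <(sort listings_main.csv) <(sort listings_branch.csv) || true
```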
The output is not as readable as <span class="code">git diff</span>, but you can see how deletions are marked with a <span class="code">d</span> between line numbers (e.g. <span class="code">3d2</span>) and changes with a <span class="code">c</span> (e.g. <span class="code">5c5</span>). Experiment with the myriad options for these tools directly from the command line by typing <span class="code">man git-diff</span> or <span class="code">man diff</span>. For example, try pairing vanilla <span class="code">diff</span> with the <span class="code">--suppress-common-lines</span> and <span class="code">--side-by-side</span> options.
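For instance, pairing those two options prints only the differing rows, two columns wide (file contents below are made up):

```shell
printf 'a\nb\nc\n' > old.csv
printf 'a\nB\n' > new.csv

# --side-by-side (-y) prints the files in two columns;
# --suppress-common-lines hides matching rows, leaving only the changes.
# diff exits 1 when the files differ, hence the || true.
diff --side-by-side --suppress-common-lines old.csv new.csv || true
```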
Right for you if:
- Your data is in a flat file format
- Your data fits in memory
You’ve outgrown if:
- Your data is stored in a database
- Your pipeline takes a long time to run, so the development workflow described above takes too long to iterate
Tables in a database: dbt test
Now let’s assume Yillow has made the migration from flat files to the ⭐ modern data stack ⭐. You’ve traded in your Python scripts that download from S3 for dbt jobs that run on BigQuery.
The California MLS source has decided that they don’t like semicolons after all and want to go back to using commas as a delimiter. You adjust the dbt job to the following:
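The adjusted model isn’t reproduced in this excerpt; a hypothetical BigQuery dbt model in that spirit (the model, ref, and column names are all assumptions) would split on commas again:

```sql
-- models/cleaned_listings.sql (hypothetical reconstruction; names assumed)
select
  listing_id,
  mls_source,
  -- California MLS has gone back to comma-delimited addresses:
  -- split on the delimiter, trim each part, and re-join consistently
  array_to_string(
    array(
      select trim(part)
      from unnest(split(raw_address, ',')) as part with offset
      order by offset
    ),
    ', '
  ) as normalized_address
from {{ ref('stg_ca_addresses') }}
```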
You make a pull request, but an error is thrown in your CI dbt Cloud run. It looks like a test called <span class="code">test_normalize_address.sql</span> failed; it contains the following code:
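The test itself isn’t reproduced in this excerpt; a dbt singular test in that spirit might look like the following sketch (table and column names are assumptions). It returns the source rows that are missing from the cleaned output, and any returned row counts as a failure:

```sql
-- tests/test_normalize_address.sql (hypothetical reconstruction)
-- A dbt singular test fails if this query returns any rows.
-- Here, a returned row is a source address missing from the cleaned table.
select a.listing_id
from {{ ref('stg_ca_addresses') }} as a
left join {{ ref('cleaned_listings') }} as c
  on a.listing_id = c.listing_id
where c.listing_id is null
```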
You can then run this test using dbt test:
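Assuming dbt is installed and you’re in the project root, the invocation is the standard dbt CLI, selecting the one test by name:

```shell
# Run only the failing test (standard dbt node selection syntax)
dbt test --select test_normalize_address
```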
As the dbt test output shows, tests fail if one or more rows are returned, so the fact that this test failed means there were some addresses that got lost in the <span class="code">normalize_address</span> step.
The great thing about these kinds of SQL tests is that they are run directly on your data warehouse, so they scale well with data sizes. They can be run during development to test your local changes, but can also be run in a CI environment to catch errors before they reach production.
Right for you if:
- Your data is stored in a relational database
- You want an easy / fast way to test if two datasets are the same
You’ve outgrown if:
- You want to compare differences across multiple databases
- You’re looking for richer data diff reports than just a binary “yes/no” result
One final note on diffing tables in a database: <span class="code">dbt test</span> is a very minimal, binary “yes/no” framework for data diffing. If you’re looking for richer data diff reporting, check out these other options:
- dbt-audit-helper package: helper macros that can generate summaries of differences
- Datafold’s data diff: rich, web-based, visual experience offering general table statistics as well as easy exploration of differences across schema, primary keys, and specific column values. This is the Rolls Royce of diff reports 🚗
Tables across databases: Datafold's data-diff tool
A few months pass, and you find out that <span class="code">normalize_address</span> isn’t the problem anymore. The scope of Yillow’s data team has expanded to much more than just managing the data warehouse. Now you’re responsible for the ingestion process as well; you have access to the raw California MLS data that sits in a MySQL instance and have written a pipeline to copy that data over into your BigQuery data warehouse.
You want to make sure that all the data from MySQL is making its way to BigQuery. Datafold has a slick UI for doing data diffs as well as a newly released open source version. Notably, this tool allows you to do data comparisons across databases, something that none of the above solutions can handle.
Assuming your data is making its way into a BigQuery project named <span class="code">yillow_ingest</span> and a dataset named <span class="code">mls</span>, you can then use <span class="code">data-diff</span> as follows:
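A sketch of the invocation (the connection strings, credentials, and table names are placeholders; check <span class="code">data-diff --help</span> for the exact syntax of your version):

```shell
# Compare the source MySQL table to its BigQuery copy
# (credentials and table names are placeholders)
data-diff \
  "mysql://user:password@mysql-host:3306/mls" ca_listings \
  "bigquery://yillow_ingest/mls" ca_listings \
  --key-column listing_id \
  --columns address
```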
The <span class="code">--key-column</span> argument specifies the name of the primary key column. The <span class="code">--columns</span> argument tells data-diff to also compare values in the <span class="code">address</span> column. All other options are documented on GitHub or by typing <span class="code">data-diff --help</span>. For example, you can get just the diff summary by adding the <span class="code">--stats</span> option:
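With placeholder connection strings and table names:

```shell
# Same cross-database comparison, but print only summary statistics
data-diff \
  "mysql://user:password@mysql-host:3306/mls" ca_listings \
  "bigquery://yillow_ingest/mls" ca_listings \
  --key-column listing_id \
  --stats
```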
Before Datafold, I’d jump between SQL consoles of the source and target database, running <span class="code">select count(*) from _table_</span> and praying they lined up. If they didn’t, I’d then try ordering by timestamp for both tables and start checking row by row. Or I'd pull down the latest events for each that can fit into memory on my computer and run <span class="code">git diff</span>. All this to say that having a diff tool that can compare datasets across databases is a huge time saver and a welcome addition to any data engineer’s development workflow.
The bottom line
Hopefully you’re inspired by now to incorporate data diffing into your workflow. Start with the basics at your disposal like <span class="code">git diff</span> and <span class="code">dbt test</span>, but once you’re ready to take data quality seriously and are looking for a high quality platform, look no further than Datafold.
Datafold is the fastest way to validate dbt model changes during development, deployment & migrations. Datafold allows data engineers to audit their work in minutes without writing tests or custom queries. Integrated into CI, Datafold enables data teams to deploy with full confidence, ship faster, and leave tedious QA and firefighting behind.