The role of a data warehouse (DWH) in a data-driven organization (i.e. one that actively uses data to power human and machine decisions) is to accommodate all analytical data, to enable its transformation, and to provide easy and reasonably fast access to the people and systems that need that data.
The DWH thus plays a central role in the data ecosystem. It is also often the most expensive piece of data infrastructure to replace, so it’s important to choose a solution that can work well for at least 7 years. Since analytics powers important business decisions, picking the wrong DWH is a sure way to create a costly bottleneck for your business.
Yet I see so many teams make decisions on such a critical piece of infrastructure by simply following their gut instinct, or because “AWS offers it already”, “Facebook uses it”, or “that’s what we used at X”.
In this post, I propose six dimensions for evaluating a data warehousing solution and offer my subjective opinion that BigQuery and Snowflake are the best options for a typical analytical use case and company.
Typically, the use cases for a DWH fall into three major categories:
Serve data to consumers:
Use cases NOT in scope:
Ultra-fast queries (90% of queries < 5 seconds), e.g. for interactive exploration. Such speed typically requires trading off other important aspects such as scalability, so it’s best to use specialized solutions such as QuestDB, ClickHouse, or Druid.
Specialized (e.g. geospatial or graph) transformations & ML jobs – there are excellent compute engines such as Spark that you can leverage for a specific type of computation.
As obvious as it may sound, it’s important to pick a DWH that can accommodate as much data and workload as you have now and will have in the future. Data volume typically grows much faster than the business itself.
Businesses that choose poorly scalable data warehouses, such as Redshift, pay an enormous productivity tax once their DWH cannot grow anymore: queries get backlogged, users are blocked, and the company is forced to migrate to a scalable DWH. At that point, it’s already too late: such migrations are slow (often taking years), painful, and almost never complete.
If you are in the middle of a migration, the following tools may help: CompilerWorks is a SQL transpiler that lets you instantly convert ETL code between database flavors, and Datafold offers a Data Diff feature to instantly compare the old vs. new versions of a table.
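Data Diff itself is a product, but the underlying idea – comparing the legacy and migrated versions of a table row by row, keyed on the primary key – is simple enough to sketch. Below is a minimal illustration (not Datafold’s implementation; table and column names are invented, and SQLite stands in for the warehouse):

```python
import sqlite3

def diff_tables(conn, old_table, new_table, key="id"):
    """Return (key, issue) pairs where two versions of a table disagree."""
    def fetch(table):
        cur = conn.execute(f"SELECT * FROM {table}")
        cols = [d[0] for d in cur.description]
        return {row[cols.index(key)]: dict(zip(cols, row)) for row in cur}

    old_rows, new_rows = fetch(old_table), fetch(new_table)
    issues = []
    for k in old_rows.keys() | new_rows.keys():
        if k not in new_rows:
            issues.append((k, "missing in new table"))
        elif k not in old_rows:
            issues.append((k, "unexpected in new table"))
        elif old_rows[k] != new_rows[k]:
            issues.append((k, "values differ"))
    return sorted(issues)

# Demo with hypothetical pre- and post-migration tables
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders_old (id INTEGER, amount REAL);
    CREATE TABLE orders_new (id INTEGER, amount REAL);
    INSERT INTO orders_old VALUES (1, 10.0), (2, 20.0), (3, 30.0);
    INSERT INTO orders_new VALUES (1, 10.0), (2, 25.0);
""")
print(diff_tables(conn, "orders_old", "orders_new"))
# -> [(2, 'values differ'), (3, 'missing in new table')]
```

A real diff at warehouse scale would of course push the comparison into SQL (checksums per partition, sampled value comparisons) rather than pulling rows into memory.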
Importantly, choosing a scalable DWH does not mean paying upfront for capacity that you don’t use: the best products allow you to start almost at $0 and scale as your data grows.
Fit all data that you collect (or will collect), be it terabytes, petabytes or exabytes, with costs growing linearly (i.e. doubling the storage should be not more than 2x more expensive).
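The linearity requirement is easy to sanity-check against a vendor’s price schedule. A toy check (all quotes are made-up numbers for illustration):

```python
# Given price quotes at increasing volumes, verify that cost grows
# no faster than volume, i.e. doubling storage is at most 2x the bill.

def is_linear_enough(quotes):
    """quotes: list of (terabytes, dollars_per_month), sorted by volume."""
    for (tb1, cost1), (tb2, cost2) in zip(quotes, quotes[1:]):
        if cost2 / cost1 > tb2 / tb1:  # cost outpaces volume
            return False
    return True

linear      = [(10, 230), (20, 460), (40, 920)]   # flat $/TB (hypothetical)
superlinear = [(10, 230), (20, 600), (40, 1600)]  # $/TB rises with scale

print(is_linear_enough(linear))       # True
print(is_linear_enough(superlinear))  # False
```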
There are different types of computational jobs that your DWH needs to run, including ingesting, transforming, and serving data.
Compute scalability means:
Red flag: avoid DWHs that have a physical limit on storage or concurrent query capacity. A “we’ll think about the larger scale once we get there” approach likely means arming a time bomb.
Decoupled Storage & Compute
As your business grows, you want to be able to add more space to store data or support a larger number of concurrent queries. Systems that couple storage with compute – i.e. data is stored physically on the same nodes that perform the computation (e.g. Druid & Redshift) – are typically much more expensive to scale, and sometimes have hard limits on either compute or storage scalability.
Price elasticity means paying for what you use – and, conversely, not paying for excess capacity and idle resources.
Here’s the summary of the pricing structures of the popular DWH products:
Comparing different pricing models is hard, and various benchmarks report different results. However, from a purely structural perspective, usage-based pricing (based on the volume of data stored and queried) is superior to time-based pricing:
You don’t pay for what you don’t use which is important as analytical load typically comes in bursts:
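The structural difference is easy to see with a toy simulation of a bursty working-day workload. All prices and volumes below are made-up assumptions; the point is that under time-based pricing you keep paying for the idle hours:

```python
# Toy comparison of usage-based vs. time-based pricing for a bursty
# analytical workload: quiet nights, busy business hours.
hourly_queries = [0] * 8 + [120, 300, 250, 200, 180, 220, 260, 150] + [20] * 8

TB_SCANNED_PER_QUERY = 0.05    # 50 GB scanned per query (assumption)
USAGE_PRICE_PER_TB = 5.0       # $/TB scanned (assumption)
CLUSTER_PRICE_PER_HOUR = 32.0  # cluster sized for the peak hour (assumption)

total_queries = sum(hourly_queries)
usage_cost = total_queries * TB_SCANNED_PER_QUERY * USAGE_PRICE_PER_TB
time_cost = CLUSTER_PRICE_PER_HOUR * len(hourly_queries)  # billed even when idle

print(f"queries/day: {total_queries}")
print(f"usage-based: ${usage_cost:.2f}/day")
print(f"time-based:  ${time_cost:.2f}/day")
```

With these (invented) numbers the usage-based bill comes out well under the always-on cluster; the gap widens the burstier the load. Auto-suspend features in time-billed products narrow it, but only if your warehouse actually goes fully idle.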
Simply put: it should be easy to get data in and out. By picking a solution that is supported by the major collection, integration, and BI vendors, you will save yourself a lot of headaches.
The requirements here depend a lot on the use case, but for typical data preparation and analytics tasks, SQL is the most commonly used expression language. We recommend choosing a data warehouse that has first-class support for SQL and supports most frequently used aggregation functions.
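“First-class SQL support” concretely means that everyday analytical queries – aggregations, `COUNT(DISTINCT …)`, grouping, ordering – just work. A representative example, with SQLite standing in for the warehouse and an invented events table:

```python
import sqlite3

# The kind of query a DWH should handle out of the box:
# standard aggregations over a fact table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE events (user_id INTEGER, country TEXT, revenue REAL);
    INSERT INTO events VALUES
        (1, 'US', 9.99), (2, 'US', 4.99), (3, 'DE', 9.99),
        (1, 'US', 4.99), (4, 'DE', 0.00);
""")

rows = conn.execute("""
    SELECT country,
           COUNT(DISTINCT user_id) AS users,
           SUM(revenue)            AS revenue
    FROM events
    GROUP BY country
    ORDER BY revenue DESC
""").fetchall()

for country, users, revenue in rows:
    print(country, users, round(revenue, 2))
```

Beyond the basics, it’s worth checking that the dialect covers window functions and semi-structured data (arrays, JSON), since analytics code leans on those heavily.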
Simplicity vs. flexibility: some systems expose interfaces for tuning performance. For example, Hive has over 250 parameters that can be adjusted for every query. That number is far beyond any human’s capacity to work with effectively, and it usually massively complicates query development for the average user.
Counterintuitively, over-optimizing for query execution speed when choosing a DWH can lead you in the wrong direction. Provided that the DWH is scalable – i.e. can run an arbitrary number of queries concurrently – speed is truly important only during the query development process. For example, when an analyst is building a report or a data engineer is building a new ETL job, every query run blocks their workflow. However, when the query is ready, i.e. proven to work on a small data volume and is ready to be executed against the entire dataset, speed doesn’t matter as much, as the user can switch to other tasks.
I suggest the following targets as a guideline:
For production runs, what matters is predictable query runtime so that the data pipelines can meet SLAs.
At a tera- or petabyte scale, data warehousing is all about tradeoffs: you can’t get infinite scalability and sub-second query speeds at the same time, although the best products, such as Snowflake, come close with smart caching techniques. Instead, shoot for “good enough” speed that can be maintained no matter how big your data and your team grow.
Fivetran’s TPC-DS benchmark shows that most mainstream engines are roughly in the same ballpark in terms of performance.
There can be use cases where speed is the most important factor (such as for interactive dashboards). In this case, you should choose systems optimized specifically for serving queries super fast, such as Druid or Clickhouse.
Modern data warehouses are complex, distributed software systems that are non-trivial to deploy and run, let alone develop. If you are tempted to go for a shiny new open-source engine, make sure to account for the full cost of ownership and carefully evaluate the use case and trade-offs. I recommend going as serverless as possible and focusing your attention and engineering resources on building your customer-facing product. Data warehousing in 2020 is a well-commoditized technology, and unless you are big enough to harvest economies of scale, hosting a DWH yourself is most likely the worse option economically.
Also be mindful that while some products, such as Redshift, market themselves as “database as a service”, they still require a non-trivial amount of ops work: vacuuming, compressing, and resizing the cluster can easily require a full-time engineer or two.
Ultimately, it seems that for the vast majority of companies, BigQuery & Snowflake are the best data warehousing options. Both are infinitely scalable, require no devops or maintenance, are reasonably fast, offer first-class support for SQL, and work well with the best BI & ETL tools.
Some important distinctions: