
Show HN: MetricFlow – open-source metric framework

98 points | nicholashandel | 3 years ago | github.com

Hi HN community, I’m Nick, co-founder/CEO of Transform.co. I’m thrilled to share MetricFlow, an open-source metric creation framework: https://github.com/transform-data/metricflow

MetricFlow strives to make what has historically been an extremely repetitive process, writing SQL queries on core normalized data models, much more DRY. MetricFlow consolidates the definitions for joins, aggregations, filters, etc., and programmatically generates SQL to construct data marts. You can think of it like LookML, but more powerful and ergonomic (and open source!). The project has three components:

1. MetricFlow Spec: The specification encapsulates metric logic in a more reusable set of abstractions: data_sources, measures, dimensions, identifiers, metrics, and materializations.

2. DataFlow Planner: The Query Planner is a generalized SQL constructor. We take in data sources (ideally normalized data models) and generate a graph of data transformations (a flow, if you will) – joins, aggregations, filters, etc. We take that graph and render it down to db-specific SQL while optimizing it for performance and legibility.

3. MetricFlow Interfaces: The CLI and Python SDK rely on the flexibility of the Spec and Planner to build just about any query you could ask for on top of your data warehouse.
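The "graph of data transformations rendered down to SQL" idea behind the planner can be illustrated with a toy sketch. This is not MetricFlow's actual plan representation; the node types and names here are invented for illustration only.

```python
# Toy sketch of a dataflow plan: a tree of transformation nodes that
# gets rendered to SQL. Real MetricFlow plans are far richer (joins,
# filters, optimization passes); this only shows the core idea.
from dataclasses import dataclass


@dataclass
class Read:
    """Leaf node: read from a source table."""
    table: str


@dataclass
class Aggregate:
    """Aggregate a measure, grouped by a dimension."""
    source: object
    group_by: str
    measure: str


def render(node) -> str:
    """Walk the plan tree and render it to a SQL string."""
    if isinstance(node, Read):
        return node.table
    if isinstance(node, Aggregate):
        inner = render(node.source)
        return (f"SELECT {node.group_by}, {node.measure} "
                f"FROM {inner} GROUP BY {node.group_by}")
    raise TypeError(f"unknown plan node: {node!r}")


plan = Aggregate(Read("orders"), "country", "SUM(order_total) AS revenue")
sql = render(plan)
```

Because the plan is data rather than a hand-written query, the same tree can be optimized or re-rendered for different SQL dialects before execution.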

These components enable novel features that other semantic layers struggle to support today:

- MetricFlow enables the user to traverse the entire graph of a company’s data warehouse without confining their analysis to pre-built data models (dbt), Explores (in Looker), or Cubes (in lots of tools).

- The Metric abstraction allows the construction of complex metrics that traverse the graph described above and draw on multiple data sources. We support several common metric types today, and adding more is a critical part of the open-source roadmap.

- The Materialization abstraction allows users to define and then programmatically generate data marts that rely on a single DRY expression of the metrics and dimensions.
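The materialization idea — one DRY definition driving both ad hoc queries and data mart generation — can be sketched in a few lines. The definition format below is hypothetical, not MetricFlow's actual spec.

```python
# Toy sketch: a single metric definition drives both an ad hoc SELECT
# and the DDL for a materialized data mart, so the logic lives in one
# place. Field names are invented, not MetricFlow's real config schema.
METRIC = {
    "name": "revenue",
    "expr": "SUM(order_total)",
    "dimensions": ["country", "ds"],
    "table": "orders",
}


def metric_select(metric: dict) -> str:
    """Render the metric as a grouped aggregation query."""
    dims = ", ".join(metric["dimensions"])
    return (f"SELECT {dims}, {metric['expr']} AS {metric['name']} "
            f"FROM {metric['table']} GROUP BY {dims}")


def materialize(metric: dict, mart_name: str) -> str:
    """Emit data mart DDL from the same single metric definition."""
    return f"CREATE TABLE {mart_name} AS {metric_select(metric)}"


ddl = materialize(METRIC, "revenue_mart")
```

If the metric's expression changes, both the interactive queries and the generated mart pick up the change from the one shared definition.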

MetricFlow is open source (https://github.com/transform-data/metricflow) and distributed through PyPI (`pip install metricflow`). You can set up a set of sample configs (`mf setup`) and try out a tutorial (`mf tutorial`). The docs are all here (https://docs.transform.co/docs/overview/metricflow-overview). We’d love contributions on GitHub. We’re adding new Issues to share our roadmap in the coming days, but feel free to open your own.

We’re also opening up a Slack community (https://community.transform.co/metricflow-signup) to talk about the project and, more generally, metric tooling.

Let us know what you think – we’ll be here answering any questions!

26 comments


mjirv|3 years ago

This is awesome, though I would love some more detail in the documentation.

What’s the quick pitch for why I should use this instead of Cube or dbt’s metrics layer?

nicholashandel|3 years ago

I think it’s probably best to talk about this comparison in three areas:

Semantics - The MetricFlow spec allows the construction of a much broader range of metrics, with far less logic to express and duplicate, than dbt or Cube.

Performance - MetricFlow generates queries that rival the optimizations of a skilled data engineer, and it builds pre-aggregated tables similar to Cube, while dbt builds a static query from a Jinja macro.

Interfaces - Cube has some great interfaces for frontend developers, dbt just generates SQL at this point, and MetricFlow has a Python SDK and a CLI. The hosted version, Transform, comes with SQL and GraphQL interfaces, but those are beyond the scope of the OSS project.

seektable|3 years ago

It looks like MetricFlow shines at constructing SQL queries on demand, which means it should be used directly by a BI tool; am I right about that? Generating static SQL (with the CLI) for each report doesn't seem very usable in practice.

In other words, BI tools need a special connector that automatically utilizes the MetricFlow Python API (or CLI). Which BI tools can already use MetricFlow in this way (with the open-source part of the project)?

Actually, I'm asking as a BI tool vendor. We have added the ability to use custom connectors (web API), so potentially this kind of connector could use MetricFlow for SQL generation.

theboat|3 years ago

Thank you for open sourcing this. More competition in the budding metrics ecosystem is good for end users.

It seems like you think MetricFlow should be the data mart layer and not just the metrics layer. If that's true...why? Why would I join my fact and dimension tables in metricflow instead of in dbt? One of the value adds of dbt is that it centralizes business logic in a single place. Joins are business logic. The industry seems to be moving towards creating very wide data mart tables in dbt and surfacing them to the semantic layer 1:1, or building the metrics layer on top of them.

tlento|3 years ago

I'd say we think MetricFlow should be able to provide consistent, correct answers to reasonable queries end users of the metric model might ask. To do this across the various data warehouse layouts our users are likely to encounter we must necessarily provide support for dimensional joins. This doesn't mean MetricFlow should displace data mart services - to the contrary, I contend MetricFlow works best when layered on top of a warehouse built on centralized logic for managing its data layout. As an example, we generally push our customers to rely on the sql_table data source definition and push any sql_query constructs down to whatever warehouse management layers they have in place.

That said, you need to support joins, at least in some limited scope, in the semantic metric layer for it to be broadly useful. Consider this scenario - you have your dbt models producing wide tables for reasonable measure/dimension queries, and you have MetricFlow configs for the metric and dimension sets available in your data mart. Now imagine you've also got your finance team hooked up to a Google Sheets connector, and they're looking at revenue and unique customers by sales region. Cool, your wide table has that built in, no joins needed.

But what if they want something new? Let's say they want to know how they're doing against the target addressable market in each country. Should they have to submit a ticket to the data engineering team to add customer.country.market_size to your revenue table? Or should they be able to do "select revenue by customer__country__market_size" and get the report they need?

Our position is that we want to facilitate the latter - people getting what they need and knowing, as long as it's been defined properly in the model, that it's going to produce reasonable results. If your particular organization wants all of those joins run through a data mart ticket queue and surfaced as fully denormalized underlying tables that's fine by us, but most likely that's not what you want. You'd rather have some visibility into the types of joins people are requesting and then build out your data mart to more efficiently serve the requests people have on the ground, while still allowing them to ask new questions of the data without a long development feedback loop.
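The `customer__country__market_size` example above can be sketched as a walk over declared join links: each double-underscore hop follows a foreign-key relationship to the next table. This is a toy with a hypothetical schema; MetricFlow's actual planner is far more general.

```python
# Toy join-graph resolution: given declared foreign-key links between
# tables, resolve a double-underscore dimension path into the chain of
# joins needed to reach it. Schema and key names are hypothetical.
LINKS = {
    # (from_table, identifier) -> (join_key, target_table)
    ("transactions", "customer"): ("customer_id", "customers"),
    ("customers", "country"): ("country_code", "countries"),
}


def resolve_path(base_table: str, path: str):
    """Resolve e.g. 'customer__country__market_size' from base_table
    into (join clauses, fully qualified column)."""
    *hops, column = path.split("__")
    joins, table = [], base_table
    for hop in hops:
        key, target = LINKS[(table, hop)]
        joins.append(f"JOIN {target} ON {table}.{key} = {target}.{key}")
        table = target
    return joins, f"{table}.{column}"


joins, col = resolve_path("transactions", "customer__country__market_size")
```

The point of the sketch: because the links are declared once in the model, the end user only names the dimension path, and the engine derives the (correct) joins rather than waiting on a data mart ticket.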

XCSme|3 years ago

It's a bit unclear how I would go about integrating MetricFlow with, for example, https://uxwizz.com, which uses a MySQL database to store analytics data. From the docs, I don't really understand how it actually "understands" the underlying SQL database and how to retrieve the data I need. It feels like I have to write the query to get the data I want, but in a different syntax. Is there any point in using MetricFlow if you only have one data source?

tlento|3 years ago

Three things:

First, MetricFlow does not currently support MySQL. We launched with support for BigQuery, Redshift, and Snowflake. I have opened an issue to add support for MySQL (and similar issues for other SQL engines are coming): https://github.com/transform-data/metricflow/issues/27

Second, what we call a data source is closer to a table in a database than to the underlying database service itself. MetricFlow itself is useful when you're using a single SQL engine - indeed, that's all we support today - but it is most useful when you're in a world where joins are a thing. That said, if you have one big data table you might still find it useful to have declarative metric definitions defined in MetricFlow. Suppose, for example, you had a big NoSQL-style table filled with JSON objects. You might define a few data sources that normalize those JSON objects into top-level elements (identifiers, dimensions, aggregated measures) using the sql_query data source config attribute, and that would allow you to support structured queries on the data consumption end while pushing unstructured blobs from your application layer. This will be slow at query time, and only as reliable as the level of discipline exerted in your application development workflow, but it's possible.
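The "normalize JSON blobs behind a query-defined data source" idea can be demonstrated end to end with SQLite's `json_extract` as a stand-in engine (MetricFlow does not support SQLite; the table and field names here are hypothetical).

```python
# Sketch: a raw table of JSON blobs is lifted into typed columns by a
# normalizing query - the kind of SQL a query-defined data source might
# wrap - and metric queries then run over the normalized shape.
# SQLite stands in for a real warehouse; names are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (payload TEXT)")
conn.executemany(
    "INSERT INTO events VALUES (?)",
    [('{"user_id": 1, "country": "US", "revenue": 10.0}',),
     ('{"user_id": 2, "country": "US", "revenue": 5.0}',)],
)

# The normalizing query: JSON fields become identifier, dimension, and
# measure-input columns.
normalized = """
    SELECT json_extract(payload, '$.user_id') AS user_id,
           json_extract(payload, '$.country') AS country,
           json_extract(payload, '$.revenue') AS revenue
    FROM events
"""

# A metric query over the normalized shape: revenue by country.
rows = conn.execute(
    f"SELECT country, SUM(revenue) FROM ({normalized}) GROUP BY country"
).fetchall()
```

As the comment notes, pushing the extraction into query time like this is slow and only as trustworthy as the JSON your application writes, but it does give structured consumption over unstructured storage.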

Third, if we did support MySQL you'd basically connect to it via standard connection parameters - we have a config file where you can store the required information and then we'll manage the connections for you. However, I'm not familiar with uxwizz, and a quick perusal of their documentation did not turn up how one goes about connecting to the underlying DB. It's likely I just missed this, but at any rate I don't know how it is done. If they don't support standard MySQL client connections you'd need to write an adapter of some kind against whatever DB connection APIs they provide, in which case you'd likely need to roll a custom implementation of MetricFlow's SqlClient interface and initialize the MetricFlowEngine with that.
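The adapter approach described above might look roughly like the sketch below. The real SqlClient interface in MetricFlow has more methods and different signatures; everything here (class names, the vendor API) is hypothetical and only illustrates the adapter pattern.

```python
# Hypothetical shape of a custom engine adapter: generated SQL is routed
# through a vendor's web API instead of a standard DB client. This is
# NOT MetricFlow's actual SqlClient interface - names are invented.
from abc import ABC, abstractmethod


class SqlClientSketch(ABC):
    """Stand-in for an engine-agnostic SQL client protocol."""

    @abstractmethod
    def query(self, sql: str) -> list[tuple]:
        """Run a query and return rows."""


class VendorApiSqlClient(SqlClientSketch):
    """Adapter translating SQL execution into a vendor's API calls."""

    def __init__(self, api):
        self.api = api  # whatever client object the vendor provides

    def query(self, sql: str) -> list[tuple]:
        # Delegate to the vendor API and normalize rows to tuples.
        return [tuple(row) for row in self.api.run_sql(sql)]


class FakeApi:
    """Minimal stub standing in for a vendor's web API client."""

    def run_sql(self, sql):
        return [[1, "ok"]]


client = VendorApiSqlClient(FakeApi())
```

The engine that consumes the client only sees `query(sql)`, so swapping in an adapter like this keeps the rest of the stack unchanged.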

pstoll|3 years ago

Cool. Like open source Looker.

We adopted Looker at $previous_job. Then they got bought by Google, which was great for us as we were becoming a big GCP customer. I strongly encouraged the Google/Looker team to at least open source LookML (the Looker modeling language, equivalent to MQL). They couldn’t figure it out.

This type of metric definition is so empowering for businesses. Not enough engineers grok why this is useful.

hintymad|3 years ago

Interested in why it's useful too. If the same SQL is used in multiple places and causes confusion, isn't that an organizational problem rather than a technical one? For instance, what if two teams create two different, conflicting metric definitions to answer the same question? It's turtles all the way down: how could we prevent divergence of query definitions even with a perfect metric definition system?

fortysixdegrees|3 years ago

Could you elaborate on why it's useful? I don't get it, but I'm intrigued.

mritchie712|3 years ago

Is MetricFlow what Transform (the product) uses under the hood?

nicholashandel|3 years ago

Yes! MetricFlow is core to everything we do so there will be tons of active development from our team.

awinter-py|3 years ago

love this as an area of innovation

my wishlist item for 'standard metrics definitions' is for libraries + servers to ship with a spec of what they export

so that if I'm using, for example, a plugin for a reverse proxy, or a twilio verification library, it can install its own metrics and alerts in my dashboard system

moltar|3 years ago

How does it compare to dbt metrics, and what are the plans to compete with it?

pstoll|3 years ago

Quick scan of dbt metrics - it looks like it lets you define a flat, single view of a metric with a predefined set of “filters” (i.e., SQL predicates). Also, it only allows a metric to come from a single table; “joins” are a TODO.

It is missing the runtime engine to “interpret” the semantic model, infer arbitrary but correct joins among tables, and generate the appropriate SQL to realize those joins.

If anyone knows dbt better and also knows semantic models, please correct me.