
Databases should contain their own Metadata – Use SQL Everywhere

46 points | matheusalmeida | 18 days ago | floedb.ai

33 comments


zabzonk|13 days ago

Not clear if the author realises that all commercial SQL database engines support querying of the database's metadata using SQL. Or maybe I have misunderstood - I only skimmed the article.

da_chicken|13 days ago

Yeah, this seemed like a very long way to say, "Our RDBMS has system catalogs," as if it's 1987.

But then, they're also doing JOINs with the USING clause, which seems like one of those things that everybody tries... until they hit one of the several reasons not to use them, and then they go back to the ON clause which is explicit and concrete and works great in all cases.
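To make that trade-off concrete, here is a minimal sketch of the same join written both ways (SQLite via Python; the `orders`/`customers` tables are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (customer_id INTEGER, name TEXT);
    CREATE TABLE orders (customer_id INTEGER, total REAL);
    INSERT INTO customers VALUES (1, 'Ada');
    INSERT INTO orders VALUES (1, 9.99);
""")

# USING is terse while both key columns share a name...
rows = conn.execute("""
    SELECT name, total
    FROM orders JOIN customers USING (customer_id)
""").fetchall()
print(rows)  # [('Ada', 9.99)]

# ...but the explicit ON form keeps working even if a key is renamed
# (say orders.customer_id becomes orders.cust_id), and lets you
# table-qualify the column on both sides:
rows = conn.execute("""
    SELECT name, total
    FROM orders JOIN customers
      ON orders.customer_id = customers.customer_id
""").fetchall()
print(rows)  # [('Ada', 9.99)]
```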

Personally, I'd like to hear more about the claims made about Snowflake IDs.

whynotmaybe|13 days ago

I don't think the example in the article is that easy to do just by using information_schema.

> Which tables have a column with the name country where that column has more than two different values
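The quoted query is a good illustration of the gap: the catalog can find the columns, but the distinct-value condition is about the data, so today it takes one extra query per table. A sketch of what that looks like (SQLite via Python as a stand-in; real systems would use information_schema, and the table names are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER, country TEXT);
    CREATE TABLE suppliers (id INTEGER, country TEXT);
    CREATE TABLE products  (id INTEGER, name TEXT);
    INSERT INTO customers VALUES (1, 'DE'), (2, 'FR'), (3, 'US');
    INSERT INTO suppliers VALUES (1, 'DE'), (2, 'DE');
""")

# Step 1: the catalog can answer "which tables have a country column?"
tables = [
    name for (name,) in conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'")
    if any(col[1] == 'country'
           for col in conn.execute(f"PRAGMA table_info({name})"))
]

# Step 2: "more than two different values" is a property of the data,
# not the schema -- so each candidate table needs its own data query.
matches = [
    t for t in tables
    if conn.execute(f"SELECT COUNT(DISTINCT country) FROM {t}")
          .fetchone()[0] > 2
]
print(matches)  # only customers has more than two distinct countries
```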

But on their product page, the definition of FloeSQL left me puzzled:

> It uses intelligent caching and LLVM-based vectorized execution to deliver the query execution speed your business users expect.

> With its powerful query planner, FloeSQL executes queries with lots of joins and complicated SQL syntax without breaking your budget.

deepsun|13 days ago

On a different note though, AFAIR PostgreSQL does most schema changes transactionally, but MySQL does not.

galaxyLogic|13 days ago

Isn't this like it is in many relational databases, where you can query them about the tables they contain?

matheusalmeida|13 days ago

The key difference is that it's not just about schema metadata (tables, indexes, views, columns, etc.). PostgreSQL is fabulous in this regard. Even native types are part of the catalog (pg_catalog).

Things are great in your DB... until they aren't. The post is about making observability a first-class citizen. Plans and query execution statistics, for example, queryable using a uniform interface (SQL) without the need to install DB extensions.

tkejser|13 days ago

Hi All

Original author here (no, I am not an LLM).

First, a clarifying point on INFORMATION_SCHEMA. In the post I make it clear that this interface has been supported by pretty much every database since the 1980s. Most tools would not exist without it. When you write an article like this, you are trying to reach a broad audience, and not everyone knows that there are standards for this.

But, our design goes further and treats all metadata as data. It's joinable, persisted and acts, in every way, like all other data. Of course, some data we cannot allow you to delete - such as that in `sys.session_log` - because it is also an audit trail.

Consider, by contrast, PostgreSQL's `pg_stat_statements`. This is an aggregated, in-memory summary of recent statements. You can get the high-level view, but you cannot get every statement run and how that particular statement deviated from statements like it. You also cannot get the query plan for a statement that ran last week.

To address the obvious question: "Isn't that very expensive to store?"

Not really. Consider a pretty aggressive analytical system (not OLTP) - you get perhaps 1000 queries/sec. The query text is normalised and so is the plan - so the actual per-query data (runtimes, usernames, skewness, stats about various operators) is on the order of a few hundred bytes. Even on a very busy system, we are talking double-digit GB every day - on cheap Object Storage. Your company's web servers store orders of magnitude more data than that in their logs.
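The numbers above check out as a back-of-envelope (assuming ~300 bytes per query record as a concrete stand-in for "a few hundred bytes"):

```python
# Back-of-envelope for the storage claim: normalised query text and
# plans are stored once, so each execution only adds a small record.
queries_per_sec = 1_000   # "pretty aggressive analytical system"
bytes_per_query = 300     # assumed: "a few hundred bytes"
seconds_per_day = 86_400

gb_per_day = queries_per_sec * bytes_per_query * seconds_per_day / 1e9
print(round(gb_per_day, 1))  # ~26 GB/day -> "double-digit GB" holds
```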

With a bit of data rotation, you can keep the aggregate sizes manageable over time.

What stats do we store about queries?

- Rows in each node (count, not the actual row data, as that would be a PII problem)
- Various runtimes
- Metadata about who, when and where (ex: cluster location)

Again, these are tiny amounts of data in the grand schema of things. But somehow our industry accepts that our web servers store all that - but our open source databases don't (this level of detail is not controversial in the old school databases by the way).

Of course, we can go further than just measuring the query plan.

Performance profiling of workers is a concept you can talk about - so it is also metadata. Let us say you want to really understand what is going on inside a node in a cluster.

You can do this:

```sql
SELECT stack_frame, samples
FROM sys.node_trace
WHERE node_id = 42
```

This returns a 10-second sample (via `perf`) of the process running on one of the cluster nodes.

(Obviously, that data is ephemeral - we are good at making things fast, but we can't make tracing completely free.)

Happy to answer all questions

waffletower|13 days ago

"SQL everywhere" is decidedly dystopian. That being said, creating a standard for database introspection could be powerful for agents.

umairnadeem123|13 days ago

[deleted]

zabzonk|13 days ago

> which queries produced which rows

I doubt many real-world applications could tolerate the amount of data/performance degradation this implies. If you need this (and I can't think why you would), then I think writing your own logging code is the answer, rather than lumbering everyone else with it.

da_chicken|13 days ago

That's quite expensive. Most systems that need this sort of data will instead implement some form of audit log or audit table. Which is still quite expensive.

At the record level, I've seldom seen more than an add timestamp, an add user id, a last change timestamp, and a last change user id. Even then, it covers any change to the whole row, not every field. It's still relatively expensive.
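That record-level pattern can be sketched in a single table definition (SQLite via Python; the `orders` table and its columns are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# The common row-level audit pattern: who/when for the insert and for
# the most recent change -- nothing per-field, nothing per-query.
conn.execute("""
    CREATE TABLE orders (
        id          INTEGER PRIMARY KEY,
        total       REAL,
        created_at  TEXT DEFAULT (datetime('now')),
        created_by  TEXT,
        updated_at  TEXT,
        updated_by  TEXT
    )""")
conn.execute(
    "INSERT INTO orders (id, total, created_by) VALUES (1, 9.99, 'ada')")
row = conn.execute(
    "SELECT created_by, created_at IS NOT NULL FROM orders").fetchone()
print(row)  # ('ada', 1) -- the default filled in the timestamp
```

Note that this records *that* the row changed and by whom, not which query produced it - which is exactly why "which queries produced which rows" is so much more expensive.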

ewuhic|13 days ago

AI shit slop

jwneil|12 days ago

Def not - I know the author, and he simply didn't need to go there - read more carefully.