aobdev | 10 months ago

This is not my area of expertise so take this with a grain of salt, but my initial instinct was to question why you would need to store the tsvector both in the table and in the index (because the tsvector values will in fact be stored losslessly in a GIN index).

The PG docs make it clear that this only affects row rechecks, so this would only affect performance on matching rows when you need to verify information not stored in the index, e.g. queries with weighted text or queries against a lossy GiST index. It's going to be use-case dependent but I would check if your queries need this before using up the additional disk space.
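For instance (borrowing the `benchmark_logs` table and `message` column names used elsewhere in the thread), a weight-qualified query is the kind that forces a recheck, because the GIN index stores lexemes but not their weight labels:

  -- Illustrative: the ':A' weight label cannot be verified from the
  -- GIN index alone, so each matching row is rechecked against the
  -- heap (or a stored tsvector column, if one exists).
  SELECT *
  FROM benchmark_logs
  WHERE to_tsvector('english', message)
        @@ to_tsquery('english', 'research:A');

A plain unweighted query against the same index needs no recheck, which is the point being made above.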

sgarland | 10 months ago

If only Postgres had Virtual Generated Columns. Not being snarky; MySQL has had them for ages, and they are a perfect fit for this: takes up essentially zero disk space, but you can index it (which is of course stored).
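For reference, this is roughly what that looks like in MySQL (illustrative table and column names; the `VIRTUAL` column occupies no row storage, only the index on it does):

  -- MySQL: the column value is computed on read; only the
  -- secondary index on it is materialized to disk.
  CREATE TABLE logs (
    payload JSON,
    host VARCHAR(64) GENERATED ALWAYS AS (payload->>'$.host') VIRTUAL,
    INDEX idx_host (host)
  );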

It is, in my mind, the single biggest remaining advantage MySQL has. I used to say that MySQL’s (really, InnoDB’s) clustering index was its superpower when wielded correctly, but I’ve done some recent benchmarks, and even when designing schema to exploit a clustered index, Postgres was able to keep up in performance.

EDIT: the other thing MySQL does much better than Postgres is “just working” for people who are neither familiar with nor wish to learn RDBMS care and feeding. Contrary to what the hyperscalers will tell you, DBs are special snowflakes, they have a million knobs to turn, and they require you to know what you’re doing to some extent. Postgres especially has the problem of table bloat and txid buildup from its MVCC implementation, combined with inadequate autovacuum. I feel like the docs should scream at you to tune your autovacuum settings on a per-table basis once you get to a certain scale (not even that big; a few hundred GB on a write-heavy table will do). MySQL does not have this problem, and will happily go years on stock settings without really needing much from you. It won’t run optimally, but it’ll run. I wouldn’t say the same about Postgres.
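As a sketch of what that per-table tuning looks like (table name and values are illustrative, not recommendations):

  -- Trigger autovacuum after ~1% of rows change instead of the
  -- default 20%, which matters once the table is hundreds of GB.
  ALTER TABLE big_write_heavy_table SET (
    autovacuum_vacuum_scale_factor = 0.01,
    autovacuum_analyze_scale_factor = 0.005
  );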

charettes | 10 months ago

Virtual generated columns are not required to allow an index to be used in this case without incurring the cost of materializing `to_tsvector('english', message)`. Postgres supports indexing expressions, and the query planner is smart enough to identify candidate indexes on exact expression matches.

I'm not sure why the author doesn't use them but it's clearly pointed out in the documentation (https://www.postgresql.org/docs/current/textsearch-tables.ht...).

In other words, I believe they didn't need a `message_tsvector` column and creating an index of the form

  CREATE INDEX idx_gin_logs_message_tsvector
  ON benchmark_logs USING GIN (to_tsvector('english', message))
  WITH (fastupdate = off);
would have allowed queries of the form

  WHERE to_tsvector('english', message) @@ to_tsquery('english', 'research')
to use the `idx_gin_logs_message_tsvector` index without materializing `to_tsvector('english', message)` on disk outside of the index.

Here's a fiddle supporting it https://dbfiddle.uk/aSFjXJWz

CodesInChaos | 10 months ago

I hope OrioleDB will succeed in replacing Postgres' high maintenance storage engine with something that just works.

mastax | 10 months ago

MySQL logical replication isn’t quite foolproof but it’s vastly easier than anything PostgreSQL offers out of the box. (I hope I’m wrong!)

brightball | 10 months ago

I mean, technically any database with triggers can have generated columns, but PostgreSQL has had generated columns since version 12. Current version is 17.

https://www.postgresql.org/docs/current/ddl-generated-column...
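A minimal sketch of that, reusing the table and column names from the thread (this is presumably what the article's `message_tsvector` column amounts to):

  -- Stored generated column: kept in sync automatically,
  -- but materialized on disk alongside the source column.
  ALTER TABLE benchmark_logs
    ADD COLUMN message_tsvector tsvector
    GENERATED ALWAYS AS (to_tsvector('english', message)) STORED;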

I can’t think of any advantage of a virtual generated column over a stored generated column for something like a search index, where computing the value on every read would be very slow.

Postgres has been able to create indexes based on the output of functions forever though, which does the job here too.

senorrib | 10 months ago

That’s just syntax sugar for a trigger. Not really a big advantage.

codesnik | 10 months ago

It's not only the additional disk space; it's also the need to keep it in sync with the main column (via triggers or whatever), and much bigger backups. Why was the "initial instinct to question" it, though? I still don't see downsides, unless yeah, you need weighted queries, search across joined tables, etc.

aobdev | 10 months ago

It's a generated column, so there's no overhead to keep it up to date, but all generated columns in PG are stored. The corpus for text search will be stored 1x in the source column, 1x as a tsvector in the index, and an additional 1x in the generated column if you add one. That's a 50% increase in disk usage.

cryptonector | 10 months ago

That is definitely an issue, and it seems like a win for pg_search. But as siblings note, PG 18 will have virtual generated, indexable columns, so that advantage for pg_search will go away.