weliveindetail | 1 year ago
Actually, it's implemented in LLVM for years :) https://github.com/llvm/llvm-project/commit/a98546ebcd2a692e...
SigmundA | 1 year ago
Plans themselves suffer from this since they are in-memory data structures not designed to be shared among different processes.
DBs like MSSQL don't have this issue because they use a single process with threads, which is also why they can handle more concurrent connections without an external pooler. MSSQL can also serialize plans to a non-process-specific representation and store them in the database for things like plan locking.
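The distinction above can be sketched in a few lines: a planner's pointer-based tree is only meaningful inside the process that built it, but flattening it into a serialized, process-independent form lets another process rebuild it. This is a toy model, not any real database's plan format:

```python
import json

# Hypothetical, minimal plan node: real planners build pointer-rich trees
# whose memory addresses are only valid inside the owning process.
class PlanNode:
    def __init__(self, op, children=None):
        self.op = op
        self.children = children or []

def serialize(node):
    """Flatten the pointer-based tree into a process-independent form."""
    return {"op": node.op, "children": [serialize(c) for c in node.children]}

def deserialize(d):
    """Rebuild the tree in a different process's address space."""
    return PlanNode(d["op"], [deserialize(c) for c in d["children"]])

plan = PlanNode("HashJoin", [PlanNode("SeqScan"), PlanNode("IndexScan")])
wire = json.dumps(serialize(plan))   # safe to store in a table or hand to another process
rebuilt = deserialize(json.loads(wire))
print(rebuilt.op)  # HashJoin
```

The serialized form is what makes features like storing a plan in the database possible: it contains no process-specific pointers.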
hans_castorp | 1 year ago
Oracle uses a process-per-connection model as well (at least on Linux), and they are able to share execution plans across connections. They put all the plans into the "global" shared memory.
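A toy model of that approach: one backend plans a query and publishes the serialized plan in shared memory, and any other backend attaches by name and reuses it without re-planning. The names and layout here are illustrative, not Oracle's actual shared-memory structures:

```python
import json
from multiprocessing import shared_memory

def publish_plan(plan):
    """Serialize a plan and place it in OS shared memory."""
    data = json.dumps(plan).encode()
    shm = shared_memory.SharedMemory(create=True, size=len(data))
    shm.buf[:len(data)] = data
    return shm, shm.name, len(data)

def reuse_plan(name, size):
    """Attach to the shared segment and rehydrate the plan (no re-planning).
    A different process would do exactly this, given the shared name."""
    shm = shared_memory.SharedMemory(name=name)
    plan = json.loads(bytes(shm.buf[:size]).decode())
    shm.close()
    return plan

creator, name, size = publish_plan({"op": "IndexScan", "index": "t_pk"})
plan = reuse_plan(name, size)
print(plan["op"])  # IndexScan
creator.close()
creator.unlink()
```

The key design point is that the cache lives in memory visible to every connection process, so a process-per-connection model does not by itself rule out plan sharing.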
hinkley | 1 year ago
It turned out 9i could only run queries that currently resided in the query cache, and some idiot (who was now my boss) had fucked up our query builder code so that we were generating too many unique queries. Not enough bind variables.
So it’s clear Oracle was using a shared plan cache back then, but like other people here, I’m scratching my head over how this would work with Postgres’s flavor of MVCC. Maybe share query plans once the transaction completes?
I feel like that would get you 90% of the way there, but with some head-of-queue nastiness.
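The bind-variable problem described above can be sketched with a toy cache keyed on SQL text (the cache and the "planning" step are stand-ins, not Oracle's implementation): baking literals into the query text makes every statement unique and defeats the cache, while a parameterized text is planned once and reused.

```python
# Toy plan cache keyed by SQL text.
plan_cache = {}
plans_built = 0

def get_plan(sql):
    global plans_built
    if sql not in plan_cache:
        plans_built += 1                      # expensive planning happens here
        plan_cache[sql] = f"plan for: {sql}"
    return plan_cache[sql]

# Literals embedded in the text: 100 distinct cache entries, 100 plans built.
for i in range(100):
    get_plan(f"SELECT * FROM orders WHERE id = {i}")
print(plans_built)  # 100

# One parameterized text with a bind variable: a single entry, reused 100 times.
plan_cache.clear()
plans_built = 0
for i in range(100):
    get_plan("SELECT * FROM orders WHERE id = :id")
print(plans_built)  # 1
```

In a shared cache with a fixed capacity, the first pattern also evicts everyone else's plans, which is why "not enough bind variables" can hurt the whole system, not just one application.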
asah | 1 year ago
Can plans be shared? I might be mistaken, but I thought each worker backend can (e.g.) be assigned to a different partition of a partitioned table, with very different indexes, statistics, etc.
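One hedged way to frame this concern: if partitions differ in indexes and statistics, a shared cache cannot key on query text alone; the key (and the invalidation) would have to include which relation and which statistics version the plan was built against. Everything below is illustrative, not how any real planner keys its cache:

```python
# Toy shared plan cache keyed by (query text, partition, statistics version).
plan_cache = {}

def plan_for(query, partition, stats_version):
    key = (query, partition, stats_version)
    if key not in plan_cache:
        # Planning would consult this partition's indexes and statistics.
        plan_cache[key] = f"plan({query!r}, {partition}, stats v{stats_version})"
    return plan_cache[key]

q = "SELECT * FROM events WHERE ts > :t"
plan_for(q, "events_2023", stats_version=7)
plan_for(q, "events_2024", stats_version=3)  # different partition: separate plan
plan_for(q, "events_2024", stats_version=4)  # stats changed: old entry is stale
print(len(plan_cache))  # 3
```

So sharing is not impossible in principle, but the cache granularity has to match whatever the plan actually depends on.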