
PostgreSQL EXPLAIN Output Explained

222 points | PhilipTrauner | 4 years ago | cybertec-postgresql.com

28 comments

[+] munk-a|4 years ago|reply
The article touched on some caveats but missed what I think is a big one - you really want to capture any detailed explains from an environment as close to production as possible. Different table statistics can send the planner in wildly different directions, and while faster is always better, it is very easy to get caught up sinking a lot of effort into making a query more performant when it was only running slow due to thrashing in RAM on a dev box.

EXPLAIN (with ANALYZE at least - which you should always use) is a lot less theoretical than you might assume. That can make it a bit more onerous to execute, but it ends up adding a lot of value to the statistics when you gather them.
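For reference, a minimal sketch of the difference (the table and filter are made up for illustration):

```sql
-- Plain EXPLAIN only shows the planner's estimates; the query is not run:
EXPLAIN SELECT * FROM orders WHERE customer_id = 42;

-- ANALYZE actually executes the query and reports real row counts
-- and timings next to the estimates; BUFFERS adds block I/O counts:
EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM orders WHERE customer_id = 42;
```

Note that ANALYZE really runs the statement, so wrap data-modifying queries in a transaction you roll back.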

Oh also - caching on Postgres is a thing (shared buffers plus the OS page cache), so if you're worried about performance from a cold state, don't forget to clear caches before executing. And if anyone has good suggestions for tools to screw up table statistics, I haven't found one that I like yet.

[+] samokhvalov|4 years ago|reply
> Different table statistics can cause the planner to go in wildly different directions

Exactly. That's why my team and I (Postgres.ai) have developed Database Lab Engine [1] and a chatops tool for SQL optimization, Joe bot [2], both are open-source (AGPLv3).

EXPLAIN (ANALYZE, BUFFERS) has to be executed on the same-size DB, with properly adjusted Postgres configuration.

Interestingly, the machine you use for query plan troubleshooting can have less RAM and different hardware in general – it doesn't matter to the planner. Even shared_buffers doesn't matter – you can set effective_cache_size to match production (we use this trick in Database Lab when the hardware is weaker than production's).
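A sketch of that trick (the value is illustrative): effective_cache_size is only a planner hint and allocates no memory, so it can be set to production's value even on a much smaller box:

```sql
-- Tell the planner to assume production-sized caching (shared buffers
-- plus OS page cache), regardless of this machine's actual RAM:
ALTER SYSTEM SET effective_cache_size = '96GB';
SELECT pg_reload_conf();
```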

As for cache state – very good point as well. I advocate a buffers- or rows-centric approach: first, optimize to reduce the number of buffers touched – or the number of rows, if you're working with a "logical" copy of the database (dump/restore) rather than a "physical" one (a PGDATA copy that keeps the same data layout, including bloat, etc.) – the lower those numbers, the better. Only then pay attention to timing, keeping in mind what can happen under the worst conditions (everything read from disk), if that makes sense for your case.

[1] https://postgres.ai/products/how-it-works

[2] https://postgres.ai/products/joe

[+] ganomi|4 years ago|reply
To get production EXPLAINs for problematic queries, you can activate auto_explain on a Postgres instance. For my transactional system I have set it up to log plans for all queries that take more than 2000 ms.
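A minimal postgresql.conf sketch for that setup (the 2000 ms threshold matches the comment; the other settings are illustrative choices):

```
# postgresql.conf
shared_preload_libraries = 'auto_explain'  # load the module at startup
auto_explain.log_min_duration = '2000ms'   # log plans for queries slower than 2 s
auto_explain.log_analyze = on              # include actual rows/timings (adds overhead)
auto_explain.log_buffers = on              # include buffer usage in the logged plan
```

Plans then show up in the regular Postgres log; log_analyze adds instrumentation overhead to every query, so enable it with care on busy systems.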
[+] comboy|4 years ago|reply
I would say you need it on production environment.

The exact same configuration is not enough. You want shared buffers and the disk cache to look the same as they do on production, and you also want the same common queries running in the background.

I mean "need" in the case of a busy database, at a level of optimization where small details matter. You can catch the more obvious stuff with much less care.
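Getting shared buffers into a production-like state can be sketched with the pg_prewarm extension (the table name is hypothetical):

```sql
CREATE EXTENSION IF NOT EXISTS pg_prewarm;

-- Load a table's blocks into shared buffers before benchmarking,
-- so the first run isn't measuring cold-cache I/O:
SELECT pg_prewarm('orders');
```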

[+] mrjaeger|4 years ago|reply
Do you know how to clear the cache inside a running Postgres instance? All of the articles online say to just restart the db, but that isn't feasible in some cases I've come across, such as testing against a remote db spun up with more prod-like data. Like you said, query perf against a cold cache vs. one that has had a lot of rows loaded into shared buffers can be quite different!
[+] firloop|4 years ago|reply
Great writeup. I use EXPLAIN a lot in development as a gut check — “does this descending index do what I thought it would? how expensive is that subquery?” Highly recommend looking at it early on, it helps me catch silly mistakes well before production.
[+] ezekg|4 years ago|reply
I use an awesome service called PgMustard [0] for parsing and debugging slow queries. It has saved me a lot of time, and has helped me resolve some pretty big (and complicated) bottlenecks.

[0]: https://pgmustard.com

[+] michristofides|4 years ago|reply
Thanks for the shout out. I'm half the team behind pgMustard - happy to answer questions here if anyone has any.
[+] goodpoint|4 years ago|reply
Warning: it's paid and requires sign-up with GitHub/Google even for a trial.
[+] hermanradtke|4 years ago|reply
An alternative to https://explain.dalibo.com/ is https://tatiyants.com/pev

Both have pros and cons about how they visualize things.

[+] tclancy|4 years ago|reply
Came here to suggest the same. If you hit the gears on the left and choose view: compact and graph: cost you can get a decent overview of the hot spots in complicated queries quickly.
[+] MrOxiMoron|4 years ago|reply
PEV is great, helped me figure out and fix specific query issues that only happened on production
[+] airstrike|4 years ago|reply
I appreciate that the first image in TFA is supposed to just be funny, but it would actually be useful to have output like that. Some of those analyses are tougher than others to code, but a subset of them is not entirely out of the realm of possibility.
[+] michristofides|4 years ago|reply
I hope we’re not truly a consultant’s nightmare, but we’ve got quite a few of these covered in pgMustard (15+ tip types) and are working to add more.
[+] eyelidlessness|4 years ago|reply
I've found Postgres EXPLAIN output completely unhelpful for as long as I've been using Postgres, and... this article didn't help.

> Find the lowest node where the estimated row count is significantly different from the actual row count.

> ...

> Under the heading “rows x” you see by what factor PostgreSQL overestimated or underestimated the row count. Bad estimates are highlighted with a red background.

Am I missing something? Everything actually shown displays identical row count estimates/actual, and red/yellow/orange associated with accurate estimates. What am I not seeing??
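For contrast, a fabricated plan fragment showing what a bad estimate actually looks like – the planner expected 1 row but got 52,000, which is the kind of node the article's advice is about:

```
->  Index Scan using orders_customer_idx on orders
      (cost=0.43..8.45 rows=1 width=64)
      (actual time=0.031..95.882 rows=52000 loops=1)
```

Here "rows=1" is the estimate and "rows=52000 loops=1" is reality – a 52,000x misestimate that can push the planner into a nested loop where a hash join would be far cheaper.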

[+] michristofides|4 years ago|reply
You’re quite right, the example given doesn’t have bad row estimates, and other cells are highlighted in red/orange/yellow for different reasons (proportion of time taken, in the case shown).

For an intro to this that goes through several examples, I highly recommend a conference talk[1] by Josh Berkus in 2015/16 that he gave a few times. It has aged pretty well and I’ve not yet seen the basics covered better.

[1]: https://youtu.be/mCwwFAl1pBU

[+] firloop|4 years ago|reply
The numbers can be useful, but I mainly pay attention to the steps the query planner takes. Is it doing an expensive loop over data? Is it using an index? Those can often be more illuminating than the numbers – you get a feel over time for what operations are actually expensive.
[+] efxhoy|4 years ago|reply
One thing I learned about EXPLAIN this week is that it doesn't show constraint checks. I was trying to delete about 40k rows from a table and it was taking hours, and I couldn't figure out why. EXPLAIN ANALYZE showed nothing about reading any tables other than the FROM and USING tables.

The table I was deleting from had 20 foreign key constraints referencing it, and a couple of them didn't have an index on the referencing column and were big (a few million rows). Added indexes to all of them, took a couple of minutes to build, and the DELETE ran in a few seconds.

Sometimes the answer to a performance issue can't be found in EXPLAIN. And always remember to properly index your foreign key constraints.
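The fix sketched in SQL (names are hypothetical): a DELETE on the referenced table makes Postgres look for matching rows in every referencing table, and that lookup is a sequential scan unless the FK column is indexed:

```sql
-- Declaring a foreign key does NOT automatically index the referencing
-- column; without this index, each deleted parent row forces a scan
-- of child_table to verify no rows still reference it:
CREATE INDEX CONCURRENTLY child_table_parent_id_idx
    ON child_table (parent_id);
```

CONCURRENTLY avoids blocking writes while the index builds, at the cost of a slower build and not being usable inside a transaction block.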

[+] michristofides|4 years ago|reply
EXPLAIN ANALYZE would have shown the referential integrity (RI) triggers taking most of the time, but it's still a bit of a leap to work out that it's due to missing foreign key indexes if you don't already know.
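
What that looks like at the bottom of EXPLAIN ANALYZE output (numbers and names fabricated for illustration): the FK checks don't appear in the plan tree, only in the trigger summary after it:

```
Delete on parent  (cost=0.00..695.00 rows=40000 width=6)
                  (actual time=312.551..312.552 rows=0 loops=1)
  ->  Seq Scan on parent  (...)
Planning Time: 0.210 ms
Trigger for constraint child_parent_id_fkey: time=184233.551 calls=40000
Execution Time: 184501.904 ms
```

Here almost all of the execution time is in the `Trigger for constraint` line, not in any plan node.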
[+] syastrov|4 years ago|reply
Wish I had a tool that could suggest things to do like the “cartoon” at the top of the article.