That's a huge wall of text to digest every time the report is run. It would be nice if there were a diff mode that could summarize what changed between two runs.
We have been using postgres-checkup for quite a while. In fact, we have standardized on it as the basis for the health check and performance analysis reports that we provide to our support customers.

It is a great project, constantly improving. Keep up the good work!
This seems like a great idea. I used to have a collection of magic Postgres queries that would give me metrics for optimizing data models and indices. I'm surprised there isn't a visual tool you can bolt onto your Postgres install to get valuable metrics out.
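One example of the kind of "magic query" this refers to (my own illustration, not necessarily one the tool itself runs) is spotting indexes that have never been scanned, using the standard `pg_stat_user_indexes` statistics view:

```sql
-- Indexes with zero scans since statistics were last reset:
-- likely candidates for removal (but check unique/constraint
-- indexes and replicas before dropping anything).
SELECT schemaname,
       relname      AS table_name,
       indexrelname AS index_name,
       pg_size_pretty(pg_relation_size(indexrelid)) AS index_size
FROM pg_stat_user_indexes
WHERE idx_scan = 0
ORDER BY pg_relation_size(indexrelid) DESC;
```

Sorting by size first surfaces the unused indexes that cost the most disk and write amplification.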
craigg | 6 years ago
samokhvalov helped us to get this set up
matthewaveryusa | 6 years ago
Would love to hear about this use case out of curiosity
KoenDG | 6 years ago
In how many databases? Spread across how many machines? Using hardware that is how old?
It may well be a good project, but statements like that don't inspire confidence.
samokhvalov | 6 years ago
The footprint is very minimal. I have 15+ years of Postgres DBA experience, and what this tool does is basically what I usually do myself when performing Postgres health checks under heavy load, but in an automated fashion :)

We chose the approaches and queries that we run on production servers very carefully, and the tool is used under heavy loads (tens of thousands of TPS) daily.

There are certain places that can be heavy. For example, if you have 1 million indexes (yes, it happens sometimes), the SELECT query for bloat analysis will be slow. With default settings, the tool limits itself by setting `statement_timeout = '30s'` (adjustable with the CLI option `--statement-timeout`). So in databases with a huge number of objects, you should expect the F004 and F005 reports to be missing.
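The timeout guard described above is plain Postgres behavior: `statement_timeout` can be set per session, and any statement exceeding it is canceled with an error rather than left running. A minimal sketch (the catalog scan here is just a stand-in for a heavy query like a bloat estimate):

```sql
-- Cap every statement in this session at 30 seconds.
SET statement_timeout = '30s';

-- If a heavy query exceeds the cap, Postgres cancels it with:
--   ERROR: canceling statement due to statement timeout
-- and the corresponding report section simply comes back empty.
SELECT count(*) FROM pg_class WHERE relkind = 'i';

-- Restore the server default afterwards.
RESET statement_timeout;
```

Because the setting is per session, the tool's cap never affects other connections to the same database.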