
Show HN: I built this Postgres logger

94 points | kvaranasi_ | 2 years ago | rocketgraph.io

Hey HN,

Some of you were really interested in Postgres logging with pgAudit in my previous post here: https://news.ycombinator.com/item?id=37082827

So I built this logger using pgAudit to show what can be done with Postgres auditing: https://rocketgraph.io/logger-demo

It offers some powerful features like "get me all the CREATE queries that ran in the past hour". The logs are generated by an AWS RDS instance running on my Rocketgraph account, then forwarded to CloudWatch for complex querying. In the future we can connect these logs to Slack, so you get a Slack alert when a developer accidentally DROPs a table.

If you like my work, please check it out here: https://github.com/RocketsGraphQL/rgraph

And if you want this logging on your own Postgres instance, use https://rocketgraph.io/ and set up a project; pgAudit is installed automatically.
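For context, this is roughly the manual setup that gets automated here. A sketch only: on RDS, `shared_preload_libraries` and the `pgaudit.log` setting actually go through a DB parameter group rather than `ALTER SYSTEM`.

```sql
-- pgAudit must be preloaded before the extension can be created
-- (postgresql.conf): shared_preload_libraries = 'pgaudit'
CREATE EXTENSION pgaudit;

-- Log all DDL (CREATE / ALTER / DROP) statements
ALTER SYSTEM SET pgaudit.log = 'ddl';
SELECT pg_reload_conf();
```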

45 comments

[+] vermon|2 years ago|reply
pgAudit is pretty cool, but it's a bit of a hassle for us currently: it keeps logging SELECT FOR UPDATEs even though SELECT is not allowed for the auditor, so if you implement stuff like simple queues with continuous polling, it fills up the audit log with junk pretty fast. I opened a PR, though, so hopefully it gets fixed at some point.
[+] kvaranasi_|2 years ago|reply
Interesting, I did not know that. I'll have to dig into the source code then. Do you know any alternative logging tools that don't have this issue?
[+] danpalmer|2 years ago|reply
This sounds like a desirable feature. SELECT FOR UPDATE results in a write where SELECT does not, and distinguishing between reads and writes is definitely something I'd want to be able to do here. Unless I've misunderstood the problem – I'm not sure I fully understand your PR.

Perhaps being able to exclude those as a separate category would be good?
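pgAudit's session logging is driven by statement classes, so the disagreement above comes down to which class SELECT FOR UPDATE falls under. A hedged sketch of the relevant setting (class names are real pgAudit classes; the exact classification of SELECT FOR UPDATE is precisely the point of contention):

```sql
-- Log writes and DDL, but not plain reads.
-- Classes are comma-separated; a '-' prefix subtracts a class.
ALTER SYSTEM SET pgaudit.log = 'write, ddl';
SELECT pg_reload_conf();
```

If pgAudit treats SELECT FOR UPDATE as a write (it does take row locks), it still shows up under this setting, which is the behaviour vermon's queue-polling workload runs into.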

[+] e12e|2 years ago|reply
This looks great:)

> Use https://rocketgraph.io/ And setup a project.

What will the price and terms be?

[+] kvaranasi_|2 years ago|reply
1 week free trial. And for HN users I can give more time on the free trial. It'll be $50/mo, but it is worth a lot more.

But hey, you can sign up and create a project without a credit card. Once the free trial ends, you can pay with Stripe. Let me know how you like it.

[+] bdcravens|2 years ago|reply
This is just a front-end to CloudWatch Log Insights, right?
[+] kvaranasi_|2 years ago|reply
Sort of. I created this page just to show what can be done with pgAudit. But what I do at Rocketgraph is more: I provide authentication, Postgres, and a GraphQL console. The Postgres logger is an extension for demo purposes. In the future I plan to add Slack alerts when a developer accidentally DROPs a database. And much more. Configuring AWS RDS is a pain.

I wrote about it here: https://blog.rocketgraph.io/posts/install-pgaudit

That's why I automated the process. Now every project comes with Postgres and pgAudit configured, Authentication, GraphQL and front-end SDKs right out of the box so it will be easier to develop web applications.

[+] rsyring|2 years ago|reply
> In the future we can connect these logs to slack so you can get slack alerts when a developer accidentally DROPs a table.

What can you do about getting me a Slack notice BEFORE a developer accidentally drops a table? ;)

[+] nswanberg|2 years ago|reply
Triboulet: "A noble has threatened to hang me!"

The Monarch: "Don't worry! If he hangs you I'll have him beheaded fifteen minutes later."

Triboulet: "Well, would it be possible to behead him 15 minutes before?"

[+] palijer|2 years ago|reply
Haha, exactly, but if you have developers who can drop production tables without the org knowing in advance and reviewing the change, you already have a big problem.

All our db operations like this go through our regular code review process for the db maintainers team, and instead of dropping tables, we just rename them with a prefix "to_be_dropped" as a scream test, then actually drop them a month later or so.

Recovery is much simpler for renaming than dropping.
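The scream test above is a pair of one-liners (the table name is hypothetical):

```sql
-- Step 1: rename, then wait for screams
ALTER TABLE user_sessions RENAME TO to_be_dropped_user_sessions;

-- Step 2: a month later, if nobody screamed
DROP TABLE to_be_dropped_user_sessions;
```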

[+] koliber|2 years ago|reply
If you want to find the developer who will drop your table by accident, ask your devs to raise their hands if they ever dropped a table by accident. Those that don’t raise their hand are the ones you watch carefully.
[+] codegeek|2 years ago|reply
"BEFORE a developer accidentally drops a table?"

Ideally, you don't want developers to have access to destroy data, especially via things like DROP. A lot of small businesses may not have the resources for separate DBAs etc., but the rule still applies.

This is more a control problem than a code problem. Having said that, you could write triggers to stop DELETEs or DROPs from happening if you cannot control the access.
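A regular trigger can block DELETEs on a given table. A minimal sketch (table name hypothetical); note that blocking DROP itself needs an event trigger instead, since plain triggers only fire on DML:

```sql
CREATE OR REPLACE FUNCTION forbid_delete()
RETURNS trigger LANGUAGE plpgsql AS $$
BEGIN
  -- raising an error aborts the statement and its transaction
  RAISE EXCEPTION 'DELETE on % is not allowed', TG_TABLE_NAME;
END;
$$;

CREATE TRIGGER no_delete
  BEFORE DELETE ON accounts          -- hypothetical table
  FOR EACH STATEMENT EXECUTE FUNCTION forbid_delete();
```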

[+] kvaranasi_|2 years ago|reply
Hey wait, you have a point though. We can put our own locks onto tables so that no one can delete a particular table. Let me think it over. Saving this comment for later. Thanks for the idea, man.
[+] _flux|2 years ago|reply
Well, I guess you could use https://www.postgresql.org/docs/16/sql-createeventtrigger.ht... to trigger on sql_drop

Maybe it could send a Slack message which you (or someone else) would need to react to (I think this site malforms that white checkmark unicode character though..) before it continues, or it times out after a while with an error!
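A minimal sketch of that event trigger (function and channel names are hypothetical). One caveat: the trigger fires inside the dropping transaction, so "pause and wait for a Slack reaction" would need an external listener process, e.g. one subscribed to the NOTIFY channel below:

```sql
CREATE OR REPLACE FUNCTION notify_on_drop()
RETURNS event_trigger LANGUAGE plpgsql AS $$
DECLARE
  obj record;
BEGIN
  -- pg_event_trigger_dropped_objects() is only callable
  -- from a sql_drop event trigger
  FOR obj IN SELECT * FROM pg_event_trigger_dropped_objects() LOOP
    IF obj.object_type = 'table' THEN
      -- a listener on this channel could forward to Slack
      PERFORM pg_notify('table_drops', obj.object_identity);
    END IF;
  END LOOP;
END;
$$;

CREATE EVENT TRIGGER alert_on_drop ON sql_drop
  EXECUTE FUNCTION notify_on_drop();
```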

[+] lazyant|2 years ago|reply
Like others have said, this is a control problem in prod (not using migrations in code, and renaming instead of dropping). Anyway, as a fun exercise to get exactly what's asked, I wonder if https://tembo.io/blog/introducing-pg-later/ could be used together with a trigger to trap the DROP, alert on Slack, and run it later if not aborted.
[+] alFReD-NSH|2 years ago|reply
Don't give them the DROP permission and have them go through an (automated) process to elevate their privileges when they need it.
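One wrinkle: Postgres has no separate DROP privilege — only a table's owner (or a superuser) can drop it — so the usual way to implement this is to park ownership on a role developers don't normally hold. A sketch with hypothetical role and table names:

```sql
CREATE ROLE app_owner NOLOGIN;   -- owns the schema objects
CREATE ROLE dev LOGIN;           -- day-to-day developer role

ALTER TABLE accounts OWNER TO app_owner;
GRANT SELECT, INSERT, UPDATE, DELETE ON accounts TO dev;

-- dev can read and write but cannot DROP TABLE accounts;
-- the audited elevation step would be:
--   GRANT app_owner TO dev;
```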
[+] dom_hutton|2 years ago|reply
Modify your schema through migrations that go through code review :p