Supabase Vault

198 points | traviskuhl | 3 years ago | supabase.com

70 comments

[+] zaroth | 3 years ago
One thing I think is missing from this write-up is a walkthrough of how the restore process will work with encrypted data under pgsodium.

Namely, what happens when you first restore some data into a new Postgres instance that booted with its own randomly generated root key (the wrong key), and how are you supposed to patch in the correct key so you can start reading secrets again?

Also, what does the decrypted view look like if you try to read it with the wrong key loaded?

Do you have to worry about a race condition where you boot an instance with some encrypted data but forget to put the key file in place, end up with a new random key, save some new data, and now have a mix of rows encrypted with two different keys? Or will the whole subsystem block if there's data stored that can't be decrypted with the resident key?

[+] michelpp | 3 years ago
> Namely, what happens when you first restore some data into a new Postgres instance that booted with its own randomly generated root key (the wrong key), and how are you supposed to patch in the correct key so you can start reading secrets again?

We restore your original key into new projects. There is also work in progress on accessing the key through the API and CLI.

> Also, what does the decrypted view look like if you try to read it with the wrong key loaded?

The decryption will fail (pgsodium will throw an error).
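Conceptually, the reason decryption fails cleanly is that libsodium-style encryption is authenticated: the ciphertext carries a MAC tag, so the wrong key produces a tag mismatch instead of silently returning garbage. A toy, stdlib-only Python sketch of that idea (this is not pgsodium's actual construction; the XOR "cipher" and function names are purely illustrative):

```python
import hashlib
import hmac
import os

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    # Toy cipher: XOR against a hash-derived keystream (plaintext <= 32 bytes).
    nonce = os.urandom(16)
    stream = hashlib.sha256(key + nonce).digest()
    ct = bytes(p ^ s for p, s in zip(plaintext, stream))
    # The MAC tag binds the ciphertext and nonce to the key.
    tag = hmac.new(key, nonce + ct, hashlib.sha256).digest()
    return nonce + ct + tag

def decrypt(key: bytes, blob: bytes) -> bytes:
    nonce, ct, tag = blob[:16], blob[16:-32], blob[-32:]
    expected = hmac.new(key, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        # A wrong key (or tampered data) fails loudly, like pgsodium raising an error.
        raise ValueError("decryption failed: wrong key or corrupted data")
    stream = hashlib.sha256(key + nonce).digest()
    return bytes(c ^ s for c, s in zip(ct, stream))

right_key = os.urandom(32)
wrong_key = os.urandom(32)
blob = encrypt(right_key, b"db password")
assert decrypt(right_key, blob) == b"db password"
try:
    decrypt(wrong_key, blob)
except ValueError as err:
    print(err)  # decryption failed: wrong key or corrupted data
```

The real library uses XSalsa20-Poly1305 via libsodium; the point here is only that authenticated encryption turns "wrong key" into an explicit error rather than plausible-looking plaintext.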

> Do you have to worry about a race condition where you boot an instance with some encrypted data but forget to put the key file in place, end up with a new random key, save some new data, and now have a mix of rows encrypted with two different keys? Or will the whole subsystem block if there's data stored that can't be decrypted with the resident key?

There's no race in the system; your key is put in place by us before the server boots.

Thanks for the feedback! I'll put some more thought into your question about verifying that a key is the original before you use it.

[+] brap | 3 years ago
I’m really impressed with everything Supabase does, but… they market themselves as the “open source alternative to Firebase”, which is great, mainly because you don’t have to worry about vendor lock-in (to an extent).

Yet one of the main selling points of Firebase (at least in my humble opinion) is that you don’t have to concern yourself at all with implementation details and stuff like that. The learning curve is small, you get a database without having to think about databases.

Yet everything I read about Supabase is heavily centered around Postgres, it seems like you really need to know the ins and outs of the database. I wouldn’t really feel comfortable adopting Supabase without taking a class in Postgres first.

I’m wondering if Supabase plans to stay “low level” or give a higher level of abstraction to those who want it.

Edit: just want to clarify, I’m not saying “sql bad”, I’m saying there’s a not-so-small market (mostly beginners) who would see this as a big adoption barrier, which I think is understandable. I don’t know if Supabase wants to (or even should) cater to both markets.

[+] bastawhiz | 3 years ago
My experience is that Firebase requires you to understand the ins and outs of Firebase, which has no real equivalent. Firebase is notorious for pathological cases, performance cliffs, and other "gotchas"; it isn't magic. Knowing what's going to perform poorly or become unmaintainable or otherwise cause problems requires you to have either prior knowledge or done something wrong and learned the hard way. At least with Supabase, if you know Postgres, you can bring that knowledge with you.
[+] aidos | 3 years ago
It’s funny reading that comment from the other side of the fence. I’ve not looked closely at Supabase so I have no real opinion on it, but hearing someone say that you need to know Postgres to work with it is reassuring to me.

Edit: don’t take that as a criticism, just more of an observation that there’s a target audience for which it probably hits a sweet spot.

[+] babbledabbler | 3 years ago
This is true, but for me the transparent abstraction over Postgres is actually a big plus, though I can see that people who don't know Postgres or SQL would be a little intimidated. I will say that Postgres is the best SQL DB I've worked with, and it has become my go-to.

In my experience there's no free lunch when it comes to high-level abstractions over complicated systems. Also, having the mountain of docs and info on the net about Postgres to draw upon is nice to have in your back pocket. Of course the tradeoff is that you need to know SQL, but I think that's fair.

I would like to see some more improvements to the Supabase JS client API, but I hope they don't hide the fact that there's a relational DB under the hood, and that they keep allowing advanced access to the underlying Postgres API.

I could see them making a NoSQL Supabase on top of a Mongo-type DB, like AWS does with DocumentDB, or even on Postgres JSONB fields. That would be a nice feature. You could probably get a lot of mileage out of Postgres JSONB fields.

I haven't used Firebase much except for toying around with it, but I think it's certainly a good option for a simple NoSQL DB, given the simplicity and speed of ramping up. The only thing with Firebase is that the cost is prohibitive at larger scale, and you'll be coupled to them by the time you get to that point, so it could come as a rude awakening when your app starts to get a lot of users.

[+] cpursley | 3 years ago
You just learn Postgres/SQL as you go. And I've gotten much better at it (schema design, functions, querying) after adopting Hasura (similar idea as Supabase). It's an investment that will pay off for any developer and will outlast whatever cool framework of the month.

But yeah, there's room for more higher-level abstractions on top of SQL databases. Metabase actually has a nice UI for building queries. Maybe something like this would be useful in Supabase: https://www.metabase.com/docs/latest/questions/query-builder...

[+] justsomeuser | 3 years ago
With Firebase you have a team managing the service uptime.

When I last checked, Supabase is a group of processes that you manage yourself.

This means that:

- A. If something goes wrong or you need to customise something, it can be quite complex to fix, since you have all these different processes and codebases to understand. The sum of depended-on lines of code across all the open source codebases in Supabase would be massive.

- B. You are tightly locked in. Once you code against the Supabase APIs you will not be able to move your app off of it. Other APIs lock you in too, but because Supabase does so many things, you would need to replace a lot of functionality all at once to move away.

[+] kabirgoel | 3 years ago
Agreed. I used Supabase for a fairly simple project and felt like I had to know a lot about Postgres to implement anything. If you’re building something yourself, I feel like Firebase is still the safer bet. I’m guessing Supabase really shines when you’re building a startup or have a team.
[+] mavelikara | 3 years ago
What are good resources to learn the "ins and outs" of Postgres?
[+] jononomo | 3 years ago
In my humble opinion, if you're a software engineer in the modern world, then learning Postgres is about as fundamental to your job as learning to dribble would be to a job as an NBA basketball player. It is just the foundation of almost everything else.
[+] jackconsidine | 3 years ago
I'm so excited for Supabase. As soon as they move realtime subscriptions out of alpha/beta, I will replace Firebase on all new projects. The Firebase/Firestore analog, snapshot listeners, gives your application a real-time backend for free and simplifies state management drastically, since your subscriptions are your store.

Supabase being built on SQL is interesting to me. I love PSQL, and the row-level security rules are incredible. But the historical SQL vs. NoSQL debate involves the trade-offs of Consistency, Availability, and Partition tolerance [0]. With Firebase (and NoSQL generally) you lose consistency, and you get a bit of redundancy by virtue of using onWrite listeners as opposed to joins. That model scales really well since it's amenable to seamless sharding. What will scaling a Supabase backend look like?

[0] https://www.bmc.com/blogs/cap-theorem/

[+] nicoburns | 3 years ago
Hmm... I feel like secrets are the one thing I don't want to be in Postgres... because I want to store my Postgres credentials in the secrets vault! And I certainly don't want to have to update the configuration for every service which accesses my secrets vault every time I upgrade my Postgres database (and the access URL changes).

IMO nobody's doing secret management for small companies / products particularly well, so there's definitely a niche to be filled here. But I'm not quite convinced this is it...

[+] michelpp | 3 years ago
> Hmm... I feel like secrets are the one thing I don't want to be in Postgres... because I want to store my Postgres credentials in the secrets vault! And I certainly don't want to have to update the configuration for every service which accesses my secrets vault every time I upgrade my Postgres database (and the access URL changes).

Password storage is a somewhat different problem. If you're checking passwords, you just need to know that a submitted password is authentic, not the actual password itself, so it's common to use salted hashing for this (pgsodium exposes all of the libsodium password and short hashing functions if you want to dig further). Your best bet here is to use SASL with SCRAM auth for Postgres:

https://www.postgresql.org/docs/current/sasl-authentication....
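The hash-vs-encrypt distinction can be shown with nothing but the Python standard library; a hypothetical sketch (the scrypt parameters are illustrative, not a tuning recommendation):

```python
import hashlib
import hmac
import os

# Password *verification*: store a salted, slow hash. The password itself
# is never recoverable from what's stored.
def hash_password(password: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def check_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("hunter2")
assert check_password("hunter2", salt, digest)      # authentic
assert not check_password("hunter3", salt, digest)  # rejected
# Note there is no decrypt(): hashing is one-way, which is exactly why it
# suits password checks but not secrets whose value you later need.
```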

Secret storage is more about encrypting and authenticating data whose value is useful for you to know. For example, you need the actual credit card number to process a payment (waves hand; this is a broad subject, and some payment flows do not require knowledge of the CCN), but you want to make sure that number is stored encrypted on disk and in database dumps. That's the use case the Vault is hitting.

We also have some upcoming support for external keys that are stored encrypted. For example, you can store your Stripe webhook signing key encrypted in pgsodium and reference it by a key id that can be passed to `pgsodium.crypto_auth_hmacsha256_verify()` to validate a webhook callback, instead of handling the raw key itself.
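That verify-by-key-id flow boils down to an HMAC check. A hypothetical sketch of the webhook-side math in Python (the inline `signing_key` stands in for a key that would really live encrypted in the vault; the function names are illustrative, not the Stripe or pgsodium API):

```python
import hashlib
import hmac

# Stand-in for a signing key that would be stored encrypted and referenced
# by key id, never exposed in application code.
signing_key = b"whsec_example_not_a_real_key"

def sign(payload: bytes) -> str:
    return hmac.new(signing_key, payload, hashlib.sha256).hexdigest()

def verify(payload: bytes, signature: str) -> bool:
    # Constant-time comparison avoids leaking tag bytes via timing.
    return hmac.compare_digest(sign(payload), signature)

payload = b'{"type": "payment_intent.succeeded"}'
sig = sign(payload)
assert verify(payload, sig)
assert not verify(b'{"type": "tampered"}', sig)
```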

[+] hackandtrip | 3 years ago
Ideally, you could have a Postgres instance specifically dedicated to secrets; I don't see why you should couple sensitive and non-sensitive data. Many OSS services like HashiCorp Vault do just that: you give Vault a backend (which can be a Postgres DB, just like the one Supabase is offering) and it will use that backend to save the secrets.

You could then use, e.g., OpenID to connect your application to the specific Supabase instance holding those secrets.

[+] chucky_z | 3 years ago
HashiCorp Vault is always my go-to, even for small companies. It seems like too much, but it's really not. A single instance is scalable enough to handle quite a bit of traffic.

Another good alternative, if you need something more SaaS-y, is the 1Password API product.

[+] byteshock | 3 years ago
I’m confused about why secret management is considered secure. Maybe I’m missing something.

Why is letting a third party manage your secrets secure? If that third party gets compromised, they now have access to all your secrets. Amazon or other companies' employees can also view your secrets.

If your server gets compromised, the secrets accessible via that server are also compromised. Isn't that the same impact as just keeping the secrets on your server? Maybe worse, if your permissions are broad. You're merely adding an extra step to get the secret from your secret manager.

[+] yomkippur | 3 years ago
Exactly. We would stick to AWS Secrets Manager.
[+] tmd83 | 3 years ago
What I don't understand (perhaps I haven't found the right docs to read) is how to safeguard the secret if a client machine of the secret is compromised. Say I have a web server that connects to the database, and the database credentials are stored in some separate vault. If someone gets access to the web server machine, can they not access the secret from there?
[+] freeqaz | 3 years ago
So I've actually spent about a year of my life working to solve this exact problem. Specifically: how do you prevent a single point of failure from leaking everything sensitive in a database?

It turns out that it's a pain in the rear, but it's possible. You can read through the docs about the design on the site[0].

The parts that I haven't implemented yet, and that limit its utility in production, are around searching the encrypted data (which requires a second vault using asymmetric encryption) and some more in-depth disaster recovery (secure token recovery).

Here is a link to the GitHub[1] for it all.

0: https://www.lunasec.io/docs/pages/lunadefend/overview/introd...

1: https://github.com/lunasec-io/lunasec/tree/master/lunadefend

[+] michelpp | 3 years ago
If you give a database client access to the decrypted secrets, then they have them. What the client will not have access to is the hidden root key, inaccessible from SQL, that pgsodium uses to encrypt and decrypt data.
[+] vbezhenar | 3 years ago
Are there any solutions for Postgres database encryption at rest (other than OS-level encryption)?
[+] michelpp | 3 years ago
The Supabase Vault is encryption at rest: the column is stored encrypted in the database, in WAL streams, and in backup dumps. This is usually more efficient than full disk encryption, and it allows you to control who sees decrypted data on a role-by-role basis using normal Postgres security GRANTs.

With full disk encryption you also only get encryption on that one disk: if you are doing WAL shipping, the disk you store the db on may be encrypted, but the WAL files you ship will not be, so you have to make sure those files are encrypted through a full chain of custody. With the Vault, the data starts off encrypted before going into the WAL stream, and downstream consumers would also need to acquire the hidden root key to decrypt it. We're working on making that process seamless but also secure.

[+] wizwit999 | 3 years ago
Why put everything in your database?
[+] kiwicopple | 3 years ago
All data goes in _a_ database; we're just providing an extension in case you put sensitive data in yours. Developers often store sensitive data, and this extension ensures that it's encrypted at rest so that it doesn't leak into logs and backups.

Specifically for Supabase customers, we have another extension called pg_net, which can send database changes to external systems asynchronously (we call these "database webhooks"). One of those systems could be, for example, AWS Lambda, but to call it we need a Lambda execution key. The Vault allows users to safely store this key inside their database, and because it's co-located with the data, the payload can be sent immediately via a trigger (and end-to-end encrypted).

The Vault will also expose a lot of libsodium functions that are useful to developers: encrypting columns, end-to-end encryption, multi-party encryption for things like chat apps, etc.

[+] throwgawag1 | 3 years ago
> Vault is a thin usability-layer on top of pgsodium.

Cloudflare and DuckDuckGo also add a bunch of names to routine things that already exist. It's better to just not name it.

[+] mahmoudimus | 3 years ago
Sorry, can you help clarify your comment? Do you mean that it's better to not call this "Supabase Vault" and just say "Secrets Management available in Supabase" ?