This works until you have an A->B->C hierarchy of entities and want to monitor the subtree rooted at a specific A. Then the WAL for table C, even with REPLICA IDENTITY FULL, will not tell you whether a change is relevant. At that point you need to do a JOIN anyway, so you might as well just use a trigger and NOTIFY instead.
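Roughly, such a trigger might look like this. This is only a sketch: the tables (a, b, c), the watched_roots table, the b_id column, and the channel name are all hypothetical, and `EXECUTE FUNCTION` is Postgres 11+ syntax.

```sql
-- Hypothetical schema: c belongs to b, b belongs to a.
-- Notify only when a change to c falls under a watched root in a.
CREATE OR REPLACE FUNCTION notify_c_change() RETURNS trigger AS $$
DECLARE
  root_id int;
BEGIN
  SELECT b.a_id INTO root_id FROM b WHERE b.id = NEW.b_id;
  IF root_id IN (SELECT a_id FROM watched_roots) THEN
    PERFORM pg_notify('c_changes',
      json_build_object('id', NEW.id, 'root', root_id)::text);
  END IF;
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER c_change AFTER INSERT OR UPDATE ON c
  FOR EACH ROW EXECUTE FUNCTION notify_c_change();
```

The JOIN happens once, inside the trigger, so listeners only ever see relevant changes.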
Two of the other reasons for this over triggers are also misleading:
- Setting up triggers can be automated easily.
- True, you only use one connection to the database, but now you have to operate this app. You could also just run pgbouncer.
This reminds me of a project I did a few years ago:
"Stockify is a live music service that follows the mood of the stock markets. It plays happy music only when the stock is going up and sad music when it’s going down."
https://vimeo.com/310372406
(it's of course a parody, but I made a functional prototype)
(But: I think live db-activity rendered to music as a kind of monitoring mechanism is potentially way cooler than just getting db updates over a web socket.)
For 2 you could look at Meteor, which does a similar thing with Mongo. It's not relational, but with Supabase you also don't really get the advantages of a relational db, since the update events are about single tables.
I'm really curious to see how this ends up doing vs. Hasura.
PostgREST (which is basically what Supabase is, obviously with other value-adds) or Hasura, which exposes a GraphQL server that interfaces with Postgres.
Personally I prefer GraphQL, as there's more tooling around it compared to PostgREST, but it's interesting to see. If Supabase were GraphQL, you could just use a subscription here.
I'd be curious to know why Supabase didn't go with GraphQL.
We're not opposed to GraphQL at all. We use PostgREST because it's very low-level: a thin wrapper around Postgres. It uses Row Level Security and works nicely with stored procedures and Postgres extensions (like PostGIS).
Philosophically we want to focus more on database-centric solutions rather than middleware-centric solutions. This means a slower time-to-market, but it's beneficial because you have a "source" right at the bottom of your stack, and you can connect whatever tools you want to it (including Hasura!)
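As an illustration of the Row Level Security point, a policy like this is all it takes to scope queries per user. The table and claim names are hypothetical, and the exact setting PostgREST exposes the JWT under varies by version:

```sql
-- Hypothetical table; PostgREST runs queries under the authenticated
-- role, so this policy restricts each user to their own rows.
ALTER TABLE todos ENABLE ROW LEVEL SECURITY;

CREATE POLICY todos_owner ON todos
  USING (owner = current_setting('request.jwt.claims', true)::json ->> 'sub');
```

Because the policy lives in the database, it applies no matter which client connects, not just the REST layer.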
> I'd be curious to know why supabase didn't go with GraphQL.
I'm guessing because GraphQL is very limited compared to SQL.
From reading their docs [1] it seems they have a JS API for SQL with support for nested data a la GraphQL. Not sure if it's their own or comes from some other library.
[1] https://supabase.io/docs/reference/javascript/select
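For what it's worth, the nested selects match PostgREST's resource embedding, which the JS client passes through. Roughly, with hypothetical table and column names:

```
GET /artists?select=id,name,albums(title)
```

Each artist row comes back with its matching albums nested as a JSON array, which is what gives the GraphQL-like shape without a GraphQL server.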
So one big difference is that this tails the write-ahead log (acting as a replication client), while Hasura polls. Hasura makes that polling efficient by multiplexing transparently: it sends one query to poll N subscriptions on the same table with the same shape.
But once you're polling a large percentage of your whole data set, the log-tailing approach has a clear advantage.
I love it!
The only issue is that you can't consume the history of changes.
But this is good for caching a materialized view on the client:
1- Subscribe to events and buffer them.
2- Run the SQL query for your materialized view.
3- Apply all buffered events, and each new event as it arrives, to incrementally update the materialized view.
This way the (slow/expensive) materialized-view query doesn't need to be run periodically, and your cache is always fresh without needing to set a TTL.
If your websocket connection gets disconnected, drop the materialized view and repeat from step 1.
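A minimal sketch of those steps in Python. The event shape (a kind plus a row carrying an "id" key) and all names here are hypothetical, not Supabase's actual payloads:

```python
# Keep a client-side cache of a slow query's result fresh by buffering
# change events while the initial query runs, then replaying them.

def apply_event(view, event):
    """Apply one change event to the cached view (a dict keyed by row id)."""
    kind, row = event
    if kind == "DELETE":
        view.pop(row["id"], None)
    else:  # INSERT and UPDATE both upsert the row
        view[row["id"]] = row

def build_view(run_query, buffered):
    """Steps 2 and 3: run the expensive query, then replay buffered events."""
    view = {row["id"]: row for row in run_query()}
    for event in buffered:
        apply_event(view, event)
    return view

# Step 1 happens before this: subscribe first and append events to
# `buffered`, so nothing is lost while the initial query runs.
buffered = [("UPDATE", {"id": 1, "total": 11}),
            ("INSERT", {"id": 3, "total": 7})]
snapshot = lambda: [{"id": 1, "total": 10}, {"id": 2, "total": 5}]
view = build_view(snapshot, buffered)
```

Subscribing before the initial query is the important bit: events that arrive mid-query are buffered rather than dropped, and upserting makes replaying an already-applied change harmless.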
They use websockets and a set of plug-ins to support Redis, RabbitMQ, and many other things. However, Apollo is planning to remove subscriptions in the next major release.
Neat! Is the idea to subscribe to the websocket directly from frontend code or from an application server?
It sounds like it's the latter, which seems like an unusual use case for websockets. (or at least I can't think of anything else that does ws for a server-to-server API)
As far as getting cheap framing and bidirectional communication over an HTTP port goes, there's nothing to stop you using websockets server-to-server. For example, many crypto exchanges publish their feeds this way.
For me, this looks like the perfect implementation of a very scalable config service, where the read RPS can get very high but the config values rarely change. The consumers of the config service can be either frontend or backend applications, thus websockets are a great choice.
Yeah! Front-end apps can skip the server and get data from the db directly. There's OAuth2 setup, and you can authorize users to have CRUD access to tables, or even as granular as rows.
Sure can! I contributed on this project a few months ago. Hoping to use it soon on a project.
I implemented a portion of CRUD with a Vue.js app. It's a pretty neat setup. Just authorize with OAuth2, and then if you have permissions you're good to go!
Do you know if there is something similar for SQLite? I've been looking for a while and wonder if maybe listening to the WAL is the best option. There are hooks to tie into, but the database I'm looking at isn't mine (it's local on my machine, for another application), so I don't know if I should be futzing around with it.
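One low-tech option that doesn't require touching the other application's database at all: poll `PRAGMA data_version`, which changes whenever a *different* connection commits a write. A sketch (the file path and table are stand-ins):

```python
# Detect external writes to a SQLite database by polling PRAGMA
# data_version. No triggers or schema changes needed, so it's safe on a
# database owned by another application.
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "app.db")

other_app = sqlite3.connect(path)      # stands in for the other program
other_app.execute("CREATE TABLE t (x)")
other_app.commit()

watcher = sqlite3.connect(path)

def data_version(conn):
    return conn.execute("PRAGMA data_version").fetchone()[0]

before = data_version(watcher)
other_app.execute("INSERT INTO t VALUES (1)")
other_app.commit()
changed = data_version(watcher) != before  # True after the external write
```

This only tells you *that* something changed, not what; for row-level detail you'd still have to diff or re-query, but it avoids futzing with a database you don't own.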
Looks to me like it uses the same trick as Debezium (and similar projects) - listening to the replication stream and turning it into usable information - but it's a totally separate implementation built on top of Elixir: https://github.com/supabase/realtime/blob/master/server/lib/...
For a project that small you can use NoSQL without issue because you can always scan the whole thing when needed. For larger data sets you want the relational model unless you're very sure you don't need it (and most people who think they're in this category are just wrong and will find out later the hard way.)
This is something I have an immediate need for (and was about to build myself) to use with a PostgreSQL/TimescaleDB instance. Is it possible to have new subscriptions get up to an hour of historical data before streaming starts, or even better, to supply a parameter to receive historical data from a timestamp?
skyde | 4 years ago
This is how I design all my table schemas, and it makes database partitioning easier too.
karambahh | 4 years ago
The actual purpose is way cooler; looks like a great tool.
jmull | 4 years ago
I guess someone needs to do it now, hm...
mark-wagner | 4 years ago
Listen to packets on /dev/audio (warning: can be noisy).
https://docs.oracle.com/cd/E23823_01/html/816-5166/snoop-1m....
pier25 | 4 years ago
With Fauna you still get vendor lock-in though.
tablatom | 4 years ago
https://nhost.io/
kitd | 4 years ago
Can you go in the other direction, i.e. push via WS into a table? Obviously you'd need the correct auth.
I had an idea of doing something similar with MQTT. Data at rest and data in motion in 1 (or maybe 1.5) components.
yawnxyz | 4 years ago
Which one would you recommend for small-ish (~100 user, <10k records) projects?
gregw2 | 4 years ago
Using this tech or something more primitive?
I want to get some end-user feedback on predictive model outputs...