throwaway_bad|6 years ago
Instead we have a bajillion layers of CRUD all in slightly different protocols just to do the same read or write to the database.
koolba|6 years ago
This breaks down quickly once you have data that could become private or mutate rather than append.
tehbeard|6 years ago
The "one db per user" model for private data made using other features like views etc more difficult when you have to upgrade,edit,remove them.
Mutability wasn't really a problem, either present the conflicts to user and pick one or write code to merge if possible.
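The merge-or-ask approach above can be sketched as a small function (hypothetical document shapes, not any particular replication library): merge automatically when the two sides changed different fields, and surface real conflicts for the user to pick.

```javascript
// Minimal sketch: three-way merge of two conflicting revisions of a document.
// Fields changed on only one side merge automatically; fields changed to
// different values on both sides are returned as conflicts for the user.
function mergeRevisions(base, local, remote) {
  const merged = { ...base };
  const conflicts = [];
  const keys = new Set([...Object.keys(local), ...Object.keys(remote)]);
  for (const key of keys) {
    const localChanged = local[key] !== base[key];
    const remoteChanged = remote[key] !== base[key];
    if (localChanged && remoteChanged && local[key] !== remote[key]) {
      conflicts.push({ key, local: local[key], remote: remote[key] });
    } else {
      merged[key] = localChanged ? local[key] : remote[key];
    }
  }
  return { merged, conflicts }; // non-empty conflicts => ask the user
}
```

A non-empty `conflicts` list is where "present the conflicts to the user" kicks in; everything else merges silently.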
rockwotj|6 years ago
megous|6 years ago
liuliu|6 years ago
Take a look at GraphQL: its central promise is to let the client choose the optimal data it needs (often denormalized through nested GraphQL queries) and receive it in one batch.
That is not to say there shouldn't be a simple replica. It is just that if we want a simple replica, we should have a server-side, somewhat-denormalized mirrored representation rather than just the raw server data models.
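A toy illustration of that promise (hypothetical schema and in-memory data, not a real GraphQL server): the client names the nested, denormalized shape it wants, and a resolver assembles it from normalized stores in a single round trip.

```javascript
// The client-chosen shape: one request replaces several REST calls.
const FEED_QUERY = `
  query Feed($userId: ID!) {
    user(id: $userId) {
      name
      tweets { text likes }
    }
  }
`;

// Hypothetical normalized server-side stores.
const users = { u1: { name: "ada" } };
const tweets = { u1: [{ text: "hi", likes: 3 }] };

// What a resolver effectively does: join normalized data into exactly
// the nested shape the query asked for.
function resolveFeed(userId) {
  const user = users[userId];
  return { user: { name: user.name, tweets: tweets[userId] ?? [] } };
}
```

The denormalization lives on the server, so the client's replica stays a simple mirror of the shape it requested.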
dustingetz|6 years ago
If immutable fact/datom streams with idealized cache infrastructure become a thing (and architecturally I hope they do), they are going to need DRM to be accepted by both users and businesses.
gwbas1c|6 years ago
As soon as your server state is larger than whatever your client can handle, the whole metaphor breaks down.
ralusek|6 years ago
gnur|6 years ago
dsun180|6 years ago
sergiotapia|6 years ago
vlasky|6 years ago
In 2015, my business implemented a Meteor-based real-time vehicle tracking app utilising Blaze, Iron Router, DDP, and Pub/Sub.
Our Meteor app runs 24hrs/day and handles hundreds of drivers checking in every few seconds whilst publishing real-time updates and reports to many connected clients. Yes, this means Pub/Sub and DDP.
This is easily handled by a single Node.js process on a commodity Linux server, consuming a fraction of a single core's available CPU power during peak periods and using only a few hundred megabytes of RAM.
How was this achieved?
We chose to use Meteor with MySQL instead of MongoDB. When using the Meteor MySQL package, reactivity is triggered by the MySQL binary log instead of the MongoDB oplog. The MySQL package provides finer-grained control over reactivity by allowing you to provide your own custom trigger functions.
Accordingly, we put a lot of thought into our MySQL schema design and coded our custom trigger functions to be as selective as possible, preventing SQL queries from being needlessly executed and avoiding wasted CPU, I/O and network bandwidth from publishing redundant updates to the client.
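The selectivity idea can be sketched roughly as follows (hypothetical names and event shapes; the actual Meteor MySQL package's trigger API differs): a published query is only re-run when a binlog row event actually affects it, rather than on every write to the table.

```javascript
// Hypothetical sketch of selective binlog-driven reactivity: the trigger
// predicate filters row events so irrelevant writes never cause a refresh.
function makeTrigger(table, affectsQuery, rerunQuery) {
  return function onBinlogEvent(event) {
    if (event.table !== table) return false;    // wrong table: ignore
    if (!affectsQuery(event.row)) return false; // irrelevant row: ignore
    rerunQuery();                               // selective refresh
    return true;
  };
}

// Example: a per-vehicle position feed only cares about its own vehicle_id.
let reruns = 0;
const trigger = makeTrigger(
  "positions",
  (row) => row.vehicle_id === 42,
  () => { reruns += 1; }
);
trigger({ table: "positions", row: { vehicle_id: 42 } }); // refreshes
trigger({ table: "positions", row: { vehicle_id: 7 } });  // skipped
```

The win is that a fleet-wide write volume fans out into only the handful of subscriptions each row actually touches.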
In terms of scalability in general, are we limited to a single Node.js process? Absolutely not - we use Nginx to terminate the connection from the client and spread the load across multiple Node.js processes. Similarly, MySQL master-slave replication allows us to spread the load across a cluster of servers.
For those using MongoDB, a Meteor package named RedisOplog provides improved scalability with the assistance of Redis's pub/sub functionality.
mch82|6 years ago
PouchDB is mentioned a couple of times, including one “PouchDB compatible” mention. Wondering what unique use cases RxDB supports?
dtech|6 years ago
skrebbel|6 years ago
I mean, that's why it didn't catch on, right? :-) It's hard :-)
throwaway_bad|6 years ago
The "shard" could just be that users own feed in this case. Then you get offline for free where user adds a tweet and it appears immediately, replicating back to server when he goes back online. The server replica side will need to be a lot more complicated to deal with broadcasting but I don't see why it won't work.
bayesian_horse|6 years ago
If I had to make a Twitter clone with CouchDB, I would probably have one timeline document per user, and maybe one per day to limit the syncing bandwidth.
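The per-user, per-day document scheme comes down to a deterministic document id (hypothetical id format, not a CouchDB convention): bounding each document by day caps how much any one sync has to transfer.

```javascript
// Sketch: derive the timeline document id for a given user and day.
function timelineDocId(userId, date) {
  const day = date.toISOString().slice(0, 10); // e.g. "2019-05-01"
  return `timeline:${userId}:${day}`;
}
```

A client then syncs only the handful of recent day-documents it needs instead of one ever-growing timeline.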
azr79|6 years ago