A gentle reminder that FoundationDB exists and has this nailed down really well. They are just bad at marketing, so it's not in fashion. But do check it out if you want a distributed database with strict serializable semantics that actually works.
Strongly consistent FoundationDB likely has write performance similar to CockroachDB or TiDB, as long as you avoid secondary indexes.
Secondary indexes are what ruin performance in "distributed strongly consistent" systems: each index adds one more write to a separate "index table" (... and one more Raft quorum round!).
I don't think FoundationDB has "secondary indexes" to begin with, so you may never run into the +1-write-per-index issue; in TiDB that machinery is just a layer on top of TiKV (the equivalent of RocksDB in CockroachDB).
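To make the write-amplification point concrete, here is a toy sketch (not any real database's API) of how each secondary index turns one logical insert into one extra physical write; in a distributed strongly consistent store, each of those physical writes costs a quorum round:

```python
class ToyKV:
    """Stand-in for a distributed key-value store."""
    def __init__(self):
        self.store = {}
        self.writes = 0  # each physical write ~ one quorum round in a real system

    def set(self, key, value):
        self.store[key] = value
        self.writes += 1

class UsersTable:
    """One primary-row write plus one 'index table' write per secondary index."""
    def __init__(self, kv, indexed_columns):
        self.kv = kv
        self.indexed = indexed_columns

    def insert(self, user_id, row):
        self.kv.set(("users", user_id), row)                   # primary write
        for col in self.indexed:                               # +1 write per index
            self.kv.set(("idx", col, row[col], user_id), b"")

kv = ToyKV()
table = UsersTable(kv, indexed_columns=["email", "country"])
table.insert(42, {"email": "a@example.com", "country": "SE"})
print(kv.writes)  # 3: one row write + two index-table writes
```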
Linking to the introduction bypasses the prominent note in the readme:
> RedisRaft is still being developed and is not yet ready for any real production use. Please do not use it for any mission critical purpose at this time.
Why choose this over etcd? Especially if it's a limitation / non-goal to support all Redis commands, or to match Redis-like performance. Why not go with the battle-hardened, proven option (it's the backing datastore in Kubernetes)?
I've been watching this project for a long time. It was supposed to be released with Redis 7 [1], but that no longer seems to be the case, and there is no public roadmap saying when it will be production-ready.
I am looking at KeyDB and considering using it as a replacement for Redis. Besides some speed improvements, it has good-looking replication and clustering solutions. https://docs.keydb.dev/docs/cluster-spec
We thought the same and deployed KeyDB to production as a replacement for a big Redis deployment (200+ GB of memory), and we ran into many unpleasant issues with it: very high replication latency, instability, random crashes, memory leaks, etc. So I'd advise you to do thorough testing before you use it in production.
It may have improved, but KeyDB has a number of issues for common Redis use cases. E.g. if you're using Redis as a task queue (typically BRPOP), you'll encounter a race condition in which each KeyDB instance makes a new task available to listening workers on all nodes, resulting in duplicated tasks.
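A toy simulation of the failure mode described (no real Redis or KeyDB code involved, and node/queue names are made up): in an active-active setup, the same push is replicated to every node, and a worker blocked on each node can pop its local copy, so one task is handled twice:

```python
from collections import deque

class Node:
    """Toy active-active node: every push is replicated to all peers."""
    def __init__(self):
        self.queue = deque()
        self.peers = []

    def lpush(self, task):
        self.queue.appendleft(task)
        for peer in self.peers:            # multi-master: replicate the push
            peer.queue.appendleft(task)

    def brpop(self):
        # A worker blocked on this node pops from the *local* copy only.
        return self.queue.pop() if self.queue else None

a, b = Node(), Node()
a.peers, b.peers = [b], [a]

a.lpush("task-1")                          # one task enqueued...
delivered = [a.brpop(), b.brpop()]
print(delivered)                           # ...but both workers receive it
```

In single-master Redis, the BRPOP removal is itself replicated, so only one worker ever observes the element; the duplication above is specific to serving blocked clients from more than one writable node at once.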
I attempted to use KeyDB precisely for its replication and clustering, but was forced to switch to Redis HA. Too many issues getting it to work in a stable way.
Raft is a pretty decent -- not great -- consensus algorithm (IMHO), but it is widely used because it is easy to understand. If I had to trust one, I would probably go with Multi-Paxos, if you could implement it successfully.
MemoryDB has single-node (primary-node) strong consistency.
MemoryDB seems to have an architecture very similar to AWS Aurora's, which separates the storage layer from the compute nodes; consistency is implemented not by communication between compute nodes but by offering a consistent distributed storage layer. This architecture usually doesn't provide multi-node strong consistency by itself, though it can have replicas.
This means that in MemoryDB only the primary node is strongly consistent; the replica nodes aren't.
On the other hand, in my experience, these kinds of AWS offerings come with fewer operational headaches, because the storage remains safe even if the primary node fails and you don't need to worry about managing distributed nodes.
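A minimal sketch of the consistency split described above, assuming asynchronous primary-to-replica replication (all names here are invented for illustration, not MemoryDB's actual internals): a write is immediately visible on the primary, while a replica read can be stale until replication catches up:

```python
class Primary:
    def __init__(self):
        self.data = {}
        self.pending = []             # replication stream, applied asynchronously

    def write(self, key, value):
        self.data[key] = value        # strongly consistent: visible here immediately
        self.pending.append((key, value))

class Replica:
    def __init__(self):
        self.data = {}

    def apply(self, primary):
        while primary.pending:        # replication catches up later
            k, v = primary.pending.pop(0)
            self.data[k] = v

p, r = Primary(), Replica()
p.write("x", 1)
stale_read = r.data.get("x")          # replica read before replication applies
r.apply(p)
fresh_read = r.data.get("x")
print(stale_read, fresh_read)         # None 1 -> the replica briefly lagged
```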
My understanding of MemoryDB is that it basically replaces the AOF with a distributed log (it might be Kafka/Kinesis, but it could just be backed by the same data layer as Aurora). The biggest win there is that acknowledged writes are not lost if the writer node dies. A reader can replay the log and get fully caught up as it is promoted during a failover.
This comes at a cost, though, and writes are slower than in traditional Redis.
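The log-replay idea above can be sketched as follows (a toy model, since MemoryDB's internals are not public; the only claims encoded are "ack only after the log has the write" and "a replica replays the log on promotion"):

```python
class DurableLog:
    """Stand-in for a replicated transaction log replacing the AOF."""
    def __init__(self):
        self.entries = []

    def append(self, entry):
        # Real system: quorum-replicated before returning -- hence slower writes.
        self.entries.append(entry)
        return len(self.entries)      # acknowledgement

class Node:
    def __init__(self, log):
        self.log = log
        self.data = {}
        self.applied = 0

    def set(self, key, value):        # primary path: log first, then apply
        self.log.append(("set", key, value))
        self.catch_up()

    def catch_up(self):               # replica path: replay the log on promotion
        for op, key, value in self.log.entries[self.applied:]:
            self.data[key] = value
        self.applied = len(self.log.entries)

log = DurableLog()
primary, replica = Node(log), Node(log)
primary.set("k", "v1")                # acknowledged write survives in the log
# primary dies; the replica replays the log and is promoted fully caught up
replica.catch_up()
print(replica.data)                   # {'k': 'v1'}
```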
eternalban | 2 years ago
I am guessing maybe it's Apple's corporate secrecy that is the issue. Apple likely has a massive deployment of this tech.
ruuda | 2 years ago
[1]: https://aphyr.com/posts/283-jepsen-redis
jabradoodle | 2 years ago
It will indeed be interesting to see the analysis once it becomes stable.
kbumsik | 2 years ago
[1] https://github.com/etcd-io/etcd/issues/9771
361994752 | 2 years ago
[1] https://www.zdnet.com/article/redis-labs-unveils-redis-datab...
bullen | 2 years ago
It has been running in an intercontinental production environment with 100% read uptime since 2017.
It's 2000 lines of code: http://root.rupy.se (this test environment has 3 nodes: fem, six and sju)
remram | 2 years ago
What a weird notation. When N = 3, a cluster may lose up to 1 node; I don't see how that matches this formula.
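For reference, majority-quorum systems like Raft need floor(N/2) + 1 live nodes and therefore tolerate floor((N - 1) / 2) failures, which can be checked directly (the formula being questioned above isn't quoted here, so this only shows the standard quorum math):

```python
# Majority-quorum fault tolerance for an N-node cluster.

def quorum(n: int) -> int:
    """Smallest majority of n nodes."""
    return n // 2 + 1

def max_failures(n: int) -> int:
    """Nodes the cluster can lose while keeping a majority alive."""
    return (n - 1) // 2

for n in (3, 4, 5):
    print(n, quorum(n), max_failures(n))
# N=3 -> quorum 2, tolerates 1 failure (matching the N=3 case above)
# N=4 -> quorum 3, tolerates 1 (an even node count gains nothing)
# N=5 -> quorum 3, tolerates 2
```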
Cardinal7167 | 2 years ago
https://decentralizedthoughts.github.io/2020-12-12-raft-live...
mperham | 2 years ago
https://aws.amazon.com/memorydb/features/