I had a practical use to learn this: design interview prep.

pyfon | 10 months ago
In the real world, some platform team (or AWS) does this, and then probably one person on that team spends a week implementing it. Still, it's good for everyone to understand. You can use it to ensure that requests for a given user generally land on the same node. If that user's workload creates state (even if it's just cache), hitting the same server on each request gives you a performance win.

__turbobrew__ | 10 months ago
From what I can see in the UI, nodes are placed semi-randomly on the ring (a hash probably determines each node's placement), so don't you still have the same problem: hashing virtual nodes onto the ring can still cause imbalances? It seems to me you should be placing new nodes within the largest gaps on the ring.

jauntywundrkind | 10 months ago
Vnodes are amazing for so many reasons. The model here is pretty simple, but even so, it means you can reshuffle work without re-hashing when adding nodes: the new node just claims some vnodes. And that's just the basics. In Cassandra's consistent hashing (and many others) you can also move vnodes between nodes as you please, which, if you have hotspot vnodes, gives you a chance to add some anti-affinity for the hot ones.
https://docs.datastax.com/en/cassandra-oss/3.0/cassandra/arc...
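The vnode mechanics being discussed can be sketched in a few lines. This is a minimal illustration, not Cassandra's actual implementation: each physical node claims several pseudo-random points on the ring, a key is served by the first vnode clockwise from its hash, and adding a node only moves the keys that fall into that node's new vnodes.

```python
import bisect
import hashlib

def h(s: str) -> int:
    # Stable 64-bit hash so placements are reproducible across runs.
    return int.from_bytes(hashlib.sha256(s.encode()).digest()[:8], "big")

class Ring:
    def __init__(self, nodes, vnodes=64):
        self.vnodes = vnodes
        self.points = []  # sorted list of (hash, node) pairs
        for n in nodes:
            self.add(n)

    def add(self, node):
        # Each physical node claims `vnodes` pseudo-random points on the ring.
        for i in range(self.vnodes):
            bisect.insort(self.points, (h(f"{node}#{i}"), node))

    def lookup(self, key):
        # First vnode clockwise from the key's hash (wrapping past the top).
        i = bisect.bisect(self.points, (h(key), ""))
        return self.points[i % len(self.points)][1]

keys = [f"user-{i}" for i in range(10_000)]
ring = Ring(["a", "b", "c"])
before = {k: ring.lookup(k) for k in keys}
ring.add("d")  # the new node just claims its vnodes; no global re-hash
moved = sum(before[k] != ring.lookup(k) for k in keys)
print(f"{moved / len(keys):.0%} of keys moved")  # roughly 1/4, not 3/4
```

Every key that moves, moves to the new node; with naive `hash(key) % n` assignment, roughly 3/4 of keys would have been reshuffled instead. More vnodes per node also smooths out the placement imbalance raised above, at the cost of a larger ring.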
meling | 10 months ago
https://en.wikipedia.org/wiki/Chord_(peer-to-peer)
eulenteufel | 10 months ago
[0] https://docs.openstack.org/ironic/latest/_modules/ironic/com...
hinkley | 10 months ago
Personally, I still find that a lot easier to reason about, especially when it's time to resize the cluster.
charleshn | 10 months ago
[0] https://en.m.wikipedia.org/wiki/Rendezvous_hashing
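For contrast with the ring approach, rendezvous (highest-random-weight) hashing linked above needs no ring at all: every node computes a score for every key, and the highest score wins. A minimal sketch (the helper names are illustrative, not from any particular library):

```python
import hashlib

def score(node: str, key: str) -> int:
    # Pseudo-random weight for this (node, key) pair; highest score owns the key.
    return int.from_bytes(hashlib.sha256(f"{node}|{key}".encode()).digest()[:8], "big")

def owner(nodes, key):
    return max(nodes, key=lambda n: score(n, key))

nodes = ["a", "b", "c"]
keys = [f"user-{i}" for i in range(10_000)]
before = {k: owner(nodes, k) for k in keys}

# Removing a node only reassigns the keys it owned; every other key's
# highest-scoring node is unchanged, so it stays put.
remaining = ["a", "b"]
assert all(owner(remaining, k) == before[k] for k in keys if before[k] != "c")
```

Lookups cost O(nodes) per key instead of O(log vnodes), but there is no placement step to get wrong, which sidesteps the ring-imbalance concern entirely.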