item 23821145

enitihas | 5 years ago

How do bookkeeper nodes handle addition then. AFAIK, if you add new storage nodes, whether bookkeeper or Kafka, there has to be some repartitioning, or else how will the new node be useful at all?

miguno | 5 years ago

See my previous answer further up in this sub-thread. Neither Kafka nor BookKeeper require data repartitioning when adding new nodes. Instead, both require data rebalancing, which moves some data from existing nodes to the newly added nodes.

Think of it this way: repartitioning changes the logical layout of the data, which can affect application semantics; data (re)balancing just shuffles the stored bytes "as is" behind the scenes, without changing the data itself. The confusion probably stems from the two words sounding very similar.
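The distinction can be sketched in a few lines of Python (the hash function, key values, and node names here are illustrative, not anything from Kafka or BookKeeper):

```python
import zlib

def partition_for(key: str, num_partitions: int) -> int:
    # Deterministic key -> partition mapping: this is the *logical* layout.
    return zlib.crc32(key.encode()) % num_partitions

# Repartitioning: changing the partition count changes where keys map,
# which can break per-key ordering assumptions in an application.
before = {k: partition_for(k, 4) for k in ["a", "b", "c", "d"]}
after = {k: partition_for(k, 5) for k in ["a", "b", "c", "d"]}
moved_keys = [k for k in before if before[k] != after[k]]

# Rebalancing: the key -> partition mapping is untouched; only the
# assignment of partitions to nodes changes, so bytes move "as is".
nodes_before = {0: "node1", 1: "node2", 2: "node1", 3: "node2"}
nodes_after = {0: "node1", 1: "node2", 2: "node3", 3: "node3"}  # node3 joined
unchanged_mapping = all(partition_for(k, 4) == before[k] for k in before)
```

After rebalancing, `unchanged_mapping` is still true: consumers see the same keys on the same partitions, only hosted elsewhere.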

For Kafka, you use tools like Confluent's Auto Data Balancer (https://docs.confluent.io/current/kafka/rebalancer/index.htm...) or LinkedIn's Cruise Control (https://github.com/linkedin/cruise-control), which automatically rebalance the data in your Kafka cluster in the background. Pulsar has its own toolset to achieve the same.
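Apache Kafka itself also ships a lower-level tool, kafka-reassign-partitions.sh, which is driven by a JSON plan. A minimal sketch of building such a plan (the topic name and broker ids here are invented; only the version-1 JSON shape is real):

```python
import json

# Hypothetical cluster: broker 4 was just added and should take over
# a replica of partition 2 of the (made-up) topic "events".
new_broker = 4
plan = {
    "version": 1,
    "partitions": [
        {"topic": "events", "partition": 2, "replicas": [1, new_broker]},
    ],
}

# This JSON would be written to a file and passed to
# kafka-reassign-partitions.sh via --reassignment-json-file.
reassignment_json = json.dumps(plan)
```

The higher-level tools mentioned above essentially generate and execute plans like this for you, with throttling and goal-based placement on top.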

sciurus | 5 years ago

https://jack-vanlightly.com/blog/2018/10/2/understanding-how... goes into this.

> The data of a given topic is spread across multiple Bookies. The topic has been split into Ledgers and the Ledgers into Fragments and with striping, into calculatable subsets of fragment ensembles. When you need to grow your cluster, just add more Bookies and they’ll start getting written to when new fragments are created. No more Kafka-style rebalancing required. However, reads and writes now have to jump around a bit between Bookies.
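The quoted mechanics can be modeled in a toy sketch (not BookKeeper's API; the round-robin selection stands in for its actual ensemble placement policy): each fragment picks its ensemble from the bookies alive at creation time, so existing fragments never move and a new bookie absorbs writes only for new fragments.

```python
def new_fragment(fragment_id: int, bookies: list, ensemble_size: int = 2) -> list:
    # Round-robin ensemble selection over the *current* set of bookies;
    # a stand-in for BookKeeper's real placement policy.
    start = fragment_id % len(bookies)
    return [bookies[(start + i) % len(bookies)] for i in range(ensemble_size)]

bookies = ["bookie1", "bookie2", "bookie3"]
fragments = {f: new_fragment(f, bookies) for f in range(3)}

bookies.append("bookie4")           # cluster grows; nothing is copied
fragments[3] = new_fragment(3, bookies)  # only new fragments can land on bookie4

old_untouched = all("bookie4" not in fragments[f] for f in range(3))
```

This is the "no Kafka-style rebalancing" property: growth changes where *future* data lands, at the cost of reads and writes fanning out across more bookies.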

jackvanlightly | 5 years ago

If consumers are keeping up, there will be no reads to the BookKeeper layer as the Pulsar broker will serve from memory.

When reads need to go to BookKeeper there are caches there too, with read-aheads to populate the cache to avoid going back to disk regularly.

Even when having to go to disk, there are further optimizations in how data is laid out on disk to ensure as much sequential reading as possible.
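The three tiers described above can be sketched as follows (the names, cache structure, and read-ahead size are illustrative, not Pulsar's or BookKeeper's actual internals):

```python
READ_AHEAD = 4  # entries fetched around a miss to amortize a disk read

broker_cache = {}   # topic tail held in broker memory; caught-up consumers hit here
bookie_cache = {}   # bookie-side cache, populated by read-ahead
disk = {i: f"entry-{i}" for i in range(100)}  # sequentially laid-out entries
disk_reads = 0

def read(entry_id: int) -> str:
    global disk_reads
    if entry_id in broker_cache:       # tier 1: broker memory
        return broker_cache[entry_id]
    if entry_id in bookie_cache:       # tier 2: bookie cache
        return bookie_cache[entry_id]
    disk_reads += 1                    # tier 3: one sequential disk read...
    for i in range(entry_id, min(entry_id + READ_AHEAD, len(disk))):
        bookie_cache[i] = disk[i]      # ...pre-populates the cache ahead
    return bookie_cache[entry_id]

# A lagging consumer reading sequentially pays roughly one disk read
# per READ_AHEAD entries rather than one per entry.
for e in range(8):
    read(e)
```

With `READ_AHEAD = 4`, reading entries 0 through 7 costs two disk reads instead of eight, which is the effect the read-ahead cache is after.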

Also note that the fragments aren't necessarily that small either.

enitihas | 5 years ago

OK, so it will sacrifice some throughput when a new node is added, as reads and writes need to jump around a bit between bookies.