A central load balancer is going to turn into a nightmare once you start managing a reasonable number of servers. Hardware goes bad (especially in the cloud), and having a single point of failure leaves you at its mercy.
A load balancer should never be a single point of failure. You should always have multiples.
Also, if the response to this is 'but it's still a central point of failure', this solution hasn't really removed that either. If the Zookeeper cluster dies you lose everything.
Generally if a clustered load-balancer dies and another takes over there's half a second to a couple of seconds of transition, but you're back up and running with a very simple architecture. If all of your load-balancers die, you have something much bigger to worry about.
Really all this does is move the failure mode into a higher-level service with a much greater potential for failure. Zookeeper even makes you specify the size of your cluster up front, and in my experience it's difficult to update that on a live cluster. I've read they're working on it, but still.
Clustered load-balancers (using Pacemaker/Corosync, keepalived, or similar) are very well understood these days. Pacemaker/Corosync can even run within EC2 now: a couple of years ago they added unicast support, which gets around EC2's lack of multicast.
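For context, the keepalived flavor of this pattern is a VRRP pair sharing a floating virtual IP. A minimal sketch might look like the following (the interface name, VIP, and password are invented placeholders, not from the thread):

```
# /etc/keepalived/keepalived.conf on the primary load-balancer.
# The standby node runs the same block with state BACKUP and a
# lower priority; if the primary stops sending VRRP advertisements,
# the backup claims the VIP within a few advert intervals.
vrrp_instance VI_1 {
    state MASTER
    interface eth0          # NIC carrying the VIP (placeholder)
    virtual_router_id 51
    priority 150            # higher priority wins the election
    advert_int 1            # advertise every second
    authentication {
        auth_type PASS
        auth_pass examplepw # placeholder secret
    }
    virtual_ipaddress {
        10.0.0.100          # the floating VIP clients connect to
    }
}
```

Clients only ever see the VIP, which is what makes the sub-second failover the comment describes possible.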
Additionally, if we want to talk about load, a well-configured HAProxy/Nginx load-balancer can handle hundreds of thousands of connections a second. If your installation needs more than that, you could add a layer that distributes the load-balancing across a set of them. Obviously that introduces another problem, but not one you'll hit until you have even more traffic than Airbnb gets.
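As a rough illustration of the HAProxy setup being discussed (hostnames, addresses, and the health-check path are invented for the sketch):

```
# Minimal haproxy.cfg sketch
defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend www
    bind *:80               # accept client connections
    default_backend app

backend app
    balance roundrobin      # spread connections across servers
    option httpchk GET /health   # hypothetical health-check endpoint
    server app1 10.0.1.11:8080 check
    server app2 10.0.1.12:8080 check
```

A single process with a config like this is doing very little work per connection, which is why the hundreds-of-thousands-per-second figure is plausible on decent hardware.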
For accuracy, it's worth pointing out that if Zookeeper dies, only registration and deregistration go down. The local HAProxy processes will continue to run, and applications will continue to be able to communicate with services. Zookeeper isn't a single point of failure for communication; it's just a registration service.
druiid|12 years ago
reissbaker|12 years ago
vzctl|12 years ago