It talks a lot about performance, but actual cloud load balancers such as AWS ELB or Azure Load Balancer are implemented in the software-defined network (SDN) and accelerated in hardware.
For example, in Azure only the first two packets of a VM->LB->VM flow traverse the LB. Subsequent packets go directly VM-to-VM and are rewritten in the host NICs to merely appear to go via the LB address. This enables staggering throughput that no “software in a VM” can possibly hope to match.
Personally, I wish people would stop with the unnecessary middle boxes. It’s 2023 and there are cloud VMs now that can push 200 Gbps! Anything in the path of a few dozen such VMs will melt into slag.
This is especially important for Kubernetes, and microservices in general, which are already very chatty and, in surprisingly common configurations, have reverse proxies stacked five deep.
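The bypass described above can be modeled in a few lines. This is a toy sketch of the idea only: the names, the hash-based backend choice, and the single-direction flow table are illustrative, not Azure's actual SDN mechanism (the real fast path offloads both directions, which is why the comment says the first *two* packets hit the LB).

```python
# Toy model of "first packet(s) hit the LB, the rest are offloaded":
# the LB picks a backend for a new flow, then an offload rule in the
# source host's virtual NIC lets later packets go VM-to-VM directly.

VIP = "10.0.0.100"
BACKENDS = ["10.0.1.4", "10.0.1.5"]

flow_table = {}   # per-host offload rules: (src, dst_vip) -> backend
lb_packets = 0    # packets that actually traversed the LB

def send(src, dst, payload):
    global lb_packets
    key = (src, dst)
    if dst == VIP and key not in flow_table:
        # Slow path: LB chooses a backend and installs an offload rule.
        lb_packets += 1
        backend = BACKENDS[hash(key) % len(BACKENDS)]
        flow_table[key] = backend
        return backend
    # Fast path: host NIC rewrites VIP -> backend without touching the LB.
    return flow_table.get(key, dst)

for i in range(1000):
    send("10.0.2.7", VIP, f"pkt{i}")

print(lb_packets)  # only the first packet of the flow hit the LB
```

The point of the sketch is the ratio: one slow-path packet amortized over the whole flow, so the LB's own capacity stops being the bottleneck.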
> only the first two packets in a VM->LB->VM flow will traverse the LB. Subsequent packets are direct from VM-to-VM and are rewritten in the host NICs to merely appear to go via the LB address
I’m not convinced it offers better COGS, actually. You can do line rate on a CPU with DPDK these days, and even relatively beefy CPUs are probably cheaper than specialized hardware like a Xilinx card.
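As a rough sanity check on the "line speed on CPU" claim, here is the standard back-of-envelope arithmetic: packets per second at line rate for minimum-size frames, and the per-packet cycle budget that leaves. The 3 GHz clock and 16-core count are assumed figures, not from the thread.

```python
# 64-byte minimum Ethernet frames already include the FCS; the wire adds
# 7B preamble, 1B start-of-frame delimiter and a 12B inter-frame gap.

def line_rate_pps(gbps, frame_bytes=64):
    wire_bytes = frame_bytes + 20          # preamble + SFD + inter-frame gap
    return gbps * 1e9 / (wire_bytes * 8)

def cycles_per_packet(gbps, cores, ghz=3.0, frame_bytes=64):
    # total cycles available across all cores, divided by packet arrival rate
    return cores * ghz * 1e9 / line_rate_pps(gbps, frame_bytes)

print(f"{line_rate_pps(100) / 1e6:.1f} Mpps")         # ~148.8 Mpps at 100G
print(f"{cycles_per_packet(100, cores=16):.0f} cycles/pkt")
```

A few hundred cycles per worst-case packet is tight but feasible for simple forwarding, which is why the DPDK argument is not obviously wrong; at realistic average packet sizes the budget is far larger.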
Just curious about "only the first two packets in a VM->LB->VM flow will traverse the LB. Subsequent packets are direct from VM-to-VM and are rewritten in the host NICs to merely appear to go via the LB address": how is it possible to change the load balancer IP (VIP) to the VM IP mid-session? Are you talking about DSR (Direct Server Return) here?
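For background on the question: classic DSR keeps the VIP intact end to end and rewrites only the L2 destination, with each backend owning the VIP on a non-ARPing loopback interface so it accepts the packet and replies straight to the client. A minimal sketch (all MACs and addresses made up):

```python
VIP = "203.0.113.10"
BACKEND_MACS = {"be1": "02:00:00:00:00:01", "be2": "02:00:00:00:00:02"}

def lb_forward(pkt, backend):
    # L2-only rewrite: IPs and ports are untouched, so no mid-session
    # VIP -> VM-IP change is ever visible to the client.
    fwd = dict(pkt)
    fwd["dst_mac"] = BACKEND_MACS[backend]
    return fwd

def backend_reply(pkt):
    # The backend has the VIP on loopback, so it sources replies from it
    # and sends them directly to the client, bypassing the LB on return.
    return {"src_ip": VIP, "dst_ip": pkt["src_ip"]}

client_pkt = {"src_ip": "198.51.100.7", "dst_ip": VIP, "dst_mac": "lb-mac"}
fwd = lb_forward(client_pkt, "be1")
reply = backend_reply(fwd)
print(fwd["dst_ip"], reply["src_ip"])  # the VIP is preserved in both directions
```

Note this is classic DSR for context; whether Azure's offload is DSR or a full per-flow NAT in the host vSwitch is exactly what the question is asking.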
The details of the bare-metal benchmarks are sparse. I would have expected an eBPF solution to outperform the "aging" IPVS by a significant margin. Moreover, the peak performance of IPVS is far better (115 vs 57 reqs/s); it would be interesting to know whether that is an outlier. A benchmark with a workload that increases over time would make the comparison between the two solutions more precise.
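The ramped benchmark suggested here could be sketched as below. This is a hypothetical closed-loop harness, not the methodology the project used; `do_request` stands in for whatever client call is being measured.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def run_step(do_request, concurrency, duration_s=0.25):
    # Run `concurrency` closed-loop workers for a fixed window,
    # return achieved requests/second for that step.
    done = 0
    deadline = time.monotonic() + duration_s
    def worker():
        nonlocal done
        while time.monotonic() < deadline:
            do_request()
            done += 1   # sketch only: fine for an estimate, not thread-exact
    with ThreadPoolExecutor(concurrency) as pool:
        for _ in range(concurrency):
            pool.submit(worker)
    return done / duration_s

def ramp(do_request, steps=(1, 2, 4, 8)):
    # reqs/s at each load level; plotting this curve exposes outliers and
    # saturation knees that a single peak number hides.
    return {c: run_step(do_request, c) for c in steps}

print(ramp(lambda: time.sleep(0.001)))
```

Comparing the full curves for IPVS and the eBPF path would answer whether the 115 vs 57 gap is a steady-state difference or a one-off outlier.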
Is this a blind load balancer, similar to the iptables statistic module, or are there health checks? If so, are they active or passive? Asking because I saw a comparison to HAProxy.
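For readers unfamiliar with the distinction being asked about: an active check generates its own probe traffic on a timer, while a passive check only observes real requests and ejects a backend after consecutive failures (HAProxy's `check` and `observe` options map to these two styles). A minimal sketch of both:

```python
import socket

def active_check(host, port, timeout=1.0):
    # Active: open (and immediately close) a TCP connection to the backend.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

class PassiveCheck:
    # Passive: no extra traffic; count failures seen on live requests.
    def __init__(self, max_fails=3):
        self.max_fails, self.fails = max_fails, 0
    def record(self, ok):
        self.fails = 0 if ok else self.fails + 1
    @property
    def healthy(self):
        return self.fails < self.max_fails

p = PassiveCheck()
for ok in (True, False, False, False):
    p.record(ok)
print(p.healthy)  # False after three consecutive failures
```

A "blind" balancer in the sense of the iptables statistic module does neither: it spreads traffic by probability with no notion of backend health at all.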
Does anyone else do Segment Routing in kube? This particularly caught my eye. I wonder how much other software & setup users need to take advantage of this with Loxilb. It's such a different paradigm, specifying much more of the route packets take!
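For readers who have not met segment routing: the sender encodes the path itself as an ordered list of segment IDs carried in the packet, which is what makes it "such a different paradigm" (no per-flow state in the middle of the network). A toy model of SRv6 endpoint processing, with illustrative addresses:

```python
def srv6_forward(pkt):
    # 'End' behavior (RFC 8986): decrement Segments Left and copy the
    # now-active segment into the IPv6 destination address.
    if pkt["segments_left"] == 0:
        return pkt  # final destination reached
    pkt["segments_left"] -= 1
    pkt["dst"] = pkt["segments"][pkt["segments_left"]]
    return pkt

# Steer a packet through two waypoints before the final service address.
# Per RFC 8754 the segment list is encoded in reverse order.
pkt = {
    "segments": ["fc00::service", "fc00::waypoint2", "fc00::waypoint1"],
    "segments_left": 2,
    "dst": "fc00::waypoint1",
}
path = [pkt["dst"]]
while pkt["segments_left"] > 0:
    path.append(srv6_forward(pkt)["dst"])
print(path)  # ['fc00::waypoint1', 'fc00::waypoint2', 'fc00::service']
```

The catch implied by the question: every segment endpoint on the path has to understand the SRH, which is extra software and setup beyond the load balancer itself.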
This looks interesting, especially the support for GTP/SCTP, although it seems quite new: the first commit on GitHub is from last year. I wonder if anyone has used this in production?
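Part of what makes SCTP support notable for a load balancer: unlike TCP/UDP, an SCTP association is identified by a verification tag and may span multiple source addresses (multihoming), so naive 5-tuple hashing can split one association across backends. Parsing the common header is where an LB has to start; the port numbers below are just example values:

```python
import struct

def parse_sctp_common_header(data):
    # RFC 9260: src port (2B), dst port (2B), verification tag (4B),
    # checksum (4B), all big-endian.
    src, dst, vtag, checksum = struct.unpack("!HHII", data[:12])
    return {"src_port": src, "dst_port": dst,
            "vtag": vtag, "checksum": checksum}

# 38412 is the IANA-registered SCTP port for 5G NGAP, the kind of telco
# signaling traffic GTP/SCTP support is aimed at.
hdr = struct.pack("!HHII", 5060, 38412, 0xDEADBEEF, 0)
print(parse_sctp_common_header(hdr))
```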
jiggawatts | 3 years ago
betaby | 3 years ago
Do you have more details how that's done?
dilyevsky | 3 years ago
anandrm | 3 years ago
nijave | 3 years ago
alas44 | 3 years ago
vbernat | 3 years ago
LinuxBender | 3 years ago
aeyes | 3 years ago
Standalone you could do it with the API and a small daemon, but out of the box there is no support for health checks (yet).
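The API-plus-small-daemon approach described above could look roughly like this. Note the endpoint URL and JSON payload are placeholders invented for the sketch, not loxilb's actual REST schema:

```python
import json
import socket
import urllib.request

LB_API = "http://127.0.0.1:11111/lb/endpoint"   # placeholder URL

def tcp_alive(host, port, timeout=1.0):
    # Probe a backend with a plain TCP connect.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def reconcile(backends, alive=tcp_alive, post=None):
    # Decide, per backend, whether it should be attached to or detached
    # from the LB; `post` (if given) pushes each decision to the API.
    actions = {be: ("attach" if alive(*be) else "detach") for be in backends}
    if post:
        for (host, port), op in actions.items():
            body = json.dumps({"ip": host, "port": port, "op": op}).encode()
            post(urllib.request.Request(LB_API, data=body, method="POST"))
    return actions
```

Run `reconcile` from a timer loop and you have the missing active health checks as an external sidecar rather than a built-in feature.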
nijave | 3 years ago
cyberge99 | 3 years ago
Cloudrizi (loxilb-io)
userbinator | 3 years ago
https://i.redd.it/krnyqtpgdy221.png
rektide | 3 years ago
hujun | 3 years ago
goodpoint | 3 years ago
For an eBPF-based application? Not good.
tecleandor | 3 years ago
travbrack | 3 years ago
victorbjorklund | 3 years ago
yogaBear | 3 years ago
Then I can put k8s and containers behind me.