
Loxilb: eBPF based cloud-native service load-balancer

94 points | InitEnabler | 3 years ago | github.com

29 comments

[+] jiggawatts | 3 years ago
It talks a lot about performance, but the actual cloud load balancers such as AWS ELB or Azure Load Balancer are implemented in the software-defined network (SDN) layer and are accelerated in hardware.

For example in Azure, only the first two packets in a VM->LB->VM flow will traverse the LB. Subsequent packets are direct from VM-to-VM and are rewritten in the host NICs to merely appear to go via the LB address. This enables staggering throughputs that no “software in a VM” can possibly hope to match.
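A toy model of that flow offload may make the idea concrete. All names and numbers below are illustrative (this is not Azure's actual implementation, and for simplicity the sketch offloads after the first packet rather than the first two): the LB only sees the start of each flow, installs a rewrite rule, and later packets are NAT-ed "at the NIC" straight to the backend.

```python
# Toy sketch of SDN flow offload: the load balancer picks a backend for
# the first packet of a flow, installs a flow rule, and every later
# packet is rewritten (VIP -> backend) without traversing the LB.
import itertools

BACKENDS = ["10.0.0.11", "10.0.0.12"]  # illustrative backend VMs
_rr = itertools.cycle(BACKENDS)         # round-robin backend selection

flow_table = {}  # (src, dst_vip) -> backend chosen by the LB
lb_hits = 0      # packets that actually traversed the LB

def send_packet(src, vip):
    """Return the address the packet is delivered to."""
    global lb_hits
    key = (src, vip)
    if key not in flow_table:
        # First packet of the flow: traverse the LB, pick a backend,
        # and install an offload rule in the host "NIC".
        lb_hits += 1
        flow_table[key] = next(_rr)
    # Offloaded path: rewrite VIP -> backend without touching the LB.
    return flow_table[key]

# A 5-packet flow costs only one trip through the LB.
dests = [send_packet("10.0.0.1", "20.0.0.1") for _ in range(5)]
```

The load balancer's cost is per-flow rather than per-packet, which is why a software LB VM in the data path cannot compete.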

Personally I wish people would stop with the unnecessary middle boxes. It’s 2023 and there are cloud VMs now that can put out 200 Gbps! Anything in the path of a few dozen such VMs will melt into slag.

This is especially important for Kubernetes, and microservices in general, which are already very chatty and in surprisingly common configurations sit behind reverse proxies stacked five deep.

[+] betaby | 3 years ago
> only the first two packets in a VM->LB->VM flow will traverse the LB. Subsequent packets are direct from VM-to-VM and are rewritten in the host NICs to merely appear to go via the LB address

Do you have more details how that's done?

[+] dilyevsky | 3 years ago
I’m not convinced it offers better COGS, actually. You can do line speed on a CPU with DPDK these days, and even relatively beefy CPUs are probably cheaper than specialized hardware like a Xilinx card.
[+] anandrm | 3 years ago
Just curious: "only the first two packets in a VM->LB->VM flow will traverse the LB. Subsequent packets are direct from VM-to-VM and are rewritten in the host NICs to merely appear to go via the LB address". How is it possible to change the load balancer IP (VIP) to the VM IP mid-session? Are you talking about DSR (Direct Server Return) here?
[+] nijave | 3 years ago
It sounds like this is supposed to be a competitor/alternative to MetalLB, which you'd generally use outside a cloud environment.
[+] alas44 | 3 years ago
In case someone else is looking for performance benchmarks: https://loxilb-io.github.io/loxilbdocs/perf/
[+] vbernat | 3 years ago
The details for the bare metal benchmarks are sparse. I would have expected an eBPF solution to outperform the "aging" IPVS by a significant margin. Moreover, the peak performance of IPVS is far better (115 vs 57 reqs/s). It would be interesting to know if that is an outlier. A benchmark with an increasing workload over time would make it easier to compare the two solutions.
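A minimal sketch of the ramped benchmark suggested above: step the offered load up over time and record achieved throughput, so the saturation knee (and any outlier runs) becomes visible instead of a single peak number. The "server" here is a toy model with a made-up fixed capacity; a real run would drive the load balancer with a tool like wrk or k6.

```python
CAPACITY = 57_000  # sustainable reqs/s of the system under test (illustrative)

def run_step(offered_rps):
    # Toy measurement: achieved throughput saturates at CAPACITY.
    return min(offered_rps, CAPACITY)

ramp = [10_000 * i for i in range(1, 11)]       # offer 10k .. 100k reqs/s
curve = [(rps, run_step(rps)) for rps in ramp]  # (offered, achieved) pairs

# Throughput should rise linearly, then flatten: the knee is the
# sustained capacity, and points far off the curve are outliers.
knee = max(achieved for _, achieved in curve)
```

Plotting offered vs. achieved for both loxilb and IPVS on the same ramp would make the comparison unambiguous.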
[+] LinuxBender | 3 years ago
Is this a blind load balancer similar to the iptables statistics module or are there health checks? Are they active or passive health checks? Asking because I saw a comparison to HAProxy.
[+] aeyes | 3 years ago
On Kubernetes, which they mainly target, the Kubernetes control plane would update the list of active service endpoints according to health checks.

Standalone you could do it with the API and a small daemon, but out of the box there is no support for health checks (yet).
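Such a daemon could be a reconcile loop like this sketch. The `probe` and `push` callables are stand-ins, not loxilb's real API: `push` would wrap whatever endpoint-update API the load balancer exposes, and `probe` would be a real TCP/HTTP check.

```python
def reconcile(backends, probe, push):
    """One pass of an active health-check loop: probe every backend
    and push only the healthy subset to the load balancer."""
    healthy = [b for b in backends if probe(b)]
    push(healthy)
    return healthy

# Example with a fake probe: pretend one backend is down.
state = {"10.0.0.11:80": True, "10.0.0.12:80": False}
pushed = []
reconcile(list(state), lambda b: state[b], pushed.extend)
```

Run on a timer (with some hysteresis so a single failed probe doesn't flap the endpoint set), this covers the active-check case LinuxBender asks about.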

[+] rektide | 3 years ago
> Optimized SRv6 implementation in eBPF

Does anyone else do Segment Routing in kube? This particularly caught my eye. I wonder how much other software & setup users need to take advantage of this in loxilb. It's such a different paradigm, specifying much more of the route packets take!
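For readers new to the paradigm: in SRv6 the sender encodes the path as a list of segment IDs (SIDs) carried in an IPv6 Segment Routing Header (RFC 8754); the SRH stores the segment list in reverse order, and a "Segments Left" pointer selects the active segment at each SR-capable hop. This toy walks a segment list the way those hops would; the SIDs are made up.

```python
def srv6_path(srh_segments):
    """Return the hops visited, given an SRH-ordered (reversed) segment list."""
    segments_left = len(srh_segments) - 1  # points at the first segment to visit
    path = []
    while segments_left >= 0:
        path.append(srh_segments[segments_left])  # forward to the active SID
        segments_left -= 1                        # each endpoint consumes a segment
    return path

# Steer traffic through fc00::1, then fc00::2, then fc00::3.
hops = srv6_path(["fc00::3", "fc00::2", "fc00::1"])
```

The sender, not the network, decides the path, which is exactly the inversion rektide is pointing at.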

[+] hujun | 3 years ago
This looks interesting, especially the support for GTP/SCTP; although it seems quite new (the first commit on GitHub is from last year), I wonder if anyone has used this in production?
[+] goodpoint | 3 years ago
The build starts with "Install GoLang > v1.17".

For an eBPF based application? Not good.

[+] tecleandor | 3 years ago
Is it because of being Go, the version, or what?
[+] yogaBear | 3 years ago
My imaginary kingdom for native language libs that let me interact with eBPF to load balance logic forks.

Then I can put k8s and containers behind me.