shizcakes | 5 months ago

Less featureful than this, but we’ve been doing gRPC client-side load balancing with kuberesolver[1] since 2018. It lets gRPC handle the balancer implementations. It’s been rock solid for more than half a decade now.

1: https://github.com/sercand/kuberesolver
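
For reference, a minimal sketch of how a resolver like this is typically wired in, going by the project's README; the service name, namespace, and port name below are placeholders, and the snippet assumes an in-cluster client with the kuberesolver module on the import path:

```go
package main

import (
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"

	"github.com/sercand/kuberesolver/v5"
)

func main() {
	// Register the "kubernetes" scheme with gRPC's resolver registry.
	// In-cluster, this watches the target service's endpoints via the
	// Kubernetes API instead of resolving through DNS.
	kuberesolver.RegisterInCluster()

	// round_robin spreads RPCs over every resolved pod IP; gRPC's
	// default pick_first would pin all traffic to a single backend.
	conn, err := grpc.Dial(
		"kubernetes:///my-service.my-namespace:grpc", // placeholder service/port
		grpc.WithTransportCredentials(insecure.NewCredentials()),
		grpc.WithDefaultServiceConfig(`{"loadBalancingConfig":[{"round_robin":{}}]}`),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
}
```

Because the resolver streams endpoint changes from the API server, the balancer's address list updates as pods come and go, without waiting on a DNS TTL.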


azaras | 5 months ago

What is the difference between kuberesolver and using a headless Service?

In the README they compare it with a ClusterIP Service, but not with a headless one ("clusterIP: None").

The advantage of kuberesolver is that you don't need to tune DNS refresh and cache settings. Even so, I think a headless Service is preferable to having the application call the Kubernetes API directly.
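
For context, a headless Service is just a normal Service with clusterIP set to None, so cluster DNS returns the individual pod IPs rather than a single virtual IP (all names below are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
  namespace: my-namespace
spec:
  clusterIP: None   # headless: DNS returns pod IPs, not a single VIP
  selector:
    app: my-service
  ports:
    - name: grpc
      port: 50051
      targetPort: 50051
```

A client would then dial `dns:///my-service.my-namespace.svc.cluster.local:50051` with a round_robin service config. How quickly it sees pod churn depends on DNS TTLs and the client's re-resolution behavior, which is the trade-off being discussed here.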

euank | 5 months ago

I can give an n=1 anecdote here: the DNS resolver used to have hard-coded caching, which meant it was unresponsive to pod updates and caused mini 30s outages.

The code in question was: https://github.com/grpc/grpc-go/blob/b597a8e1d0ce3f63ef8a7b6...

That meant that deploying a service which drained in less than 30s would suffer a brief outage until the in-process DNS cache expired, with of course no way to configure it.

Kuberesolver streams updates, and thus lets clients talk to new pods almost immediately.

I think things are a little better now, but based on my reading of https://github.com/grpc/grpc/issues/12295, it looks like the DNS resolver still might not resolve new pod names quickly in some cases.

gaurav324 | 5 months ago

kuberesolver is an interesting take as well. Directly watching the K8s API from each client could raise scaling concerns at very large scale, but it does open the door to using richer Kubernetes metadata for smarter load-balancing decisions. Thanks for sharing!

debarshri|5 months ago

I think with some rate limiting, it can scale. But it might be a security issue as ideally you don't want client to be aware of kubernetes also, it would be difficult to scope the access.

arccy | 5 months ago

If you don't want to expose k8s, there's the generic xDS protocol.

hanikesn | 5 months ago

I've been using a standardized xDS resolver[1]. The benefit here is that you don't have to patch gRPC clients.

[1] https://github.com/wongnai/xds
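
For context, gRPC's built-in xDS support is configured through a bootstrap file pointed at by the GRPC_XDS_BOOTSTRAP environment variable; clients then dial `xds:///` targets with no resolver patches. A minimal sketch, where the control-plane address and node id are placeholders (not taken from the linked project):

```json
{
  "xds_servers": [
    {
      "server_uri": "xds-server.xds.svc.cluster.local:18000",
      "channel_creds": [ { "type": "insecure" } ]
    }
  ],
  "node": { "id": "my-grpc-client" }
}
```

With that in place, a client dials something like `xds:///my-service` and receives its endpoints and load-balancing configuration pushed from the xDS control plane, keeping the Kubernetes API out of the clients entirely.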

atombender | 5 months ago

Do you know how this compares to the Nginx ingress controller, which has a native gRPC mode?

darkstar_16 | 5 months ago

We use a headless service and client-side load balancing for this. What's the difference?

arccy | 5 months ago

Instead of polling for endpoint updates, they're pushed to the client through k8s watches.