top | item 13980805

tzaman | 9 years ago

One thing that's still missing but would be quite valuable is a good egress approach/configuration. For example, we use an external PostgreSQL provider in which I can configure access IPs. Since K8s nodes (on GKE) get different IPs, I have to be very loose with CIDR selection, which I don't like.

seeekr | 9 years ago

Wouldn't The Right Way (TM) be to have a daemon running on the cluster that watches either your nodes (if you want to allow access from all your nodes) or specific pods, and then calls your PostgreSQL provider's API to let it know about valid access IPs dynamically?
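A minimal sketch of such a daemon, assuming the official `kubernetes` Python client; `update_provider_allowlist` is a hypothetical stand-in for whatever allowlist API your provider exposes:

```python
# Sketch (not a production controller): watch node events and push the
# current set of node external IPs to a database provider's allowlist.
# `update_provider_allowlist` is hypothetical; replace it with your
# provider's real API call.

def external_ips(node):
    """Collect the ExternalIP addresses reported in a node's status."""
    return [a.address for a in (node.status.addresses or [])
            if a.type == "ExternalIP"]

def ips_to_cidrs(ips):
    """Whitelist each node IP individually as a /32 CIDR."""
    return sorted(ip + "/32" for ip in ips)

def update_provider_allowlist(cidrs):
    # Hypothetical stand-in for the provider's allowlist API.
    print("allowlist ->", cidrs)

def run():
    from kubernetes import client, config, watch
    config.load_incluster_config()  # the daemon runs as a pod in the cluster
    v1 = client.CoreV1Api()
    current = set()
    for _event in watch.Watch().stream(v1.list_node):
        ips = set()
        for node in v1.list_node().items:
            ips.update(external_ips(node))
        if ips != current:  # only call the provider when the set changes
            current = ips
            update_provider_allowlist(ips_to_cidrs(ips))

if __name__ == "__main__":
    run()
```

Watching specific pods instead would mean streaming `v1.list_pod_for_all_namespaces` with a label selector and collecting host IPs the same way.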

xur17 | 9 years ago

That's the approach we used when connecting to a legacy Mongo cluster from a GKE cluster. We ran a pod that subscribed to the Kubernetes API and updated security group rules in AWS as the nodes changed.
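The core of that sync loop can be sketched as a diff between the node IPs seen in the Kubernetes API and the CIDRs currently authorized in an AWS security group, using `boto3`; the group ID and port below are placeholder assumptions:

```python
# Sketch: reconcile an AWS security group's ingress rules with the
# current set of Kubernetes node IPs. Group ID and port are assumptions.

def diff_cidrs(desired_ips, current_cidrs):
    """Return (to_add, to_remove) as sorted lists of /32 CIDRs."""
    desired = {ip + "/32" for ip in desired_ips}
    current = set(current_cidrs)
    return sorted(desired - current), sorted(current - desired)

def sync_security_group(node_ips, group_id="sg-0123example", port=27017):
    import boto3  # imported here so the pure diff logic is testable offline
    ec2 = boto3.client("ec2")
    group = ec2.describe_security_groups(GroupIds=[group_id])["SecurityGroups"][0]
    # CIDRs currently authorized for the Mongo port.
    current = [r["CidrIp"]
               for perm in group.get("IpPermissions", [])
               if perm.get("FromPort") == port
               for r in perm.get("IpRanges", [])]
    to_add, to_remove = diff_cidrs(node_ips, current)
    for cidr in to_remove:
        ec2.revoke_security_group_ingress(GroupId=group_id, IpProtocol="tcp",
                                          FromPort=port, ToPort=port, CidrIp=cidr)
    for cidr in to_add:
        ec2.authorize_security_group_ingress(GroupId=group_id, IpProtocol="tcp",
                                             FromPort=port, ToPort=port, CidrIp=cidr)
```

Running this on every node add/remove event keeps the security group converged without ever whitelisting a broad CIDR.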

thesandlord | 9 years ago

Is IP whitelisting your only option? I really haven't found a good way to pull this off in Kubernetes. You could set up an instance outside the cluster to act as a proxy, but that just feels like a very substandard solution.

On GCP, you can use the SQL Proxy [1] to avoid IP whitelisting or manual SSL setup. Postgres on GCP is still beta, so you probably don't want to run a production DB with it, but hopefully your provider has a similar option.

1: https://cloud.google.com/sql/docs/postgres/sql-proxy
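For illustration, a typical proxy invocation looks roughly like this (the instance connection name is a placeholder; check the linked docs for the current syntax):

```shell
# Run the Cloud SQL Proxy locally or as a sidecar; it authenticates via IAM
# and tunnels connections, so no IP whitelisting or manual SSL is needed.
# my-project:us-central1:my-instance is a placeholder connection name.
./cloud_sql_proxy -instances=my-project:us-central1:my-instance=tcp:5432 &

# Then connect to Postgres through the local tunnel:
psql "host=127.0.0.1 port=5432 user=postgres dbname=mydb"
```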

(I work on Google Cloud)

013a | 9 years ago

But aren't the node IPs relatively constant, barring complete node failure? You could just whitelist all of the node IPs you'd expect connections from.
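Pulling that list is a one-liner, though on GKE the IPs go stale whenever nodes are recreated (upgrades, autoscaling, repairs), which is the original complaint:

```shell
# Print each node's ExternalIP, one per line, for use in a whitelist.
kubectl get nodes \
  -o jsonpath='{range .items[*]}{.status.addresses[?(@.type=="ExternalIP")].address}{"\n"}{end}'
```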

chatmasta | 9 years ago

This problem could be solved by finer-grained control over network routing configuration in general. The difficulty is that this configuration differs depending on which network virtualization technology/driver you choose, so implementing it is best left outside the scope of Kubernetes.

bboreham | 9 years ago

I don't think there is even an open issue for this. You should file one!