
Kubernetes Cost Management with the New OpenCost Plugin for Headlamp

57 points | yolossn | 1 year ago | headlamp.dev | reply

19 comments

[+] nopurpose|1 year ago|reply
Is there a way to allocate cost to every pod on a node when the node cost is given without a breakdown by resource type, and pod resources are not in the same ratio as node resources?

Let's say a node has 8 CPUs and 32 GB RAM (a 1:4 ratio). If every pod uses the same CPU:MEM ratio, the math is simple: node cost is split across all pods in proportion to their resource allocation.

How do you make a fair calculation if a pod's resource ratio is different? In the extreme it is still simple: say there is a pod with 8 CPUs and 2 GB RAM. Because no other pods can fit on the node, the whole node cost is allocated to that pod.

What if the running pod is 6 CPU and 16 GB RAM, and another pod with 2 CPU and 16 GB RAM is squeezed in? How do you allocate node cost to each? It can't just be node cost / # of pods, because intuitively beefier pods should receive a larger share of the node cost, since they prevent more small pods from fitting in. But how exactly do you calculate it? The "weight" of a pod on the CPU dimension is different than on the MEM dimension.
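For the simple same-ratio case, the split looks like this. A minimal sketch with made-up prices and pods, not any particular tool's method:

```python
# Split a node's hourly cost across pods in proportion to a single
# per-pod "node fraction". This only works cleanly when every pod
# requests CPU and memory in the same ratio as the node itself.
# All numbers below are illustrative.

node_cost = 1.00          # $/hr for an 8 CPU / 32 GB node (made up)
node_cpu, node_mem = 8, 32

pods = {                  # pod -> (CPU request, memory request in GB)
    "a": (2, 8),          # 1:4 ratio, same as the node
    "b": (4, 16),
    "c": (2, 8),
}

for name, (cpu, mem) in pods.items():
    fraction = cpu / node_cpu   # equals mem / node_mem when ratios match
    print(name, round(node_cost * fraction, 4))
```

The question above is exactly what breaks when `cpu / node_cpu` and `mem / node_mem` disagree: there is no longer one obvious fraction per pod.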

[+] zxspectrum1982|1 year ago|reply
Red Hat Insights Cost Management does cost calculation, and it calculates exactly how much each pod is costing, no matter what ratios, node sizes, or discounts you may have.

It looks at what nodes are running on each cluster and how much each node is costing (it reads the actual cost from your cloud bill, including any discounts you may have), then it looks at which node(s) each pod is running on, and then it calculates how much each pod on each node is costing.

https://docs.redhat.com/en/documentation/cost_management_ser...

It's free for Red Hat customers, both for cloud costing (AWS, Azure, GCP, OCI) and OpenShift costing. No support for EKS, AKS or other third-party Kubernetes, though.

https://access.redhat.com/products/red-hat-cost-management

https://console.redhat.com/openshift/cost-management/

[+] TheDong|1 year ago|reply
I believe for AWS, they use these ratios: https://github.com/opencost/opencost/blob/c2de805f66d0ba0e53...

So in your example, 6 CPU + 16 GiB weighs roughly 2x more than 2 CPU + 16 GiB, so if that node cost, say, $6/hr, you'd expect roughly $4 to be allocated to the first and $2 to the second.

They have these weights for various clouds here: https://github.com/opencost/opencost/tree/c2de805f66d0ba0e53...

I'm sure someone will correct me if I'm wrong here, I'm not actually familiar with opencost, don't trust what I'm saying.

[+] thewisenerd|1 year ago|reply
would like to know if someone's got a more objective approach;

what we currently do is just a maxOf;

take CostPerGB (memory) and CostPerCore (CPU); then costPerPod = max(pod.requests.cpu * CostPerCore, pod.requests.memory * CostPerGB)

on an overall basis, this was ~20% off from actuals when we last checked, so we clearly call out that the costs are "indicative" and not exact.
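That max-of-dimensions heuristic is tiny in code. Rates here are placeholders, not the parent's actual numbers:

```python
# Charge a pod for whichever resource dimension is more expensive,
# as described above. Rates are illustrative placeholders.

COST_PER_CORE = 0.03   # $/vCPU-hr (placeholder)
COST_PER_GB   = 0.004  # $/GB-hr   (placeholder)

def cost_per_pod(cpu_request, mem_request_gb):
    return max(cpu_request * COST_PER_CORE,
               mem_request_gb * COST_PER_GB)

# A memory-heavy pod is billed on its memory dimension:
print(cost_per_pod(2, 16))   # max(0.06, 0.064)
```

Since each pod is billed on only one dimension, the per-pod sums don't reconcile exactly against the node bill, which is consistent with the ~20% gap mentioned above.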

[+] psibi|1 year ago|reply
I was going through the OpenCost documentation, which this project uses, and it looks like you need to set up AWS Athena if you want the cloud cost to be displayed for AWS: https://www.opencost.io/docs/configuration/aws#aws-cloud-cos...

Does Athena do the actual processing/computation of the costs? What is the usual cost of running Athena?

It also seems strange that I have to put IAM keys into secrets instead of using an IAM role for the service account to configure it.

[+] jarito|1 year ago|reply
The Cost and Usage Report (CUR) from AWS is just a fine-grained listing of all the resources in your account and their cost. It can be dumped out on different schedules (hourly, daily, monthly) and in different formats (CSV, Parquet).

It is pretty common to configure the CUR files to be dumped into your S3 account and query them via Athena. Athena is billed as $ per TB scanned ($5 last time I looked), so the cost will be based on how often the data is being queried. Downside is that each query can take quite a while to execute depending on data size.

The other common option is to ingest the CUR data into Redshift which gives you better control / options for performance, manipulation, etc. but requires that you set up and manage Redshift.

Hard to tell exactly what the Athena cost here would be, as it depends on the number of assets in the account and the frequency with which you query the CUR. However, you can run quite a few Athena queries on CUR data for most AWS use cases without incurring much cost. Unless you have a rapidly changing environment (e.g. hundreds of thousands of assets turning over daily) or just tons of standing assets, you should be safe to assume hundreds of queries a day at most? Probably much less for most use cases. This assumes they query once and store the results rather than querying in real time all the time, normal usage patterns, etc.
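Back-of-the-envelope, the Athena side is just $/TB-scanned arithmetic. The rate is the $5/TB figure quoted above; the data sizes are guesses, and you should check current AWS pricing:

```python
# Rough monthly Athena cost estimate for querying CUR data.
# $5/TB scanned is the figure mentioned above; sizes are assumptions.

PRICE_PER_TB = 5.00   # $ per TB scanned (as quoted above)

def monthly_athena_cost(gb_scanned_per_query, queries_per_day, days=30):
    tb_scanned = gb_scanned_per_query * queries_per_day * days / 1024
    return tb_scanned * PRICE_PER_TB

# e.g. a 2 GB Parquet CUR scanned once an hour:
print(round(monthly_athena_cost(2, 24), 2))   # about $7/month
```

Even hourly queries over a few GB of Parquet stay in single-digit dollars per month, which matches the "probably much less" estimate above.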

[+] jakepage91|1 year ago|reply
Is the cost shown only the prices incurred post plugin integration or is there a way to show retroactive costs by comparing k8s object creation dates for example?
[+] cudder|1 year ago|reply
So is Headlamp the state of the art in Kubernetes cluster management ever since Mirantis first enshittified Lens and then tucked away its sources?
[+] blixtra|1 year ago|reply
We, the Headlamp project, don't make any claims about being state-of-the-art, as that's hard to define. But we do think Headlamp ranks high on user experience, and we believe the fact that we're a 100% open-source project is a huge plus compared to some other projects in the space.

I think one area where we are rather different from other projects is that Headlamp is not only focused on end users but also on teams looking to build their own Kubernetes UX by leveraging the Headlamp plugin system. Our thinking is that this will foster broader community participation and make Headlamp the most viable project in the space.

If you find that there is anything missing please file an issue and we'll consider it: https://github.com/headlamp-k8s/headlamp/issues/new

[+] alemanek|1 year ago|reply
I really like k9s. Its plugin model is super easy to work with, if a bit constrained.
[+] nopurpose|1 year ago|reply
I didn't try Headlamp, but I moved from Lens to AptKube and have been happy since. It might not be best in class, but it is snappy and doesn't require any cloud accounts.