top | item 45902604

Helm 4.0

167 points | todsacerdoti | 3 months ago | github.com

175 comments

[+] buster|3 months ago|reply
After some work with Kubernetes, I really must say: Helm is a complexity hell. I'm sure it has many features, but most aren't needed and they increase the complexity nonetheless.

Also, please fix the "default" helm chart template, it's a nightmare of options and values no beginner understands. Make it basic and simple.

Nowadays I would very much prefer to just use Terraform for Kubernetes deployments, especially if you use Terraform anyway!

[+] verdverm|3 months ago|reply
Helm is my example of where DevOps lost its way. The insanity of multiple tiers of templating on top of an invisible-character-scoped language... it blows my mind that so many of us just deal with it.

Nowadays I'm using CUE in front of TF & k8s, in part because I have workloads that need a bit of both and share config. I emit tf.json and YAML as needed from a single source of truth.

[+] nullwarp|3 months ago|reply
I don't think I've ever seen a Helm template that didn't invoke nightmares. Probably the biggest reason I moved away from Kubernetes in the first place.
[+] lxe|3 months ago|reply
Infrastructure as code should from the beginning have been through a strict typed language with solid dependency and packaging contract.

I know that there are solutions like CDK and SST that attempt this, but because the underlying mechanisms are not native to those solutions, it's simply not enough, and the resulting interfaces are still way too brittle and complex.

[+] jadbox|3 months ago|reply
I don't think I want to use kubernetes (or anything that uses it) again. Nightmare of broken glass. Back in the day Docker Compose gave me 95% of what I wanted and the complexity was basically one file with few surprises.
[+] e12e|3 months ago|reply
I only wish terraform was more recognized by upstream projects, like postgres, tailscale, ingress operators.

A one-time migration from kubectl YAML or Helm to Terraform is doable - but syncing upstream updates is a chore.

If terraform (or another rich format) was popular as source of truth - then perhaps helm and kubectl yaml could be built from a terraform definition, with benefits like variable documentation, validation etc.

[+] vbezhenar|3 months ago|reply
I've embraced kustomize and I like it. It's simple enough and powerful enough for my needs. A bit verbose to type out all the manifests, but I can live with it.
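For anyone who hasn't tried it, the Kustomize flow is plain manifests plus per-environment overlays — a minimal sketch (file names and resource names are illustrative):

```yaml
# base/kustomization.yaml
resources:
  - deployment.yaml
  - service.yaml

# overlays/prod/kustomization.yaml
resources:
  - ../../base
patches:
  - target:
      kind: Deployment
      name: web
    patch: |-
      - op: replace
        path: /spec/replicas
        value: 5
```

`kubectl apply -k overlays/prod` renders and applies it — no templating, just YAML merged with YAML, and every file along the way is lintable.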
[+] Hamuko|3 months ago|reply
Incidentally, Terraform is the only way I want to use Helm at all. Although the Terraform provider for Helm is quite cumbersome to use when you need to set values.
[+] ctm92|3 months ago|reply
Kustomize with ArgoCD is my go to
[+] timiel|3 months ago|reply
Do you have any resources regarding using TF to handle deployments?

I’d love to dig a bit.

[+] dev_l1x_be|3 months ago|reply
Could you explain this a bit? Is Helm an optional part of the k8s stack?
[+] zdw|3 months ago|reply
Helm is truly a fractal of design pain. Even the description as a "package manager" is a verifiable lie - it's a config management tool at best.

Any tool that encourages templating on top of YAML, in a way that prevents the use of tools like yamllint on them, is a bad tool. Ansible learned this lesson much earlier and changed syntax of playbooks so that their YAML passes lint.
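Concretely, a typical chart template isn't YAML at all until it's rendered, so YAML tooling can't touch it:

```yaml
# templates/deployment.yaml -- not parseable as YAML before rendering
spec:
  replicas: {{ .Values.replicaCount }}
  {{- if .Values.podAnnotations }}
  template:
    metadata:
      annotations:
        {{- toYaml .Values.podAnnotations | nindent 8 }}
  {{- end }}
```

yamllint chokes on the `{{ }}` lines, and one wrong `nindent` produces output that parses fine but means something else entirely.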

Additionally, K8s core developers don't like it and keep inventing things like Kustomize and similar that have better designs.

[+] torginus|3 months ago|reply
Imho, anyone who thought putting 'templating language' and 'significant whitespace' together is a good idea deserves to be in the Hague
[+] lucyjojo|3 months ago|reply
we use cue straight to k8s resources. it made life way better.

but we don't have tons of infra so no idea how it would run for big thousands-of-employees corps.

[+] honkycat|3 months ago|reply
Helm sucks.

Helm, and a lot of devops tooling, is fundamentally broken.

The core problem is that it is a templating language and not a fully functional programming language, or at least a DSL.

This leads us to the mess we are in today. Here is a fun experiment: Go open 10 helm charts, and compare the differences between them. You will find they have the same copy-paste bullshit everywhere.

Helm simply does not provide powerful enough tools to develop proper abstractions. This leads to massive sprawl when defining our infrastructure. This leads to the DevOps nightmare we have all found ourselves in.

I have developed complex systems in Pulumi and other CDKs: 99% of the text just GOES AWAY and everything is way more legible.

You are not going to create a robust solution with a weak templating language. You are just going to create more and more sprawl.

Maybe the answer is a CDK that outputs helm charts.

[+] cryptonector|3 months ago|reply
Ok, thought experiment: why not use the k8s JSON interfaces and use jq to generate/template your deployments/services/statefulsets/argo images/etc.?

You say you want a functional DSL? Well, jq is a functional DSL!
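To make the thought experiment concrete — a sketch of generating a Deployment with jq (all names and parameters are illustrative; assumes jq is installed):

```shell
# Build a Deployment manifest from parameters, functionally, no templating.
jq -n --arg name web --arg image "nginx:1.27" --argjson replicas 3 '{
  apiVersion: "apps/v1",
  kind: "Deployment",
  metadata: { name: $name },
  spec: {
    replicas: $replicas,
    selector: { matchLabels: { app: $name } },
    template: {
      metadata: { labels: { app: $name } },
      spec: { containers: [{ name: $name, image: $image }] }
    }
  }
}'
```

Pipe it straight to `kubectl apply -f -`; shared label blocks can be factored into jq `def` functions instead of copy-pasted.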

[+] jarym|3 months ago|reply
So many people complaining about Helm but I'll share my 2 experiences. At my last 2 companies we shipped Helm charts for administrators to easily deploy our stuff.

It worked fine and was simple enough which is what the goal was. But then people came along wanting all sorts of customisations to make the chart configurable to work in their environments. The charts ended up getting pretty unwieldy.

Helm is a product that serves users who like customization to the nth-degree. But everyone else hates it.

Personally, I would prefer it if the 'power users' just got used to forking and maintaining their own charts with all the tweaks they want. The reason they don't do that of course is that it's harder to keep up with updates - maybe that's the problem that needs solving.

[+] btown|3 months ago|reply
I recently learned about Helmfile's support for deep declarative patching of rendered charts, without requiring full forks with value-template-wiring. It's been a gamechanger!

https://helmfile.readthedocs.io/en/latest/advanced-features/...

In your context, it might help certain clients. It does require that the upstream commit to not changing its architecture, but if the upstream is primarily bumping versions and adding backwards-compatible features, and if you document all the patches you're recommending in the wild, it might be an effective tool.
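For reference, the feature is per-release patches applied to the rendered output — a sketch with chart and field names illustrative (check the helmfile docs for the exact keys):

```yaml
# helmfile.yaml
releases:
  - name: some-vendor-app
    chart: vendor/some-app
    version: 1.2.3
    # Patch the rendered manifests instead of forking the chart
    strategicMergePatches:
      - apiVersion: apps/v1
        kind: Deployment
        metadata:
          name: some-app
        spec:
          template:
            spec:
              nodeSelector:
                role: workers
```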

[+] ojhughes|3 months ago|reply
Helm shines when you’re consuming vendor charts (nginx-ingress, cert-manager, Prometheus stack). It’s basically a package manager for k8s. Add a repo, pin a version, set values, and upgrade/rollback as one unit. For third-party infra, the chart’s values.yaml provides a fairly clean and often well documented interface
[+] sprior|3 months ago|reply
I have several Docker hosts in my home lab as well as a k3s cluster and I'd really like to use k3s as much as possible. But when I want to figure out how to deploy basically any new package they say here are the Docker instructions, but if you want to use Kubernetes we have a Helm chart. So I invariably end up starting with the Docker instructions and writing my own Deployment/StatefulSet, Service, and Ingress yaml files by hand.
[+] jasonvorhe|3 months ago|reply
Helm is the number 1 reason I'm looking to leave behind my DevOps/SRE job. Basically every job or project I accept involves working with helm in some capacity and I'm just tired of working with mostly garbage helm charts, especially big meta-charts or having to fork a chart to add a config parameter value override somewhere. Debugging broken chart installs or incomplete upgrades is also nothing but pain. Most helm charts remind me of working with ansible-galaxy roles around ~2015.
[+] smetj|3 months ago|reply
Came here to feel the temperature of the comments, and unsurprisingly, most folks seem to have plenty of gripes with Helm.

A Helm chart is often a poorly documented abstraction layer that makes it impossible to relate the managed application's original documentation back to the Helm chart's "interface". The number of times I had to grep through the templates to figure out how to access a specific setting ...

[+] greenwallnorway|3 months ago|reply
Can I hear from those of you who have had a good IAC experience? What tools worked well?
[+] vxvrs|3 months ago|reply
As someone who started out with Helm and has not used any of its alternatives, I had no idea how hated it is. Maybe it's just because of how I use it, but once I got the hang of the template charts I don't feel like I'm running into any hurdles while using it.
[+] mt42or|3 months ago|reply
Amazing how people are complaining while proposing shit solutions. Seems like nobody is doing infra seriously there.
[+] solatic|3 months ago|reply
Most people in this thread, it seems, just want a simple way to manage Kubernetes manifests, something that keeps track of different settings for different environments and what's in common for each environment in order to generate the final manifests for an environment. If so, Helm is over-engineered for your use-case. Stick with Kustomize or jsonnet.

Helm's contribution (as horrible as text templating on YAML is) is, yes, to be a package manager. Part of a Helm chart includes jobs ("hooks") that can be run at different stages (pre-install, pre-upgrade, etc.) as well as a job to run when someone runs "helm test", and a way to rollback changes ("helm rollback"), which is more powerful than just rolling back a Deployment, because it will rollback changes to CRDs, give you hooks/jobs that can run pre- and post-rollback, etc.
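The hooks are just plain manifests with annotations — e.g., a pre-upgrade migration Job (names and image are illustrative):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: db-migrate
  annotations:
    "helm.sh/hook": pre-upgrade
    "helm.sh/hook-weight": "0"
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: migrate
          image: myapp:1.2.3
          command: ["./migrate"]
```

Helm runs this before applying the upgraded manifests and deletes it on success — the kind of lifecycle step plain manifest generators don't give you.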

Helm charts are meant to be written by someone with the relevant skills sitting next to the developers, so that it can be handed off to another team to deploy into production. If that's not your organization or process, or if your developers are giving your ops teams Docker images instead of Helm charts, you're probably over-engineering by adopting it.

[+] aduwah|3 months ago|reply
CRDs in Helm are such a freaking nightmare! You want a clean install because you are in a hole? No worries, let's remove all the CRDs and delete/create everything else relying on them. Separating the two (CRDs and other objects) is a solution, but then you have a bastardized thing to maintain that doesn't track upstream.

Also, I cannot count how many times I had to double/triple run charts because CRDs were in a circular dependency. In a perfect world this wouldn't be an issue, but if you want to be a user of an upstream chart it's a pain.

[+] hylaride|3 months ago|reply
The core problem, I think, is that K8s is overly complicated for 95% of deployments out there, but it's become the default standard.

People then start creating tooling to mask some of the complexity, but then said tooling grows to support the full K8s feature set and then we're back to square one.

Because the rush to K8s was so fast (and arguably before it was ready) the tooling often became necessary.

> Helm charts are meant to be written by someone with the relevant skills sitting next to the developers.

That makes sense for large organizations, but it still gets complicated depending on how your service plugs into a greater mesh of services.

I currently treat helm the same way I treat Cloudformation on AWS (another horrid thing to deal with). If some third party has it so that I can easily take the template and launch it, then great. I don't want to go any further under the hood than that.

[+] annexrichmond|3 months ago|reply
Helm is the necessary evil we got because Kubernetes chose YAML.
[+] vibe_assassin|3 months ago|reply
I am a fan of helm. The templating language can be pretty ugly sometimes, but your helm charts can be as simple as you want, and the basic functionality along with dependencies work fine. Some of my helm charts are basically just straight K8s manifests with minimal templating. I like helm because it lets me encapsulate a deployment in a single package, if I want to add some insane templating logic, that's on me.
[+] JohnMakin|3 months ago|reply
> CLI Flags renamed

> Some common CLI flags are renamed:

> --atomic → --rollback-on-failure

> --force → --force-replace

> Update any automation that uses these renamed CLI flags.

I wish software providers like this would realize how fucking obnoxious this is. Why not support both? Seriously, leave the old, create a new one. Why put this burden on your users?

It doesn't sound like a big deal but in practice it's often a massive pain in the ass.
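Until then, the usual workaround is a thin wrapper that maps the old flags back — a hypothetical sketch, not anything Helm ships:

```shell
# Map renamed Helm 3 flags to their Helm 4 equivalents before invoking helm.
helm_compat() {
  i=0
  n=$#
  while [ "$i" -lt "$n" ]; do
    arg=$1
    shift
    case "$arg" in
      --atomic) set -- "$@" --rollback-on-failure ;;
      --force)  set -- "$@" --force-replace ;;
      *)        set -- "$@" "$arg" ;;
    esac
    i=$((i + 1))
  done
  helm "$@"
}
```

Drop it in CI and existing automation keeps working, e.g. `helm_compat upgrade --atomic myrelease ./chart`.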

[+] webcoon|3 months ago|reply
And it STILL uses text-based Go templates instead of a proper language based on structured input and output? This was always my main pain point with Helm, and of many others I talked to. This major upgrade was years in the making and they couldn't add support for a single one of the many available options like CUE, Jsonnet, or KCL? What an utter waste.
[+] beefnugs|3 months ago|reply
nightmares (if anything went wrong I had to blow the Helm stuff away and start over) on top of nightmares (Kubernetes when I was trying it was tons of namespaces called beta, then you never knew what to update to or when you had to update, or what was incompatible) on top of the realization that no one should be using Kubernetes unless you have over 50 servers running many hundreds of services. Otherwise it's just a million times simpler using Docker Compose
[+] mch82|3 months ago|reply
Can you recommend any articles about minimum scale necessary to make Kubernetes worth it?
[+] sureglymop|3 months ago|reply
I really don't like helm. I think we have arrived at abstraction over abstraction over abstraction.

The last project I was involved with used kustomize for different environments, flux to deploy, and Helm consuming a chart that took in a list of ConfigMaps via "valuesFrom". Not only does kustomize template and merge YAML together, but the valuesFrom mechanism does too, only at "runtime" in the cluster.

There's just not a single chance to get any coherent checking/linting or anything before deployment. I mean how could a language server even understand how all this spaghetti yaml merges together? And note that I was working on this as a developer in a very restricted environment/cluster.

Yaml is too permissive already, people really start programming with it. The thing is, kubernetes resources are already an abstraction. That's kind of the nice thing about it, you can create arbitrary resources and kubernetes is the management platform for them. But I think it becomes hairy already when we create resources that manage other resources.

And also, sure some infrastructure may be "cattle" but at some point in the equation there is state and complexity that has to be managed by someone who understands it. Kubernetes manifests are great for that, I think using a package manager to deploy resources is taking it too far. Inevitably helm charts and the schema of values change and then attention is needed anyway. It makes the bar for entry into the kubernetes ecosystem lower but is that actually a good thing for the people who then fall into it without the experience to solve the problems they inevitably encounter?

Sorry for the rant but given my second paragraph I hope there is some understanding for my frustrations. Having all that said, I am glad they try to improve what has established itself now and still welcome these improvements.

[+] sgarland|3 months ago|reply
> I think we have arrived at abstraction over abstraction over abstraction.

> The thing is, kubernetes resources are already an abstraction.

Your first comment was more accurate - they’re heavily nested abstractions.

A container represents a namespace with a limited set of capabilities, resources, and a predefined root.

A Pod represents one or more containers, and pulls the aforementioned limitations up to that level.

A ReplicaSet represents a given generation of a set number of Pods.

A Deployment represents a desired number of Pods, and pulls the ReplicaSet abstraction up to its level to manage the stated end state (and also manages their lifecycle).

I think most infra-adjacent people I’ve worked with who use K8s could accurately describe these abstractions to the level of a Pod, but few could describe what a container actually is.

> It makes the bar for entry into the kubernetes ecosystem lower but is that actually a good thing for the people who then fall into it without the experience to solve the problems they inevitably encounter?

It is not a good thing, no. There is an entire generation of infra folk who have absolutely no clue how computers actually work, and if given an empty bare metal server connected to a LAN with running servers, would be unable to get Linux up and running on the empty server.

I am not against K8s, nor am I against the cloud - I am against people using abstractions without understanding the underlying fundamentals.

The counter to this argument is always something along the lines of, “we build on abstractions to move faster, and build more powerful applications - you don’t need to understand electron flow to use EC2.” And yes, of course there’s a limit; it’s probably somewhere around understanding different CPU cache levels to be well-rounded. However, IME at the lower levels, the assumption that you don’t need to understand something to use it doesn’t hold true. For example, if you don’t understand PN junctions, you’re probably going to struggle to effectively use transistors. Sure, you could know that to turn a silicon BJT transistor on, you need to establish approximately 0.7 VDC between its base and emitter, but you wouldn’t understand why it’s much slower to turn off than to turn on, or why thermal runaway happens, etc.

[+] nullify88|3 months ago|reply
Running my home lab, I've grown sick of constant Renovate PRs against the Helm charts in use. I recall one "minor" CoreDNS update not long ago that messed with the exposed ports in the Service, and installs broke for a lot of folks. If I need to run some software now, I `helm template` the resources and commit those to git. I'm so tired of some random "Extended helm chart to customise labels / annotations in $some resource" change notes. Traefik and Cilium are the only Helm charts I use; the rest I `helm template` into my gitops repo, customize, and forget.

At Dayjob in the past, we've debugged various Helm issues caused by the internal sprig library used. We fear updating Argo CD and Helm for what surprises are in store for us and we're starting to adopt the rendered manifests pattern for greater visibility to catch such changes.