Taking six years to get to Helm 4 in November doesn’t mean the package manager has been moving slowly or making conservative choices: it’s a testament to how many features the project could add over the years without making the API-breaking changes that require a major version.

Although Helm 4 brings some welcome improvements that add stability and security to production Kubernetes environments – along with a whole new plugin system that opens the project up to experimentation without making the core less stable – it’s the smaller and less visible changes that really explain the new version number. 

“Even though Kubernetes has changed and added a lot [in that time], it’s worked with Helm and we’ve just been adding features [all] along,” Helm maintainer (and SUSE distinguished engineer) Matt Farina told The Stack. 

In fact, Helm 3 is still supported and still getting updates: “Helm 3.20 is about to come out because we've been able to incrementally add features. But even though it fit very nicely in with Kubernetes, over time you build up a certain amount of debt that you want to be able to clean up.” 

Changes under the hood

That cleanup tends to cause some (mostly unwarranted) anxiety in the community. Because so many people rely not just on Helm charts but on the SDK and CLI that allow them to build tools and scripts on top of Helm, the project is very conservative about not breaking the APIs they use. But that also blocked some new directions for the project, including aligning with changes in Kubernetes itself. 

“We've had a number of features people asked for where we would say ‘that’s a great idea, but some of the internal architecture just doesn't let us do that well’. We want to enable ourselves to do some of these new things and the only way to do that is to make some breaking changes internally,” Farina explained.

That’s the same thing that happened with Helm 3, when some previous design decisions had to be changed to work better with CRDs and custom controllers. This time it includes longstanding requests like structured logging that’s easier to filter and search with tools like Flux, and adding colour to command-line output. That sounds trivial but improves usability by making it faster to spot errors. “We couldn’t [add it before] because if you run ‘sed’ across the output, it might break it because of the escape characters,” Farina explained. 

Releasing a major update gave the team permission to make those breaking changes as well as cleaning up some cruft, Farina added. “We had a bunch of options that should have been the default; we made those the default. There were some flags we removed because they didn't need to be there anymore.”

Helm 4 no longer supports Kubernetes 1.15 and earlier – the ten-year-old version that the first version of Helm was built for. “There were a bunch of experimental APIs that still worked in Helm. We finally took them out!”

Making upgrades predictable

One much-requested feature in Helm 4: declarative configuration management. Instead of using its own ‘three-way merge’ for updates – which can cause chaos if a Helm chart, a Kubernetes operator and a manual command are all trying to manage the same resource and make conflicting changes – Helm now uses Kubernetes’ native ‘server-side apply’ to manage control of resources and fields.

That’s the way GitOps controllers like Argo and Flux already work as well. Instead of silently overriding other changes, Helm will now give you an error if there’s a conflict between multiple attempts to update the same resource.

“We didn’t bring that in before because it would have broken some experiences,” Farina explained. The configuration management is also designed to handle charts that still use three-way merge; while it’s now the default, operations teams can choose when to update existing charts. 

“We’ve tried to make it smart enough that if you're installing something new, it'll use server side apply, but if you've had something you've been going along with that hasn't been server side applied, it won't automatically do it, because between local apply and server side apply, there could be some changes, and we don't want an upgrade to accidentally break something by switching it over.”

Really ready

When you tell Helm 3 to wait until Kubernetes is ready to install a chart, it waits for various resources to say they’re ready – like a database your application depends on that’s being installed by another chart. But ‘ready’ doesn’t necessarily mean that the resource is actually operational (the database might be running but not have loaded the schema you need) and you don’t get an explanation of any failures. 

Helm 4 now uses the standard Kubernetes kstatus library, which wasn’t available when Helm 3 came out and knows about the health of a lot more Kubernetes resources. Deployments involving multiple resource groups and subcharts can also be more granular, so you won’t have race conditions, like an application starting before its database does. That means Helm charts will be better for managing complex, stateful apps that might have required moving to operators in the past.

Secure your chart supply chain

Other internal architecture changes enable the new Charts v3 API that will ship in 2026 with an intelligent loader that can handle multiple types of charts. 

Existing charts should keep working but you’ll want to audit them; not just for minor changes like deprecated flags but because Helm 4 already includes new chart handling that allows you to extend key software supply chain security options to charts. 

Chart packaging is now reproducible, making digital signing, SBOMs and SLSA more reliable. “It makes it better for security verification, because your hashes don't change, so you can verify things like provenance and caching becomes a little bit easier,” Farina explained. (Eagle-eyed readers may notice patterns from Bazel showing up in something as widely used as Helm charts.)

Helm already had caching but because it used the chart name and version, collisions were so common that in practice it ignored the cache and always downloaded charts again. Helm 4 has a content-based cache using a hash of what’s in the chart. That will get smarter, Farina promised, but together with reproducible builds it already improves performance. “If I've installed a chart once, we store that. When you go to install it again, we've got it locally; we can speed things up. You don't need to discover it again. You know where it is, and you've got it in your cache.”

Do better with dependencies

Helm 3 already supports RBAC and authentication for repositories and OCI registries. Helm 4 treats OCI registries as the primary way to store and distribute charts; charts that don’t match the digest representing the container image and layers won’t install, so you don’t need to run a separate chart store or worry about the risks of developers grabbing charts from random locations. 

The v3 API will extend that to dependencies as well. “There's a certain ordering to how you want to do things,” Farina pointed out. “Kubernetes is supposed to be eventually consistent: throw a bunch of resources at it, and eventually everything will just come up. But over the years, we've learned this doesn't actually work as well in practice. When Helm sends things to Kubernetes it already knows that, for the core resources, there's a certain order you really should put them in. Let's put them in that order, because you're going to have better success with Kubernetes reconcilers.”

Packaging up dependencies for applications – like the database even a simple web application such as WordPress will always need – might benefit from similar principles. Other tools already offer this, but adding it to Helm means breaking changes for some charts. 

“Helm has a very simple way of doing it, where a version of a dependency pinned at a very specific release is packaged together [with the application] and installed and built in a single sandbox,” he explained. “If you break that model to give people some of these other more flexible dependency handling and ordering features: there isn't a way to do that while retaining those charts that do things in the way the world is today.” 
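The current model Farina describes is visible in any chart’s `Chart.yaml` (the Charts v2 API): each dependency is pinned and bundled with the parent chart. The names, versions and repository URL below are hypothetical, but the structure is the standard one:

```yaml
apiVersion: v2
name: my-webapp            # hypothetical application chart
version: 0.1.0
dependencies:
  - name: mariadb
    version: "11.4.2"      # pinned to one exact release
    repository: https://charts.example.com
```

At `helm dependency update` time that exact version is fetched and packaged into the parent chart’s archive, and the whole bundle installs as one unit – which is precisely the model that has to loosen to support more flexible ordering and dependency handling.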

The v3 API already supports annotations for richer metadata and enables new categories of charts, but moving to a new version also means chart authors can experiment with these new options around ordering and dependency management – without breaking charts that stick to the v2 API.

Tools like Podman already support ‘air gapping’ OCI registries, to make sure your workloads use your approved versions of images and dependencies. “If you have charts and dependencies in OCI registries, it can be difficult to say ‘don't grab everything from the upstream ones, grab them from my cache or my air gapped one’. That can be painful [in Helm],” Farina noted. A future minor release of Helm 4 should improve that by building on what other tools have done to work better with OCI registries.

CRDs remain complex

Helm can’t simply ‘handle’ CRDs the way some users have asked for because as a globally shared resource that Kubernetes does very little to protect, they’re inherently fragile. The Helm team is careful to avoid “building an environment where you can accidentally delete production data through something stupid we've done with Helm”.

Various tools already use Helm as a cluster tool to install controllers and CRDs, or you can put custom resources in a Helm chart to deploy into a cluster. “If you've certain bundles of custom resources you want to install across various clusters, you can package that up as a Helm chart,” Farina pointed out. Feature detection lets you check for CRDs in a cluster. “There’s logic you can do in Helm to detect if CRDs are installed and then deploy custom resources the CRD will work with to bring up features in that cluster, or if they're not installed do something else.”
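That feature-detection logic is expressed in chart templates via Helm’s `.Capabilities.APIVersions.Has` check. The fragment below sketches the pattern for a cert-manager `Certificate`; the issuer name is hypothetical, and a real chart would put something useful in the fallback branch:

```yaml
# templates/certificate.yaml (sketch): only render a custom resource if
# the cluster actually exposes the CRD that defines it.
{{- if .Capabilities.APIVersions.Has "cert-manager.io/v1/Certificate" }}
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: {{ .Release.Name }}-tls
spec:
  secretName: {{ .Release.Name }}-tls
  issuerRef:
    name: my-issuer        # hypothetical issuer
    kind: ClusterIssuer
  dnsNames:
    - example.com
{{- else }}
# Fall back (for example, to a plain self-signed Secret) when the
# cert-manager CRDs are not installed in this cluster.
{{- end }}
```

The same chart can then deploy cleanly into clusters with and without the CRD installed, which is what Farina means by “do something else”.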

But upgrading or deleting CRDs is much more complex because Helm doesn’t know what else might rely on them. “A CRD is a cluster level resource and many people who use clusters, especially in large enterprises, don't have the ability to install them or change them. Sometimes they're in shared clusters used by different teams who don't know what other teams are using those clusters. If you remove it or make breaking changes to the CRD you can affect people who don't even know about each other.”

The typical pattern of uninstalling and reinstalling to fix an application that isn’t working makes that even more complicated. “If it’s a CRD-based chart and the CRD gets deleted, all your data gets deleted too, so to uninstall it and reinstall it, you’ve got to make sure you leave the CRDs around while the containers get removed and come back up.”

The Charts v3 API will improve things, he suggested – but it can’t remove the inherent complexity and fragility of CRDs. “Some of this dependency handling work we'd like to do will make it easier to manage CRD charts separately enough that you can do some more interesting things. But with so many things being namespace based while CRDs are cluster wide, you always have to be careful not to break things, especially in shared clusters.”

Stable but creative

Separating experimentation – which can adopt changing Kubernetes patterns more quickly – from the core functionality people depend on for its stability could help Helm move faster.

“How do we keep Helm stable despite the chaos around it? Extensions can be a little bit more chaotic while the core stays more stable and gives people that foundation they can trust,” Farina explained.

“So much has been built on top of Helm; we don't want to break that ecosystem, and it's so big that we prioritize that over move fast and break things. But you want to enable all these wonderful things done on top of or through or because of Helm and not get in the way of that. There are some people who say we don’t move fast enough, we don’t break enough things; that's where our extensibility can come in. If we can enable a little bit more creativity, people will come up with ideas and be able to test them in ways we didn't imagine.” 

The SDK already enables some experimentation but a new, experimental plugin system will allow for a much wider range of functionality with new options for customising default Helm behaviours that could eventually become part of the project. “When people have ideas, let's give them room to run and if an idea goes really, really well and there's a lot of uptake, we can fold it into core Helm.”

Current ideas for plugins include packaging up the Helm engine so people can try a different one, or extending charts, he suggested. “Are there things you can do with YAML that people are trying to do with Go templating that would make for a better experience?”

The HelmYS project from YAML co-creator Ingy döt Net adds scripting to YAML so you can use functional programming features to generate dynamic configurations in charts, without breaking compatibility – rather than putting complex logic in Helm Go templates, which quickly get long and hard to read. Currently it uses a Helm post-renderer (which can rewrite the YAML Helm turns Go templates into before sending them to Kubernetes), which döt Net explained as “a clumsy but powerful way to extend Helm”. An extension packaging that up with a chart could offer the better experience Farina described.

Extensions could also be a way to use MCP with Helm for AI agents. “You can make Helm accessible via MCP and there are people who've been playing around with how that would work,” Farina said. Putting that in an extension would make it optional and not lock Helm into something that’s still in flux. “AI and MCP are moving at breakneck speeds, but the core of Helm can be more stable.” 

Portable plugins

Existing plugins for Helm will carry on working, but instead of being arbitrary scripts that can implement custom chart logic in whatever way makes sense to the plugin developer, the new plugins are typed, structured code that’s easier to test and manage – written in WebAssembly.

That’s a vote of confidence in WebAssembly as a technology that improves security and performance, IDC research manager Matthew Flug told The Stack. It’s already used in similar ways in Envoy and Kubernetes itself (and elsewhere).

WebAssembly is ideal for plugins because it offers security by default, but also portability. Because they run in a sandbox rather than being a random executable, there are fewer worries about trusting the plugins that give you extra functionality, which means plugins can safely extend more of Helm. “Helm plugins will be inherently isolated unless explicitly granted access to things like specific file paths or network domains via the WASI API,” Flug pointed out.

But because you can compile once to WebAssembly and then run that code on different platforms without modification, this also makes it easier to extend Helm to more platforms and operating systems (like Windows and macOS). “It should reduce the amount of plugin versions there are as WASM is intended to be ‘write once, run anywhere’. It should simplify the development and maintenance of Helm plugins within the ecosystem,” he predicts.

Continuing to deliver

After 10 years, Helm is clearly a mature project, but Helm 4 also marks a moment in the maturity of the wider Kubernetes platform. What’s effectively the default package manager for the ecosystem now offers secure, scalable application management and deployment.

Any inconvenience in auditing charts and updating ones that rely on older patterns is more than made up for by the enhanced infrastructure automation and compliance that a security baseline for charts across the Helm ecosystem – including better software supply chain governance – will bring.

“Helm 4 is important because it changes Helm’s role from a client-side release tool into something that works with Kubernetes’ own delivery model,” CNCF ambassador Jimmy Song told The Stack.

Platform engineers will want to pay attention to the changes, he suggested. “By adopting server-side apply, Helm no longer overwrites resources but cooperates with other controllers and actors, which makes it far safer and more predictable in GitOps and platform-driven environments.”

“At the same time, the rebuilt plugin system turns Helm into an extensible delivery framework rather than a closed tool. Capabilities that were awkward or fragile in Helm 3—like policy enforcement, custom delivery workflows, or deeper platform integration—can now be implemented cleanly without hacks. In practice, this significantly expands what teams can trust Helm to do in production.”

That trust is earned by sticking to what Helm does well, Farina emphasised. “What we try to do is stay on track and say ‘Helm is a package manager’; we know our space, we’re not trying to scope creep to areas where other tools are doing a great job. We stay where we do really well, we look at how the ecosystem changes over time and we just need to make some changes to Helm, to keep the great package manager where the ecosystem is at.”
