Adobe is facing a “tectonic technology shift” driven by AI and an “explosion of content creation,” said CEO Shantanu Narayen on its most recent earnings call.

Under the hood – and even before the demands placed on it by AI workloads – Adobe’s suite of software needs a lot of flexible infrastructure to support it.

Powering a big proportion of its computing needs are 450+ Kubernetes clusters that underpin some 4,000 Adobe applications or services.

Across these clusters, Adobe spins up approximately four million containers per day in over 30 regions, both on-premises and across multiple clouds.

“We’re platform first”

Speaking at AWS re:Invent about how Adobe manages this fleet without breaking its engineers or its software, Adobe’s John Weber said that a multicloud approach had the potential to be a “cake of complexity.”

Weber, a senior director at the company who oversees engineering across all three Adobe Clouds (Creative, Experience, and Document), said Adobe’s focus has been on abstracting away as much of that complexity as possible and automating widely, with a heavy emphasis on GitOps.

(GitOps is an operational framework that uses Git as the single source of truth for managing infrastructure and application configurations…)
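The core of that loop can be sketched in a few lines: a reconciler diffs the desired state committed to Git against what is actually running, and emits the actions needed to converge the two. The service names and state shape below are illustrative, not anything Adobe-specific.

```python
# Minimal sketch of the GitOps reconcile loop: Git holds the desired
# state; a controller diffs it against the live cluster and converges.
# All names and specs here are made-up examples.

def reconcile(desired: dict, live: dict) -> list[str]:
    """Return the actions needed to make `live` match `desired`."""
    actions = []
    for name, spec in desired.items():
        if name not in live:
            actions.append(f"create {name}")
        elif live[name] != spec:
            actions.append(f"update {name}")
    for name in live:
        if name not in desired:
            actions.append(f"delete {name}")
    return actions

# Desired state, as it would be committed to a Git repository:
desired = {"web": {"replicas": 3}, "worker": {"replicas": 2}}
# Live state, as reported by the cluster:
live = {"web": {"replicas": 2}, "legacy": {"replicas": 1}}

print(reconcile(desired, live))
# -> ['update web', 'create worker', 'delete legacy']
```

A reconciler such as Argo CD runs this comparison continuously, so a merged pull request is all it takes to trigger a change.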

See also: Platform Engineering: Lessons, queries from the First Hype Cycle

Referring to Adobe’s “Ethos” cloud-native platform/set of principles, he described it as “the dial tone for the entire fabric of the company.”

Multicloud at Adobe, he said, is not approached as a high-availability construct. (After all, “replicating state within one cloud provider, let alone two, is inherently difficult.”) Instead, Adobe deploys based on strict "fit for purpose" criteria: data residency requirements, security attestations, or proximity to strategic partners, he told an audience on Monday evening.
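That “fit for purpose” routing can be pictured as a simple constraint match: the service declares requirements such as data residency, and the platform, not the developer, picks a target. The clusters, criteria, and names below are hypothetical examples, not Adobe’s actual placement logic.

```python
# Hypothetical "fit for purpose" placement: services declare constraints,
# the platform resolves them to a concrete cluster.

CLUSTERS = [
    {"name": "aws-us-east-1", "provider": "aws", "region": "us"},
    {"name": "azure-eu-west", "provider": "azure", "region": "eu"},
    {"name": "onprem-eu-dc1", "provider": "onprem", "region": "eu"},
]

def place(service: dict) -> str:
    """Pick the first cluster satisfying the service's declared constraints."""
    for cluster in CLUSTERS:
        # A missing constraint means "no preference" for that dimension.
        if service.get("data_residency", cluster["region"]) != cluster["region"]:
            continue
        if service.get("provider", cluster["provider"]) != cluster["provider"]:
            continue
        return cluster["name"]
    raise ValueError("no cluster satisfies the declared constraints")

# The developer declares data residency; the platform decides the target.
print(place({"data_residency": "eu"}))  # -> azure-eu-west
```

The point of the sketch is the direction of control: the deploying team states requirements declaratively and never names a cloud provider directly.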

Death to the Console: The GitOps Standard

Adobe has aggressively adopted GitOps. The days of logging into an AWS or Azure console to manually provision resources are effectively over, he said.

“We use declarative infrastructure to define cluster configurations, network policies, versions and resource definitions, which are all managed as code.” – Adobe director John Weber

The architecture relies on a "single source of truth" stored in Git repositories. Humans edit YAML manifests; machines do the rest. Adobe uses Argo CD to reconcile the state of the cluster with the code in the repository.
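Because those manifests live in Git, policy can be enforced at pull-request time, before anything reaches the reconciler. A minimal sketch of such a CI lint step follows; the specific rules (pinned image tags, explicit replica counts) are illustrative, not Adobe’s actual policies.

```python
# Hedged sketch: in a GitOps flow, CI can lint every manifest in the
# repository before merge, so invalid config never reaches the cluster.

def lint_manifest(manifest: dict) -> list[str]:
    """Return policy violations for one declarative manifest."""
    errors = []
    name = manifest.get("name", "?")
    image = manifest.get("image", "")
    if ":" not in image or image.endswith(":latest"):
        errors.append(f"{name}: image tag must be pinned")
    if "replicas" not in manifest:
        errors.append(f"{name}: replicas must be explicit")
    return errors

manifests = [
    {"name": "web", "image": "registry.example.com/web:1.4.2", "replicas": 3},
    {"name": "worker", "image": "registry.example.com/worker:latest"},
]

for m in manifests:
    for err in lint_manifest(m):
        print(err)
# -> worker: image tag must be pinned
# -> worker: replicas must be explicit
```

Gates like this are what make it safe to let machines, rather than humans, carry changes the rest of the way.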

"It is impossible at scale to have humans signal between events to say, 'Hey, I'm ready for the next step,'" Weber said. "You need to have GitOps-based workloads that reduce manual errors and accelerate your own delivery."

This abstraction layer is critical. "We want our developers to integrate and manage cloud native infrastructure, not with SDKs or proprietary APIs," he added.

And by standardising on CI/CD pipelines, Adobe ensures that a dev shipping code doesn't need to know or care whether their application is landing in an AWS region in Virginia or a private data centre in Europe.

“Our emphasis is really on platform engineering at Adobe, which means we treat all cloud providers, including our own data center, as a first-class citizen. By doing that [you] enable developers to do what they want to do, ship code as fast as possible… we want our developers to integrate and manage cloud-native infrastructure using uniform CI/CD pipelines. 

“Finally, we want our developers to focus on delivering business value, not necessarily figuring out how to apply specific security or compliance attestations which are unique to each specific region or cloud provider.”

Under the Bonnet: Open Source and CNCF

Adobe’s platform engineering team leaned heavily into the Cloud Native Computing Foundation (CNCF) ecosystem, he told the audience at re:Invent.

The stack includes Helm for package management, OpenTelemetry for observability, and Cluster API (CAPI) for declarative cluster lifecycle management.

Adobe also uses AWS Controllers for Kubernetes (ACK) to manage cloud-specific primitives, Weber said. This open-source reliance has created a virtuous cycle, allowing internal teams to augment the platform themselves. He pointed to Adobe Firefly: the generative AI engine’s team contributed a custom pod autoscaler back to the main codebase.
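The mechanics of a pod autoscaler can be shown in miniature: scale the replica count so that observed load per pod approaches a target. This is a generic sketch of the technique (the formula mirrors the one Kubernetes’ Horizontal Pod Autoscaler documents), not the Firefly team’s actual contribution, whose details Weber did not share.

```python
# Generic pod-autoscaler decision logic, for illustration only.
import math

def desired_replicas(current: int, observed_load: float,
                     target_per_pod: float,
                     min_replicas: int = 1, max_replicas: int = 50) -> int:
    """HPA-style formula: ceil(current * observed-per-pod / target), clamped."""
    per_pod = observed_load / current
    proposed = math.ceil(current * per_pod / target_per_pod)
    return max(min_replicas, min(max_replicas, proposed))

# 4 pods handling 1,000 requests/s against a 100 req/s per-pod target:
print(desired_replicas(current=4, observed_load=1000, target_per_pod=100))
# -> 10
```

A custom autoscaler differs mainly in *which* metric it watches – GPU queue depth, say, rather than CPU – while the scaling arithmetic stays much the same.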

The Fear of Deletion

Perhaps the most telling sign of Adobe’s maturity, he suggested, is how it handles the destruction of infrastructure. In many enterprises, "zombie clusters" are left running simply because operations teams are terrified of deleting the wrong thing. But automating the "delete" function via GitOps was a significant psychological and technical hurdle, he admitted.

"Wait a minute, you can actually delete infrastructure via pull requests. Yes, you can. But it took us many, many, many quarters to get to this level of maturity," Weber said – flagging the obvious risks of automated deletion.

To mitigate them, Adobe implemented a range of rigorous software-based guardrails, including input validation and checks for active ingress routes, before the machinery is allowed to terminate a cluster.
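Such pre-delete guardrails amount to a gate the pipeline must pass before tearing anything down. The sketch below is a hedged illustration: the article mentions input validation and active-ingress checks, but the specific rules, field names, and the "ethos-" naming convention here are invented.

```python
# Illustrative pre-delete guardrails: refuse to tear down a cluster
# that still looks alive or lacks an approved change in Git.

def safe_to_delete(cluster: dict) -> tuple[bool, str]:
    """Run guardrail checks before the machinery may terminate a cluster."""
    if not cluster.get("name", "").startswith("ethos-"):
        return False, "name fails input validation"
    if cluster.get("active_ingress_routes", 0) > 0:
        return False, "cluster still serves live traffic"
    if not cluster.get("deletion_approved_in_git", False):
        return False, "no merged pull request authorising deletion"
    return True, "ok"

# A cluster with live ingress routes is refused, whatever Git says:
print(safe_to_delete({"name": "ethos-eu-42",
                      "active_ingress_routes": 3,
                      "deletion_approved_in_git": True}))
# -> (False, 'cluster still serves live traffic')
```

Only when every check passes does the pull request that requested the deletion actually translate into destroyed infrastructure.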

"Many, many organisations forget about this most important piece of the life cycle, which is deletion," Weber warned. "Because if you don't delete, you're going to run into sprawl, waste and security issues."

The shift to a heavily automated, declarative model has delivered measurable efficiency gains. It isn't just about cleaner architecture; it is about shipping speed: “With this type of system, we’re able to deploy changes across this fleet three times faster,” Weber said, pulling up what he admitted somewhat apologetically was a “typical executive scoreboard.”

Additionally, he said, the company has cut provisioning time by 25% and now completes full Kubernetes cluster upgrades twice as fast as before. 

For engineering leads looking to replicate this model, Weber offered a final piece of advice on balancing standardisation with flexibility: “Cookie-cutter scales. But when I go to a bakery, I see lots of different cookies that I want to eat… every system has its own unique challenge and business requirements. You need to be ready to have repeatable stamp setups.”

Ultimately, the success of the platform comes down to treating the internal developer as a customer, and, despite the hate it gets, "Kubernetes is your friend. And with any friendship, you need to invest," Weber concluded. "If you don't invest, [you are] probably not gonna have a friend anymore."
