The open-source world is no stranger to the "rug pull." But when Redis, the de facto standard for in-memory data caching, swapped its permissive license for a restrictive commercial one in early 2024, the shockwaves reached every corner of the technology world, including those at hyperscale clouds.
Madelyn Olson, a Redis maintainer and AWS engineer, was swift to react. She had been working on the project since 2018, contributing features and ensuring in-transit encryption was enabled upstream.
(“That was causing us a lot of pain, because every time Redis released new features, we had to go and merge those changes into our internal fork," she recalls. "So our goal was to lower that technical debt by upstreaming some of our code to Redis open-source…”)
She has since become one of the primary architects of Valkey. Now a flagship project under the Linux Foundation, Valkey isn’t just a "protest fork" of Redis; it is rapidly becoming a widely adopted and deeply maintained production standard for high-performance caching and real-time data.
The Stack sat down with Olson to discuss the transition, the roadmap, and what comes next.
When the license change hit, a fractured post-Redis OSS landscape briefly looked likely. While forks like KeyDB (multithreaded but lagging in versions) and Redict (focused on copyleft purity) emerged, Valkey won the "Battle of the Forks" by securing the backing of the heavy hitters – AWS, Alibaba, Ericsson, Google, Huawei, and Oracle, among others – whose engineers are now regular maintainers.
"Valkey was a developer-driven fork," Olson explains. "We viewed it as critical to keep the BSD license. Most laypeople don't want to navigate the nuances of LGPL or SSPL – they just want to build. We wanted to continue where we left off without the threat of another license shift." That continuity has paid off. Valkey has surpassed 70 million container pulls, and sees a million a week.
Valkey: What’s under the hood?
For the uninitiated, Valkey (like Redis) is an in-memory key-value database used predominantly as a cache to offload expensive database queries.
(As Olson puts it: “A cache is used to store operations so that you can offload work from a back-end database, so you'll run an expensive back end query on something like a SQL database like Postgres, and then you'll store that result in Valkey. And the reason you do this is Valkey is much faster; all the data is stored in RAM, so it can serve stuff both with very low latency as well as with very high throughput. So the cost per operation is much lower; most large applications are going to be serving a lot of their read traffic through caches.”)
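The cache-aside pattern Olson describes can be sketched in a few lines. This is a toy model, not Valkey itself: a plain dict stands in for the cache, and `slow_sql_query()` is a hypothetical stand-in for an expensive Postgres query.

```python
import time

cache = {}  # stand-in for Valkey: an in-RAM key-value store

def slow_sql_query(user_id):
    """Hypothetical expensive back-end query; sleep stands in for its cost."""
    time.sleep(0.05)
    return {"id": user_id, "name": f"user-{user_id}"}

def get_user(user_id):
    """Cache-aside read: try the cache first, fall back to the database."""
    key = f"user:{user_id}"
    if key in cache:
        return cache[key]           # cache hit: served straight from RAM
    result = slow_sql_query(user_id)
    cache[key] = result             # populate the cache for next time
    return result

first = get_user(42)   # miss: runs the expensive query
second = get_user(42)  # hit: served from the cache
```

In production the dict would be a Valkey client, the cached entry would carry a TTL so stale results expire, and most read traffic would never touch the database at all.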
Olson’s team is also pushing the engine into more complex territory, including serving generative AI and microservices workloads at enterprise scale.
While specialised vector databases exist, Valkey is carving a niche in Vector Similarity Search (VSS) for agentic workloads where latency is the primary bottleneck: "If you’re doing agentic workloads that require queries at every step, accuracy (recall rate) and speed are paramount," says Olson.
(AWS recently claimed that Valkey's vector search on ElastiCache delivers "the lowest latency with the highest throughput and best price-performance at 95%+ recall rate among popular vector databases on AWS.")
By keeping the entire vector dataset in RAM, Valkey avoids the disk-lookup latency inherent in disk-based solutions like PGVector, making it ideal for semantic caching – storing LLM embeddings to avoid redundant, costly foundational model calls. And the community is innovating hard…
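The semantic-caching idea can be illustrated with a minimal sketch: store the embedding of each past question alongside the LLM's answer, and return the cached answer when a new query's embedding is similar enough. The linear scan and hand-written vectors here are illustrative stand-ins; in practice the vectors would live in Valkey's vector index.

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Hypothetical semantic cache: (embedding, cached LLM answer) pairs.
semantic_cache = [
    ([1.0, 0.0, 0.1], "Paris is the capital of France."),
]

def lookup(query_vec, threshold=0.9):
    """Return a cached answer if a stored embedding is similar enough."""
    best_score, best_answer = 0.0, None
    for vec, answer in semantic_cache:
        score = cosine(query_vec, vec)
        if score > best_score:
            best_score, best_answer = score, answer
    return best_answer if best_score >= threshold else None

hit = lookup([0.98, 0.05, 0.11])  # near-duplicate question: cache hit
miss = lookup([0.0, 1.0, 0.0])    # unrelated question: falls through to the LLM
```

Every hit is a foundational-model call avoided, which is where the cost savings Olson points to come from.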
The "Valkey 9" breakthroughs
The recent release of Valkey 9 (in October 2025) introduced three significant architectural shifts that solve long-standing Redis pain points, Olson explains.
Multiple databases in cluster mode: This allows for lightweight multi-tenancy within a single horizontally distributed cluster – a feature "hotly requested" for years but never realised in the original project’s distributed mode. (“People want to be able to scale applications easily,” says Olson. “Our recommendation had always been, ‘oh, have a bunch of different clusters and scale them independently.’ But many small, self-managed workloads don't want to have 50 clusters. They would like to have one cluster and just focus on scaling that.”)
Hash Field Expiration: Users can now set Time-to-Live (TTL) on individual elements within a hash (a data type). This is a game-changer for the likes of session storage and even online gaming lobbies, where individual players in a "bucket" may time-out at different intervals and engineers want more granular control over orchestrating that kind of function in the back-end.
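The per-field expiry semantics can be modelled in a few lines. This is a toy sketch of the behavior, not the server implementation: each field in a hash carries its own optional deadline, and expired fields are lazily evicted on read.

```python
import time

class HashWithFieldTTL:
    """Toy model of Valkey 9's per-field expiry: each field in a
    hash can carry its own TTL, independent of its siblings."""

    def __init__(self):
        self.fields = {}     # field -> value
        self.deadlines = {}  # field -> absolute expiry time, if any

    def hset(self, field, value, ttl=None):
        self.fields[field] = value
        if ttl is not None:
            self.deadlines[field] = time.monotonic() + ttl

    def hget(self, field):
        deadline = self.deadlines.get(field)
        if deadline is not None and time.monotonic() >= deadline:
            # Lazily evict the expired field, as a server might.
            del self.fields[field]
            del self.deadlines[field]
            return None
        return self.fields.get(field)

# A game lobby: two players in one hash, timing out on different schedules.
lobby = HashWithFieldTTL()
lobby.hset("player:1", "alive", ttl=0.05)  # short-lived session
lobby.hset("player:2", "alive")            # no expiry
time.sleep(0.1)
```

Previously, expiry applied only to whole keys, so engineers had to shard each session into its own key or sweep stale fields themselves; now one hash per lobby suffices.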
Atomic Slot Migration: Scaling a cluster horizontally requires repartitioning data (slots). Historically, if a node died during migration, the system was left in a half-completed state. Valkey now pre-stages data and transfers ownership atomically, significantly increasing cluster reliability, Olson explains.
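The pre-stage-then-flip idea can be shown with a toy model (node names and keys here are illustrative, not Valkey internals): keys are copied to the target while the source remains authoritative, and ownership changes in a single final step, so a crash mid-copy never leaves a slot half-owned.

```python
# Two toy nodes and a slot-ownership table.
nodes = {"node-a": {}, "node-b": {}}
slot_owner = {7: "node-a"}            # slot 7 currently lives on node-a
nodes["node-a"]["user:1"] = "alice"   # a key assumed to hash to slot 7

def migrate_slot(slot, target):
    source = slot_owner[slot]
    # Phase 1: pre-stage every key on the target. The source still
    # serves all traffic, so a crash here loses nothing.
    for key, value in nodes[source].items():
        nodes[target][key] = value
    # Phase 2: a single atomic ownership flip. Before this line the
    # source is authoritative; after it, the target is. There is no
    # window in which the slot is half-migrated.
    slot_owner[slot] = target

migrate_slot(7, "node-b")
```

The contrast with the historical scheme is that the old migration moved keys while ownership was already in flux, which is what left clusters stranded mid-repartition when a node failed.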
The Roadmap: Solving the durability dilemma
There’s more to do! Perhaps the most ambitious project on the horizon is a complete overhaul of Valkey's durability system. Currently, the "Append Only File" (AOF) system struggles with high availability in distributed environments.
Olson reveals that the community is currently "excitedly arguing" over two potential consensus mechanisms to ensure synchronous replication:
- A Raft-based implementation: Ensuring writes are acknowledged by a quorum before being committed.
- A Lease-based system: Where a primary node acquires a lease to apply writes.
"We think Raft is going to win," Olson admits.
"But Raft requires three nodes for a quorum. We’re looking at adding 'witnesses' (an extension to the original Raft paper) to allow two-node configurations to break ties during a network partition…"
Is Valkey still just "Free Redis"? Not anymore.
While the two projects still trade occasional API ideas and remain largely compatible, the trajectory has diverged, with Valkey focusing on the security, performance, and reliability that users care about: "Our goal is not to match every feature of Redis," Olson notes, adding that "we’re building for the hyperscalers and the community contributors like Ericsson and Verizon who use this in telco equipment."
For others wanting a straightforward OSS stack without any licence or lock-in risk, the migration path from Redis 7.2 to Valkey is essentially a seamless patch, she adds. With the backing of AWS, Alibaba, and Google, among others, and a roadmap focused on stability and modern AI workloads, the "fork in the road" has led to a proper highway.
Published in partnership with AWS.
