300TB SSDs are coming and NAND is evolving fast. Hard drives will have nowhere left to hide by 2026

"There’s a great deal of innovation taking place right now in NAND, with all major flash manufacturers demonstrating significant density increases this year"

Data storage has been on an incredible evolutionary journey since the 1980s – a period that marked the beginning of the modern era for this important but sometimes overlooked part of the technology landscape, writes Alex McMullan, CTO, International, Pure Storage. We’re now entering a new era of progress, made possible by great leaps forward in technological capability, that will see a shift to high-density flash drives with capacities of up to 300TB in the not-too-distant future – and the end of spinning-disk storage.

At this significant moment for the industry, it’s an opportune time to consider the evolution of data storage, why further change is needed and what this means for the technology sector.

A brief history of the data storage market

Broadly speaking, the modern era of data storage commenced in the 1980s, with the acceptance of a need to move away from Direct Attached Storage. This led to a shift in mindset that was less concerned about data residing on specific servers, which gave rise to the beginnings of virtualisation. Then, in the early 90s, we saw the advent of more advanced Network Attached Storage. Throughout this era, hard disk drives (HDDs) were the dominant storage media.

Incremental evolution in data storage continued in the early 2000s, but the next big shake-up to the market came in 2007, with the advent of the public cloud. This created downward pressure on the cost of data storage across the market, and although solid state storage (SSD), or flash storage, had been invented by that time, HDD remained popular, with public cloud providers being the biggest customers. This remains the case today and, arguably, public cloud providers have become the life support system for HDD manufacturers.

The data storage market has reached an inflection point

As a technology, HDD has peaked, and innovation and investment in the sector have tailed off to almost nothing. We have barely seen a single IPO in the HDD data storage sector since 2015. The key reason is that data storage is a very tough proving ground – arguably the most difficult technology sector in which to succeed, because customer expectations are so high. In contrast, investment and innovation in the SSD sector continue at record pace.

It's widely recognised that from a technology standpoint, SSD has numerous advantages over HDD, including durability, reliability, speed and performance. Significantly, in data centres, SSD arrays consume far less power than HDDs and occupy less space. This is a very important point to consider, in light of the fact that the IEA estimates data centres consume around 1.5% of the world’s power supplies.

Until now, the biggest barrier to more widespread adoption has been the price of the technology compared with HDD. That gap has narrowed dramatically, though, driven by continued innovation, superior utilisation (SSDs can be run at higher utilisation than HDDs) and lower power consumption. This matters in the context of current high energy costs and the pressure on organisations to reduce energy use and meet ESG goals. A shift to modern flash technology in data centres could reduce power consumption by as much as 80%.
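A back-of-the-envelope sketch shows where a saving of that order can come from: denser SSDs mean far fewer drives for the same raw capacity. All wattage and capacity figures below are illustrative assumptions, not vendor specifications.

```python
# Back-of-the-envelope comparison of array power draw.
# All figures are illustrative assumptions, not measured values.

HDD_WATTS_PER_DRIVE = 8.0   # assumed average draw of one HDD
SSD_WATTS_PER_DRIVE = 7.0   # assumed average draw of one high-density SSD
HDD_TB_PER_DRIVE = 20       # assumed HDD capacity
SSD_TB_PER_DRIVE = 75       # assumed high-density flash capacity

def drives_needed(total_tb, tb_per_drive):
    """Number of drives required to provide total_tb of raw capacity."""
    return -(-total_tb // tb_per_drive)  # ceiling division

TOTAL_TB = 1200  # hypothetical raw capacity requirement

hdd_power = drives_needed(TOTAL_TB, HDD_TB_PER_DRIVE) * HDD_WATTS_PER_DRIVE
ssd_power = drives_needed(TOTAL_TB, SSD_TB_PER_DRIVE) * SSD_WATTS_PER_DRIVE

saving = 1 - ssd_power / hdd_power
print(f"HDD array: {hdd_power:.0f} W, SSD array: {ssd_power:.0f} W")
print(f"Power saving: {saving:.0%}")
```

With these assumed figures the flash array needs 16 drives instead of 60, and the saving lands in the region the article describes; real deployments would also need to account for controllers, cooling and utilisation differences.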

Denser drives – the path to further innovation in SSD

With demand for data storage set to continue unabated, there’s widespread understanding across the sector of the need for denser drives. While it is possible to further increase the density of current HDDs, the physical limit on the amount of information that can be stored on a magnetic medium is much closer. Quite simply, there is far greater scope for innovation in SSD.

The process for achieving greater density in SSD is broadly similar to that of HDD development. Where it differs is in the capacity to deploy increasing numbers of chips on a drive without physically damaging the medium. Now, increasing density through the layering of chips is being amplified by a new development – the creation of larger ‘plates’ on which the chips reside, resulting in far greater storage capacity, further enhanced by layering. When these chips are stacked inside a physical drive, the path to SSD innovation and growth becomes clear.

To create even more powerful SSDs, direct flash modules (DFMs) can be utilised, which dramatically enhances their efficiency. In essence, this approach allows the chips to behave as a symphony orchestra, rather than a set of individual notes.

Key benefits of denser SSDs

In addition to driving down the cost per gigabyte of storage, their power efficiency and a clear path to further innovation, denser SSDs have other advantages – particularly higher performance, a recognised shortcoming of HDDs. SSDs deliver far greater IOPS than their HDD counterparts. Larger hard drives need to deliver increased IOPS, as storage capacity is meaningless if users cannot access their data quickly; however, they often struggle to do so, which can result in stranded capacity. SSDs aren’t affected by this issue because flash performance stays predictable even as utilisation increases.
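The stranded-capacity effect comes down to simple arithmetic: a hard drive’s random IOPS stay roughly flat as its capacity grows, so IOPS per terabyte shrinks with every generation. The sketch below illustrates this with assumed figures (the ~200 IOPS HDD figure and the workload requirement are illustrative, not measurements).

```python
# Why large HDDs strand capacity: IOPS per terabyte shrinks as drives grow,
# while flash IOPS scale with capacity. All figures are illustrative.

def iops_per_tb(iops, capacity_tb):
    """Performance density of a drive."""
    return iops / capacity_tb

# A single HDD delivers roughly the same ~200 random IOPS regardless of
# size (assumed figure), so doubling capacity halves IOPS/TB.
small_hdd = iops_per_tb(200, 10)   # 10TB drive
large_hdd = iops_per_tb(200, 20)   # 20TB drive

# If a workload needs 15 IOPS per TB, the 20TB drive can only usefully
# serve 200 / 15 TB of it; the rest of the capacity is "stranded".
required_iops_per_tb = 15
usable_tb = min(20, 200 / required_iops_per_tb)
stranded_tb = 20 - usable_tb
print(f"Usable: {usable_tb:.1f} TB, stranded: {stranded_tb:.1f} TB")
```

Under these assumptions roughly a third of the larger drive’s capacity cannot be used without missing the performance target, which is exactly the problem flash sidesteps.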

Superior resiliency is another advantage SSDs have over HDD storage. Flash technology facilitates faster rebuild times, meaning fewer bits need to be dedicated to resiliency structures, which results in better cost efficiency. This is important when considering redundancy strategy. Flash-based systems are also much faster at backup and recovery, putting them at a significant advantage over HDD when recovering from a ransomware attack. SSD systems are also far more reliable because they have no moving parts. They fail less often, leading to fewer replacements, less time spent replacing failed components, lower risk and lower maintenance costs. All of this translates to a lower TCO.

The NAND advantage

When SSDs are constructed with NAND flash memory, they don’t need to be over-provisioned to the same level as competing media. Over-provisioning is common practice in SSD manufacturing: it compensates for the cell failures that occur over time as a result of read and write cycles by adding extra chips that are activated as failures occur. However, by utilising more reliable NAND chips, over-provisioning can be greatly reduced, improving efficiency and cutting down on e-waste.
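The capacity impact of over-provisioning is straightforward to quantify: every percentage point of raw flash held back as spare is a point the customer cannot use. The ratios below are illustrative assumptions chosen to show the shape of the trade-off, not actual product figures.

```python
# Effect of over-provisioning (OP) on usable capacity.
# OP reserves spare NAND to absorb cell wear; the ratios are assumptions.

def usable_tb(raw_tb, op_fraction):
    """Usable capacity after reserving op_fraction of raw flash as spare."""
    return raw_tb * (1 - op_fraction)

raw = 100  # TB of raw NAND in a hypothetical drive population

legacy = usable_tb(raw, 0.28)    # assumed 28% OP with less reliable NAND
improved = usable_tb(raw, 0.07)  # assumed 7% OP with more reliable NAND

extra = improved - legacy
print(f"Legacy OP: {legacy:.0f} TB usable; reduced OP: {improved:.0f} TB usable")
print(f"Reducing OP frees {extra:.0f} TB from the same chips")
```

Under these assumptions the same silicon yields about a fifth more usable capacity, which is where the efficiency and e-waste arguments come from.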

It’s noteworthy that NAND is now becoming more cost competitive, with the price per bit falling faster than that of HDD. There’s a great deal of innovation taking place right now in NAND, with all major flash manufacturers demonstrating significant density increases this year. We’re seeing over 200 layers of stacked 3D NAND in some cases, which will drive down the cost of NAND. Analysts predict that NAND prices will decline through the rest of 2023.

Organisations of all kinds are increasingly looking at both new and historical data to inform their operations, placing ever greater demands on data storage. Denser SSDs are ideally suited to addressing this issue. For example, Formula One teams such as Mercedes-AMG Petronas F1 are retaining more historical data than ever before, increasing their data storage requirements by a factor of ten. They’ve benefited greatly from SSD-based storage technology.

The same can be observed in a wide variety of other sectors, including finance and insurance, healthcare (particularly where large volumes of genomic data are involved), and the mining industry. In addition, adoption of denser SSDs is gathering pace across sectors deploying AI, ML and voice recognition applications – all of which require and generate very large data sets. This trend is accelerating with the recent popularity of large language models and generative AI, which in turn drive much higher requirements for training, inference and checkpointing across the various stages of the model pipeline.

The road ahead for denser SSDs

Rapid evolution in SSDs, driven by the NAND industry, is set to reshape the data storage market – and this is already happening. We’ll see the commercial availability of a 300-terabyte drive in around three years. When released, it will be the largest SSD available, but in the interim we will see significant milestones, with a 75TB SSD available this year and a 150TB SSD next year. We are rapidly approaching the point where the cost-per-gigabyte debate is settled.

As a result, the focus is moving even more quickly towards energy density and the downstream impacts on CO2 generation and water consumption in data centres. The hyperscalers are also increasingly focused on Watts/TB and IOPS/TB as HDDs and SSDs increase in size, due to widening constraints on electricity availability and government scrutiny of the local impact of data centres. The as-yet unanswered question is whether the NAND fabs can make enough chips to replace HDD capacity if disk drive sales continue on the same trajectory and energy supply becomes nation-critical. Given the world’s glacial progress towards the goals laid out in the 2015 Paris Climate Accords and our unstinting appetite for data in all its forms, that answer is getting closer.
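The Watts/TB and IOPS/TB metrics mentioned above are simple ratios that hyperscalers can compute per drive model when planning fleets. A minimal sketch, using the same kind of illustrative drive specifications assumed earlier (not vendor data):

```python
# Hyperscaler-style efficiency metrics: Watts/TB and IOPS/TB.
# Drive specifications below are illustrative assumptions, not vendor data.
from dataclasses import dataclass

@dataclass
class Drive:
    name: str
    capacity_tb: float
    watts: float
    iops: float

    @property
    def watts_per_tb(self) -> float:
        # Power efficiency: lower is better as fleets scale.
        return self.watts / self.capacity_tb

    @property
    def iops_per_tb(self) -> float:
        # Performance density: how much work each terabyte can sustain.
        return self.iops / self.capacity_tb

drives = [
    Drive("assumed 20TB HDD", 20, 8.0, 200),
    Drive("assumed 75TB SSD", 75, 7.0, 500_000),
]

for d in drives:
    print(f"{d.name}: {d.watts_per_tb:.2f} W/TB, {d.iops_per_tb:,.0f} IOPS/TB")
```

Both ratios improve for flash as drive capacities grow, whereas for HDDs the IOPS/TB figure degrades with every capacity increase – which is the dynamic driving the hyperscalers’ scrutiny.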
