JEDEC finalizes HBM4 memory standard with major bandwidth and efficiency upgrades


JEDEC has published the official HBM4 (High Bandwidth Memory 4) specification as JESD270-4, a new memory standard aimed at keeping pace with the rapidly growing requirements of AI workloads, high-performance computing, and advanced data center environments. The standard introduces architectural changes and interface upgrades intended to improve memory bandwidth, capacity, and efficiency as data-intensive applications continue to evolve.

HBM4 continues the use of vertically stacked DRAM dies, a hallmark of the HBM family, and brings significant advances over its predecessor, HBM3, in bandwidth, efficiency, and design flexibility. It supports per-pin transfer speeds of up to 8 Gb/s across a 2048-bit interface, delivering a total bandwidth of up to 2 TB/s per stack. One of the key upgrades is the doubling of independent channels per stack, from 16 in HBM3 to 32 in HBM4, with each channel now split into two pseudo-channels. This expansion allows for greater access flexibility and parallelism in memory operations.
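As a quick sanity check (this arithmetic is ours, not part of the JEDEC text), the 2 TB/s headline figure follows directly from the per-pin rate and the interface width:

```python
# Back-of-the-envelope check of HBM4's headline bandwidth:
# 8 Gb/s per pin across a 2048-bit interface.
GBITS_PER_PIN = 8        # per-pin data rate, Gb/s
INTERFACE_WIDTH = 2048   # interface width, bits

total_gbps = GBITS_PER_PIN * INTERFACE_WIDTH  # 16384 Gb/s per stack
total_tbps = total_gbps / 8 / 1000            # bits -> bytes, G -> T

print(f"{total_tbps} TB/s")  # -> 2.048 TB/s, the ~2 TB/s quoted above
```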

In terms of power efficiency, the JESD270-4 specification introduces support for a range of vendor-specific voltage levels, including VDDQ options of 0.7V, 0.75V, 0.8V, or 0.9V, and VDDC options of 1.0V or 1.05V. These adjustments are said to contribute to lower power consumption and improved energy efficiency across different system requirements. HBM4 also maintains compatibility with existing HBM3 controllers, enabling a single controller to operate with either memory standard. This backwards compatibility eases adoption and allows for more flexible system designs.

Additionally, HBM4 incorporates Directed Refresh Management (DRFM), which enhances row-hammer mitigation and supports stronger Reliability, Availability, and Serviceability (RAS) features. On the capacity front, HBM4 supports stack configurations ranging from 4-high to 16-high, with DRAM die densities of 24Gb or 32Gb. This allows for cube capacities as high as 64GB using 32Gb 16-high stacks, enabling higher memory density for demanding workloads.
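The maximum cube capacity quoted above can likewise be verified with simple arithmetic (our illustration, not JEDEC's): a 16-high stack of 32Gb dies yields 512Gb, or 64GB, per cube.

```python
# Maximum HBM4 cube capacity: 16-high stack of 32 Gb DRAM dies.
DIE_DENSITY_GBIT = 32   # densest supported die, gigabits
STACK_HEIGHT = 16       # tallest supported stack, dies

capacity_gbit = DIE_DENSITY_GBIT * STACK_HEIGHT  # 512 Gb per cube
capacity_gbyte = capacity_gbit // 8              # 64 GB per cube

print(f"{capacity_gbyte} GB")  # -> 64 GB
```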

A notable architectural change in HBM4 is the separation of command and data buses, designed to enhance concurrency and reduce latency. This modification aims to improve performance in multi-channel operations, which are prevalent in AI and HPC workloads. Furthermore, HBM4 incorporates a new physical interface and signal integrity improvements to support faster data rates and greater channel efficiency.

The development of HBM4 involved collaboration among major industry players, including Samsung, Micron, and SK hynix, who contributed to the standard's formulation. These companies are expected to showcase HBM4-compatible products in the near future, with Samsung indicating plans to begin production by 2025 to meet growing demand from AI chipmakers and hyperscalers.

As AI models and HPC applications demand greater computational resources, memory with higher bandwidth and larger capacity becomes a pressing technical need. The HBM4 standard addresses these requirements by defining specifications for next-generation memory designed to handle the data throughput and processing challenges of these workloads.
