JEDEC recently updated its JESD235 High Bandwidth Memory (HBM) DRAM standard. As we’ve previously discussed on Rambus Press, HBM DRAM supports a wide range of use cases and verticals, including graphics (GPUs), high-performance computing (HPC), servers, networking, and client applications. Indeed, HBM is particularly well suited to hardware that demands the highest bandwidth, bandwidth per watt, and capacity per area.
First, read our primer:
HBM2E Implementation & Selection – The Ultimate Guide »
Densities up to 24 GB, speeds hit 307 GB/s
According to JEDEC, the updated HBM standard leverages wide-I/O and TSV technologies to support densities up to 24 GB per device, at speeds up to 307 GB/s. This bandwidth is delivered across a 1024-bit-wide device interface that is divided into 8 independent channels on each DRAM stack. In practical terms, this means the standard supports 2-high, 4-high, 8-high, and 12-high TSV stacks of DRAM at full bandwidth, enabling system flexibility for capacity requirements from 1 GB to 24 GB per stack.
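To make the capacity math concrete, here’s a quick back-of-the-envelope sketch in Python. The per-layer densities are our own illustrative assumptions, chosen to span the stated 1 GB to 24 GB range; the standard itself defines the exact supported configurations.

```python
# Back-of-the-envelope HBM stack capacities: height x per-layer density.
# Layer densities (in Gb) are illustrative assumptions chosen to span the
# 1 GB - 24 GB range described above; consult JESD235 for the exact
# supported configurations.
STACK_HEIGHTS = [2, 4, 8, 12]       # DRAM dies per TSV stack
LAYER_DENSITIES_GBIT = [4, 8, 16]   # gigabits per DRAM layer (assumed)

for layers in STACK_HEIGHTS:
    for density_gbit in LAYER_DENSITIES_GBIT:
        capacity_gbyte = layers * density_gbit / 8  # 8 bits per byte
        print(f"{layers:>2}-high x {density_gbit} Gb/layer = "
              f"{capacity_gbyte:g} GB per stack")
```

The extremes of the table line up with the standard’s stated range: a 2-high stack of 4 Gb layers yields 1 GB, and a 12-high stack of 16 Gb layers yields 24 GB.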
Per-pin bandwidth hits 2.4 Gbps
The updated HBM standard also extends per-pin bandwidth to 2.4 Gbps, adds a new footprint option to accommodate 16 Gb-per-layer and 12-high configurations for higher-density components, and refreshes the MISR polynomial options to support the new configurations. As JEDEC notes, the HBM standard was developed and updated with support from leading GPU and CPU developers to extend the system bandwidth growth curve beyond the levels supported by traditional discrete packaged memory.
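The headline 307 GB/s figure falls straight out of the interface width and the per-pin rate. A minimal sketch of the arithmetic:

```python
# HBM2 per-stack bandwidth from the figures quoted above.
interface_width_bits = 1024   # data pins per stack (8 channels x 128 bits)
per_pin_rate_gbps = 2.4       # gigabits per second per pin

bandwidth_gbit_s = interface_width_bits * per_pin_rate_gbps  # 2457.6 Gb/s
bandwidth_gbyte_s = bandwidth_gbit_s / 8                     # 307.2 GB/s
print(f"{bandwidth_gbyte_s} GB/s per stack")
```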
From 8 Gb-per-layer to 16 Gb-per-layer
“This update was really focused on extending the support and the design from an 8 Gb-per-layer definition to a 16 Gb-per-layer,” Barry Wagner, HBM task group chair for JEDEC, told EE Times. “A lot of the demand for the high capacity is driven by very large data-set–type applications for high-performance computing.”
Wagner also told the publication that the recent update would add density to HBM2 before HBM3 rolled out to power a new generation of applications and devices. To be sure, work is already underway to develop the new HBM3 standard, with an emphasis on increasing bandwidth and density, as well as improving performance per watt.
Let’s talk about HBM
High Bandwidth Memory (HBM) enables system designs with lower power consumption per I/O and higher-bandwidth memory access in a condensed form factor. This is achieved by stacking memory dies on top of each other and placing the stack in the same package as an SoC or GPU, connected through a silicon interposer. Essentially, HBM takes existing DRAM and, using 2.5D packaging technology, moves it closer to the processor over a much wider data path. This accelerates data throughput while reducing the power required to drive a signal and cutting RC delay.
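To put the “wide and slow” trade-off in rough numbers, the sketch below compares an HBM2 stack against a single discrete GDDR5 device. The GDDR5 figures (a 32-bit interface at roughly 8 Gbps per pin) are approximate, publicly quoted values used purely for illustration, not spec-exact numbers.

```python
# Wide-and-slow (HBM) vs. narrow-and-fast (discrete DRAM).
# GDDR5 numbers are rough, publicly quoted values used purely for
# illustration of the trade-off, not exact specifications.
def bandwidth_gbyte_s(pins: int, gbps_per_pin: float) -> float:
    """Aggregate bandwidth in GB/s for a given pin count and per-pin rate."""
    return pins * gbps_per_pin / 8

hbm2_stack = bandwidth_gbyte_s(pins=1024, gbps_per_pin=2.4)  # ~307 GB/s
gddr5_chip = bandwidth_gbyte_s(pins=32, gbps_per_pin=8.0)    # ~32 GB/s

# Matching one HBM2 stack takes roughly ten GDDR5 devices, each driving
# its pins at more than 3x the per-pin rate -- the slower, shorter HBM
# signal path is where the power-per-bit savings come from.
print(f"HBM2 stack: {hbm2_stack:.1f} GB/s, GDDR5 chip: {gddr5_chip:.1f} GB/s")
print(f"GDDR5 devices to match one stack: {hbm2_stack / gddr5_chip:.1f}")
```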
Perhaps not surprisingly, HBM was initially perceived by GPU companies as a clear step in the evolutionary direction of graphics-specific memory. However, the networking and data center industries soon realized HBM could add a new tier of memory to the hierarchy – while delivering more bandwidth for applications that demanded faster access, as well as lower latency and power consumption. From our perspective, HBM enables data center solution developers to bring high performance memory closer to the CPU, thereby reducing latency and improving system throughput.
Interested in learning more about HBM? You can check out our HBM PHY product page here.