This episode of “Ask the Experts” features a discussion on High Bandwidth Memory (HBM) with memory experts Frank Ferro and Nidish Kamath. The conversation focuses on the role of HBM in today’s computing landscape, particularly for data center, AI, and High-Performance Computing (HPC) applications.
The experts highlighted the advantages of HBM3E, including higher memory bandwidth, higher capacity in a compact form factor, and improved power efficiency. They also discussed some of the challenges of implementing HBM, such as managing the complexity of data parallelism at higher speeds, its unique 2.5D architecture, and thermal management.
The interview concluded with a discussion on how Cadence and Rambus work together to deliver complete HBM3E memory subsystem solutions for customers.
Watch the full video interview to hear the details, or skip below to read the key takeaways.
Experts
- Frank Ferro, Group Director of Memory and Storage IP, Cadence
- Nidish Kamath, Director of Product Management, Rambus
Key Takeaways
- HBM’s Crucial Role: The growth of AI is placing new demands on computing infrastructures, particularly in terms of performance and efficiency. HBM has quickly become a crucial element in meeting these requirements, especially for GPUs and AI accelerators.
- HBM Specification Evolution: The rapid evolution in the HBM specification in recent years has been driven by the phenomenal growth in data, particularly in AI training models. HBM3E offers high memory bandwidth performance, which is needed to train today’s large language models.
- HBM vs. DDR Memory: HBM has three advantages over traditional DDR memory: higher memory bandwidth, higher capacity in a compact form factor, and improved power efficiency. HBM3E provides a maximum bandwidth of up to 1.2 Terabytes per second per memory device.
- HBM3E Implementation Challenges: Managing the complexity of data parallelism at higher speeds is a key challenge for the memory controller. Implementing HBM3E also presents challenges at the physical layer: HBM requires a silicon interposer, and many designers are unfamiliar with this 2.5D architecture.
- Cadence-Rambus Collaboration: Cadence and Rambus have experience working together to leverage their respective areas of expertise to deliver HBM3E memory subsystems for customers. Cadence focuses on the physical layer, while Rambus designs memory controllers that work seamlessly with Cadence PHYs.
Key Quote
HBM has three key advantages: higher memory bandwidth, higher capacity in a compact form factor, and improved power efficiency. HBM3E has a per pin maximum of 9.6 Gigabits per second. Up to 1024 of these high-speed IOs provide data connectivity to the 12 or more stacked DRAM chips in the 3D package and provide a maximum bandwidth of up to 1.2 Terabytes per second.
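The 1.2 TB/s figure in the quote follows directly from the per-pin rate and the IO count. A minimal sketch of that arithmetic, using only the numbers stated above (9.6 Gb/s per pin, 1024 data IOs):

```python
# HBM3E peak-bandwidth arithmetic, using the figures from the quote:
# 9.6 Gb/s per pin across up to 1024 data IOs per device.

PIN_RATE_GBPS = 9.6  # per-pin data rate, gigabits per second
NUM_IOS = 1024       # data IOs per HBM3E device

def peak_bandwidth_tbps(pin_rate_gbps: float, num_ios: int) -> float:
    """Peak per-device bandwidth in terabytes per second."""
    total_gigabits = pin_rate_gbps * num_ios  # 9830.4 Gb/s aggregate
    return total_gigabits / 8 / 1000          # bits -> bytes, giga -> tera

print(f"{peak_bandwidth_tbps(PIN_RATE_GBPS, NUM_IOS):.4f} TB/s")
# prints "1.2288 TB/s", i.e. the quoted "up to 1.2 Terabytes per second"
```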