Frank Ferro, senior director of product marketing at Rambus, recently told Semiconductor Engineering’s Ed Sperling that he was looking forward to seeing what the company could do for next-gen DDR5, as well as for evolving high-bandwidth memory (HBM) interfaces.
“The goal is to start to bring the power down through things like better signaling technology for DDR5,” he explained.
According to Ferro, adopting low-swing signaling will save power on the interface itself. Additional power can be saved by moving some control back onto the PHY, letting the memory assume more of a supporting role; the PHY can then bring the memory in and out of low-power states more rapidly.
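To see why a lower swing matters, note that dynamic switching power on an I/O line scales roughly with capacitance times the square of the voltage swing times frequency. The sketch below is a first-order illustration only; the capacitance, swing, and frequency values are assumptions for the example, not figures from the article or the DDR5 specification.

```python
# First-order illustration (not from the article): dynamic switching power
# on an I/O line scales roughly with C * V^2 * f, so cutting the signal
# swing cuts per-bit energy. All values below are illustrative assumptions,
# not DDR5 specification numbers.

def io_power_watts(cap_farads, swing_volts, freq_hz, activity=0.5):
    """Approximate dynamic switching power of a single I/O line."""
    return activity * cap_farads * swing_volts**2 * freq_hz

full_swing = io_power_watts(cap_farads=2e-12, swing_volts=1.2, freq_hz=1.6e9)
low_swing = io_power_watts(cap_farads=2e-12, swing_volts=0.4, freq_hz=1.6e9)

print(f"Full-swing: {full_swing * 1e3:.2f} mW per line")
print(f"Low-swing:  {low_swing * 1e3:.2f} mW per line")
print(f"Savings:    {(1 - low_swing / full_swing) * 100:.0f}%")
```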
“Looking at HBM, one of the things about that standard is that you’re now going through a silicon interposer,” he said.
In the HBM configuration, says Ferro, the memory channel is much shorter, eliminating the need to drive signals across long PCB traces or into a DIMM socket.
“You’re driving a very short channel through a silicon interposer. That gives the opportunity to make that interface more power efficient on a per bit basis,” he continued. “While the overall power might be higher because of the number of bits, the power per bit should be a bit more efficient.”
More specifically, says Ferro, HBM DRAM increases memory bandwidth by providing a very wide, 1,024-bit interface to the SoC.
“The maximum speed for HBM2 is 2Gbits/s for a total bandwidth of 256Gbytes/s,” he confirmed in a recent SemiconductorEngineering article. “Although the bit rate is similar to DDR3 at 2.1Gbps, which accounts for the bulk of memory used in DIMMs today, the eight 128-bit channels give HBM about 15 times more bandwidth.”
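The arithmetic behind those figures is straightforward, and the short sketch below simply reproduces it. The DDR3 DIMM comparison assumes a conventional 64-bit data bus, which is my assumption rather than a figure stated in the quote.

```python
# Reproducing the bandwidth arithmetic in the quote above.
interface_width_bits = 1024      # eight 128-bit HBM channels
hbm2_pin_rate_gbps = 2.0         # maximum per-pin data rate for HBM2

hbm2_bandwidth_gbytes_s = interface_width_bits * hbm2_pin_rate_gbps / 8
print(f"HBM2 per-stack bandwidth: {hbm2_bandwidth_gbytes_s:.0f} GB/s")  # 256 GB/s

# Comparison with a DDR3 DIMM at 2.1 Gb/s per pin, assuming a conventional
# 64-bit data bus (the bus width is an assumption, not stated in the quote).
ddr3_bandwidth_gbytes_s = 64 * 2.1 / 8
print(f"DDR3 DIMM bandwidth: {ddr3_bandwidth_gbytes_s:.1f} GB/s")       # 16.8 GB/s
print(f"HBM2 advantage: ~{hbm2_bandwidth_gbytes_s / ddr3_bandwidth_gbytes_s:.0f}x")
```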
As expected, notes Ferro, mass-market deployment of HBM will undoubtedly present the industry with a number of challenges.
“2.5D technology adds manufacturing complexities, [while] the silicon interposer adds cost,” he continued. “There are [also] a lot of expensive components being mounted to the interposer, including the SOC and multiple HBM die stacks, so good yield is very critical to make the system cost effective. [In addition], there is the challenge of routing thousands of signals (data + control + power/ground) via the interposer to the SOC for each HBM memory used.”
Nevertheless, Ferro concludes, even with the above-mentioned challenges, having, for example, four HBM memory stacks, each delivering 256Gbytes/s in close proximity to the CPU, provides a significant increase in both memory density (up to 8GB per HBM stack) and bandwidth compared with existing architectures.
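For a rough sense of scale, the totals for that four-stack example work out as follows, using the per-stack figures already quoted above.

```python
# Illustrative totals for the four-stack example above, using the
# per-stack figures already quoted (256 GB/s and up to 8 GB per stack).
stacks = 4
per_stack_bandwidth_gbytes_s = 256
per_stack_capacity_gbytes = 8

print(f"Aggregate bandwidth: {stacks * per_stack_bandwidth_gbytes_s} GB/s")  # 1024 GB/s (~1 TB/s)
print(f"Aggregate capacity:  {stacks * per_stack_capacity_gbytes} GB")       # 32 GB
```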
Interested in learning more about HBM? You can check out the full text of “How is your HBM memory?” on Semiconductor Engineering here.