Earlier this month, Semiconductor Engineering’s Ann Steffora Mutschler penned an article that takes a closer look at how buffering is gaining ground as a way to speed up the processing of increasingly large quantities of data. In simple terms, says Mutschler, a data buffer is an area of physical memory storage that temporarily stores data while it is being moved from one place to another.
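The concept is easy to sketch in software. Below is a minimal FIFO buffer in Python, purely illustrative (the class name and capacity check are our own, not from the article): data written by a producer is held temporarily until a consumer reads it out in order.

```python
from collections import deque


class DataBuffer:
    """Toy FIFO buffer: temporarily holds data while it moves
    from a producer (writer) to a consumer (reader)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._queue = deque()

    def write(self, item):
        # A real buffer would apply backpressure; here we just refuse.
        if len(self._queue) >= self.capacity:
            raise BufferError("buffer full")
        self._queue.append(item)

    def read(self):
        if not self._queue:
            raise BufferError("buffer empty")
        # First-in, first-out: data leaves in the order it arrived.
        return self._queue.popleft()
```

A hardware data buffer does the same job electrically, decoupling the rate at which data arrives from the rate at which it can be consumed.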
“This becomes increasingly necessary in data centers, autonomous vehicles and for machine learning applications,” she explains. “The challenges with these applications are advanced signal equalization and increased capacity and bandwidth. Data buffering techniques — either as a discrete chip within the memory module or integrated into an SoC — help make this all possible.”
As Mutschler notes, buffer chips manage read and write commands from the CPU to the DRAM. A memory module that incorporates these buffer chips is known as a load-reduced dual inline memory module (LRDIMM). The buffers boost total memory capacity while maintaining performance and speed.
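The "load reduction" idea can be shown with a toy model (our own simplification, not an electrical analysis): without buffering, the memory controller drives every DRAM rank on the channel directly; with an LRDIMM, it sees only one buffer load per module, and the buffer re-drives signals to the ranks behind it.

```python
def bus_loads(dimms, ranks_per_dimm, buffered):
    """Toy count of electrical loads the memory controller drives.

    Unbuffered: the controller drives every DRAM rank directly.
    Buffered (LRDIMM): the controller sees one buffer per DIMM,
    which re-drives the signals to the ranks behind it.
    """
    if buffered:
        return dimms
    return dimms * ranks_per_dimm


# Three quad-rank DIMMs per channel:
unbuffered = bus_loads(dimms=3, ranks_per_dimm=4, buffered=False)  # 12 loads
lrdimm = bus_loads(dimms=3, ranks_per_dimm=4, buffered=True)       # 3 loads
```

Fewer loads on the bus is what lets more (and higher-capacity) DIMMs run at full speed on one channel.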
“As data is growing at exponential rates, reducing CPU load through data buffer chips becomes necessary to increase DRAM capacity per CPU,” Victor Cai, director of product marketing for Rambus’ Memory and Interfaces Division, told the publication. “LRDIMMs are common in data centers that are performing data-intensive applications, such as big data analytics, artificial intelligence and machine learning.”
As Cai noted in a previous Semiconductor Engineering article, DDR3 server DIMM chipsets (800 Mbps) first hit the market in 2006 and began to ramp the following year. By the time DDR4 server DIMM chipsets (2133 Mbps) began shipping in 2014, DDR3 server DIMM chipsets spanned five speed grades: 800, 1066, 1333, 1600 and 1866 Mbps. In recent years, DDR4 buffer chipset shipments have crossed over DDR3 in terms of volume, with DDR4 chipset speeds expected to reach 3200 Mbps by 2018 or 2019.
In addition to increased bandwidth and density, DDR4 buffer chips offer data centers the critical benefit of advanced signal equalization. For example, Rambus registered clock driver (RCD) and data buffer (DB) chips are equipped with sophisticated decision feedback equalizers (DFEs) of the kind typically used in the communications industry for 10G and faster SerDes. Essentially, DFE circuits allow RCDs and DBs to operate with greater margin on typical systems, as well as in challenging environments where older chipsets are prone to malfunction or fail to achieve higher speeds.
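At its core, a DFE cancels inter-symbol interference by feeding back past bit decisions. The sketch below is a minimal textbook-style DFE in Python, not Rambus' implementation (the tap values and symbol alphabet are illustrative assumptions): each received sample has the estimated interference from previously decided symbols subtracted before it is sliced to a bit.

```python
def dfe(samples, taps):
    """Minimal decision feedback equalizer for binary (+1/-1) symbols.

    taps[k] is the estimated channel contribution of the symbol
    decided k+1 samples ago (the post-cursor ISI).
    """
    decisions = []
    for x in samples:
        # Estimate ISI from the most recent decisions (newest first).
        recent = reversed(decisions[-len(taps):])
        isi = sum(t * d for t, d in zip(taps, recent))
        equalized = x - isi      # cancel the interference
        bit = 1.0 if equalized >= 0 else -1.0  # slicer decision
        decisions.append(bit)
    return decisions


# Channel with one post-cursor tap: each sample carries 0.4x
# of the previous symbol. Sent: +1 -1 +1 +1 -1.
received = [1.0, -0.6, 0.6, 1.4, -0.6]
recovered = dfe(received, taps=[0.4])  # recovers +1 -1 +1 +1 -1
```

Because the feedback uses hard decisions rather than the noisy signal itself, a DFE cancels ISI without amplifying noise, which is why it buys margin on marginal channels.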
Equipping both chips (RCD and DB) with DFE capabilities allows the DIMM chipset to push the limits of both the command/address bus and the DQ (data) bus. Moreover, DFE circuits enable DIMMs to run at 3200 Mbps, exceeding original industry expectations for this generation of buffer chips. It is also important to emphasize that signal integrity challenges are becoming more prevalent as bandwidth, density and speeds increase. Fortunately, lessons learned from other markets (such as communications) are merging with DDR technology to help push the single-ended bus to its limit.
From our perspective, memory buffer chips will continue to steadily evolve, as illustrated by the technical advances (such as DFE) seen from one generation of silicon to the next. The importance of server DIMM chipsets will only continue to grow as the industry attempts to sustain Moore's Law and work around the limitations of the von Neumann architecture in a changing data center.
Interested in learning more about data buffering? The full text of “Data Buffering’s Role Grows” by Ann Steffora Mutschler can be read on Semiconductor Engineering here.