Semiconductor Engineering editor in chief Ed Sperling recently noted that data center architecture has experienced very few radical changes since the commercial introduction of the IBM System/360 mainframe way back in 1964.
“There have been incremental improvements in speed and throughput over the years, with a move to a client/server model in the 1990s, but from a high level this is still an environment where data is processed and stored centrally and accessed globally,” Sperling wrote.
“[However], as massive amounts of data from smart devices and the Internet of Things (IoT) begin to flood into data centers, it’s becoming apparent that even more fundamental changes will be required.”
As Sperling points out, the IoT has grabbed headlines primarily in the consumer world, with products such as smart watches, Nest thermostats and smart TVs.
“[Nevertheless], the biggest changes will happen on the big data side—how data is collected, used, processed and stored,” he opined. “Computing architectures have always been about solving bottlenecks, and nowhere is this more pressing than in the data center.”
According to Loren Shalinsky, a strategic development director at Rambus, data centers have always been hungry for more bandwidth, and the industry is now seeing a range of newer memory technologies designed specifically to address that demand.
“We’ve got HBM (high-bandwidth memory) and HMC (Hybrid Memory Cube) and it’s not clear how they will fit into the general processor server. We’re looking at this kind of memory like L4 cache,” he explained.
“If you go back to pre-Pentium days, L3 cache was off chip. Then, over time it was integrated into the processor. The next step was DRAM.”
The current lineup of new memory, says Shalinsky, can best be described as an in-between step.
“There are aspects that are much lower in power,” he continued. “The power it takes to drive a signal through a PCB and through a connector to memory is a lot more than with HBM, which is 1,024 bits wide, or HMC, which is in the same range.”
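To put that 1,024-bit width in perspective, the short Python sketch below compares peak bandwidth for a conventional DDR4 channel and a single HBM stack. The per-pin data rates used here (roughly 3.2 GT/s for DDR4-3200 and roughly 1 Gbps for first-generation HBM) are illustrative assumptions, not figures from the interview.

```python
# Back-of-the-envelope peak-bandwidth comparison (illustrative figures,
# not from the article): bandwidth = interface width (bytes) x per-pin data rate.

def peak_bandwidth_gbps(width_bits: int, pin_rate_gbps: float) -> float:
    """Peak bandwidth in GB/s for a memory interface."""
    return (width_bits / 8) * pin_rate_gbps

# A 64-bit DDR4-3200 channel: assumed ~3.2 GT/s per pin
ddr4 = peak_bandwidth_gbps(width_bits=64, pin_rate_gbps=3.2)    # ~25.6 GB/s

# A single first-generation HBM stack: 1,024-bit interface, assumed ~1 Gbps per pin
hbm = peak_bandwidth_gbps(width_bits=1024, pin_rate_gbps=1.0)   # ~128 GB/s

print(f"DDR4 channel: {ddr4:.1f} GB/s, HBM stack: {hbm:.1f} GB/s")
```

The wide, short in-package interface is what lets HBM deliver that bandwidth at much lower signaling power than driving a narrow, fast bus across a PCB and connector.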
There are also a number of other memory categories hitting the market, including magnetoresistive RAM (MRAM) and resistive RAM (RRAM or ReRAM), which fit somewhere between flash and DRAM.
“This is a continued evolution of an architecture that has not been fundamentally different for the past 30 or 40 years, but every time they come up with a new tier, there is an order of magnitude difference,” he added.
“Now they’re squeezing in more middle opportunities. We’re also dealing with larger files. Big database companies are pushing more memory because the database can now sit in memory itself to speed it up.”
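As a rough illustration of those order-of-magnitude jumps between tiers, the sketch below lists commonly cited access-latency ballparks for each level of the hierarchy, with the new MRAM/ReRAM-class memories slotted between DRAM and flash. The specific numbers are illustrative assumptions, not figures from Shalinsky or Sperling.

```python
# Rough, commonly cited access-latency orders of magnitude per memory tier
# (illustrative values only, not figures from the interview).
memory_tiers_ns = {
    "on-chip SRAM cache":   1,           # ~1 ns
    "DRAM (main memory)":   100,         # ~100 ns
    "MRAM / ReRAM class":   1_000,       # the "middle" tiers between DRAM and flash
    "NAND flash (SSD)":     100_000,     # ~100 us
    "hard disk":            10_000_000,  # ~10 ms
}

for tier, latency_ns in memory_tiers_ns.items():
    print(f"{tier:>20}: ~{latency_ns:,} ns")
```

Each step down the hierarchy trades latency for capacity and cost, which is why filling in the gaps between tiers, and keeping whole databases resident in memory, has become so attractive.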
Interested in learning more? The full text of Ed Sperling’s “Rethinking The Cloud” is available on SemiEngineering.