Cadence to Acquire Rambus PHY IP Assets

https://www.rambus.com/cadence/

Rambus Completes Sale of PHY IP Assets to Cadence On September 7, 2023, Rambus Inc. (Nasdaq: RMBS) announced the completion of the previously announced sale of its PHY IP business to Cadence Design Systems, Inc. This site provides additional information on the transaction and related documents. Both companies are committed to the ongoing success of our […]

Memory Systems for AI in the Data Center

https://go.rambus.com/memory-systems-for-ai-in-the-data-center#new_tab

Developments in generative AI and large language models are moving at a lightning pace. Incredible amounts of data must be processed and moved to train models, which at their largest now exceed one trillion parameters. Join Rambus Fellow and Distinguished Inventor, Dr. Steven Woo, as he discusses the memory and interface technologies critical to powering next-generation computing […]

CXL Technology: Revolutionizing the Data Center

https://go.rambus.com/cxl-technology-revolutionizing-the-data-center#new_tab

The need for more memory bandwidth and capacity continues to rise, with applications like generative AI pushing current data center infrastructure to the limit. The leading companies at every level of the data center value chain are coalescing around CXL technology as a path to revolutionize the data center. Join Mark Orthodoxou to hear how […]

Innovations in CXL 3.0: Novel Device Types, Capabilities, and Interconnects

https://go.rambus.com/innovations-in-cxl-3-novel-device-types-capabilities-and-interconnects#new_tab

CXL 3.0 introduces several compelling new features to address the rapidly evolving demands of future data centers. A new device type, CXL Multi-Headed Devices, has been introduced to support simultaneous connection to multiple hosts. CXL Dynamic Capacity Device (DCD) capability simplifies migration of memory resources between hosts. New CXL Fabrics offer substantial scale and flexibility […]

LPDDR5X: Delivering High Bandwidth and Power Efficiency

https://go.rambus.com/lpddr5x-delivering-high-bandwidth-and-power-efficiency#new_tab

The bandwidth and low power characteristics of LPDDR make it an increasingly attractive choice of memory for applications in IoT, automotive, and edge computing. LPDDR5X takes performance to the next level with a data rate of up to 8.5 Gbps. Join Vinitha Seevaratnam to learn which applications can benefit from using LPDDR memory.

System Level Design Considerations for PCIe 6.0

https://go.rambus.com/system-level-design-considerations-for-pcie6#new_tab

PCIe 6.0 offers many new and exciting features including a 64 GT/s data rate, PAM4 signaling, forward error correction, and a low power L0p mode. In this presentation, Lou Ternullo will walk you through all the system design considerations you will need to know before getting started on your PCIe 6.0 design, including how to […]

Leveraging VESA Video Compression & MIPI DSI-2 for High-Performance Displays

https://go.rambus.com/leveraging-vesa-video-compression-and-mipi-dsi2-for-high-performance-displays#new_tab

Visually lossless video compression is essential for handling the growing bandwidth requirements of advanced displays with higher resolutions, faster refresh rates, and greater pixel depths. This presentation will show designers how they can develop cutting-edge display products without compromising on display quality, battery life, or cost using a combination of VESA video compression and MIPI […]

Meeting the Needs of Generative AI Training with HBM3

https://go.rambus.com/meeting-the-needs-of-generative-ai-training-with-hbm3#new_tab

Generative AI training models are growing in both size and sophistication at a lightning pace, requiring more and more bandwidth. With its unique 2.5D/3D architecture, HBM3 can deliver terabytes per second of bandwidth at a system level. Join Frank Ferro to hear how HBM helps designers address the needs of state-of-the-art AI training models.

Powering AI/ML Inference with GDDR6 Memory

https://go.rambus.com/powering-ai-ml-inference-with-gddr6-memory#new_tab

GDDR6 memory offers an impressive combination of bandwidth, capacity, latency, and power. Frank Ferro will discuss how these features make it the ideal memory choice for AI/ML inference at the edge and highlight some of the key design considerations you need to keep in mind when implementing GDDR6 memory at ultra-high data rates.

What’s Next for DDR5 Memory?

https://go.rambus.com/whats-next-for-ddr5-memory#new_tab

With the industry now firmly on the path to enabling the next generation of servers with DDR5 memory, this presentation will look at what’s next in the DDR5 journey. Hear from John Eble on how DDR5 will scale to advanced performance levels, be deployed in new applications beyond RDIMMs, and how it is tailored for […]