The PCIe 3.1 Controller (formerly XpressRICH) is designed to achieve maximum PCI Express (PCIe) 3.1 performance with great design flexibility and ease of integration. It is fully compatible with the PCIe 3.1/3.0 specifications. A PCIe 3.1 Controller with AXI (formerly XpressRICH-AXI) is also available. The controller delivers high-bandwidth, low-latency connectivity for demanding data center, edge, and graphics applications.
The PCIe 3.1 Controller is a configurable and scalable IP designed for ASIC and FPGA implementation. It supports the PCIe 3.1/3.0 specifications, as well as the PHY Interface for PCI Express (PIPE) specification. The IP can be configured to support endpoint, root port, switch port, and dual-mode topologies, allowing for a variety of use models.
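These port types differ, among other things, in the standard configuration-space header they expose: endpoints use a Type 0 header, while root ports and switch ports use a Type 1 (bridge) header. The sketch below is illustrative only and not part of the IP deliverables; it decodes the standard Header Type register at configuration-space offset 0x0E, and the sample configuration-space bytes are hypothetical.

    # Illustrative only: classify a PCIe function by its standard Header Type
    # register (configuration space offset 0x0E). Endpoints expose a Type 0
    # header; root ports and switch ports expose a Type 1 (bridge) header.
    HEADER_TYPE_OFFSET = 0x0E

    def classify_port(config_space: bytes) -> str:
        header_type = config_space[HEADER_TYPE_OFFSET] & 0x7F  # bit 7 = multi-function flag
        if header_type == 0x00:
            return "endpoint (Type 0 header)"
        if header_type == 0x01:
            return "root port or switch port (Type 1 bridge header)"
        return f"other header type 0x{header_type:02x}"

    # Hypothetical 256-byte configuration space with Header Type = 0x01.
    cfg = bytearray(256)
    cfg[HEADER_TYPE_OFFSET] = 0x01
    print(classify_port(bytes(cfg)))  # -> root port or switch port (Type 1 bridge header)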
The provided Graphical User Interface (GUI) Wizard allows designers to tailor the IP to their exact requirements by enabling, disabling, and adjusting a wide array of parameters, including data path size, PIPE interface width, low-power support, SR-IOV, ECC, and AER, to optimize throughput, latency, size, and power.
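The wizard's actual option names and output format are product-specific, but a hypothetical configuration record such as the one below illustrates the kinds of knobs involved; every parameter name and value here is illustrative only, not an actual XpressRICH option.

    # Hypothetical configuration record for a controller build. All parameter
    # names and values are illustrative; they are not the actual wizard options.
    pcie31_controller_cfg = {
        "port_type":       "root_port",  # endpoint | root_port | switch_port | dual_mode
        "max_link_width":  8,            # lanes: x1 .. x16
        "pipe_width_bits": 32,           # PIPE interface width per lane
        "datapath_bits":   128,          # internal data path size
        "sr_iov":          True,         # Single Root I/O Virtualization
        "ecc":             True,         # ECC protection on internal memories
        "aer":             True,         # Advanced Error Reporting capability
        "low_power":       ["L0s", "L1", "L1.1", "L1.2"],  # supported low-power states
    }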
The PCIe 3.1 Controller is verified using multiple PCIe VIPs and test suites, and is silicon-proven in hundreds of production designs. Rambus integrates and validates the PCIe 3.1 Controller with the customer’s choice of third-party PCIe 3.1 PHY.
The PCI Express® (PCIe®) interface is the critical backbone that moves data at high bandwidth and low latency between compute nodes such as CPUs, GPUs, FPGAs, and workload-specific accelerators. With the rapid rise in bandwidth demands of advanced workloads such as AI/ML training, PCIe 6.0 jumps signaling to 64 GT/s with some of the biggest changes yet in the standard.
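For context on those link rates, a back-of-the-envelope comparison: PCIe 3.x signals at 8 GT/s per lane with 128b/130b encoding (roughly 0.985 GB/s per lane per direction), while PCIe 6.x signals at 64 GT/s using PAM4 and FLIT-based framing (roughly 8 GB/s per lane before FLIT/FEC and protocol overhead). The sketch below is a rough, illustrative calculation of raw per-direction throughput for an x16 link, not a statement of delivered application bandwidth.

    # Approximate raw per-direction link bandwidth, ignoring protocol overhead
    # (TLP/DLLP framing, and FLIT/FEC overhead in the 64 GT/s generation).
    def lane_gbit_per_s(transfer_rate_gt: float, encoding_ratio: float) -> float:
        """Payload bits per second per lane, in Gb/s."""
        return transfer_rate_gt * encoding_ratio

    generations = {
        "PCIe 3.1 (8 GT/s, 128b/130b)":  lane_gbit_per_s(8.0, 128 / 130),
        "PCIe 6.0 (64 GT/s, PAM4/FLIT)": lane_gbit_per_s(64.0, 1.0),
    }

    for name, gbps in generations.items():
        gbyte_per_lane = gbps / 8  # bits -> bytes
        print(f"{name}: ~{gbyte_per_lane:.2f} GB/s per lane, "
              f"~{gbyte_per_lane * 16:.1f} GB/s per direction for x16")
    # PCIe 3.1 (8 GT/s, 128b/130b):  ~0.98 GB/s per lane, ~15.8 GB/s per direction for x16
    # PCIe 6.0 (64 GT/s, PAM4/FLIT): ~8.00 GB/s per lane, ~128.0 GB/s per direction for x16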
Unique Features & Capabilities:
- PCI Express layer
- User Interface layer
- AMBA AXI layer
- Data engines

Deliverables:
- IP files
- Documentation
- PCI Express Bus Functional Model
- Software
- Reference Designs
- Advanced Design Integration Services