Pkg. Type | FCCSP |
Pkg. Code | AVG240 |
Lead Count (#) | 240 |
Pkg. Dimensions (mm) | 13.5 x 8.7 x 0.9 |
Pitch (mm) | 0.65 |
Moisture Sensitivity Level (MSL) | 3 |
Pb (Lead) Free | Yes |
ECCN (US) | |
HTS (US) | |
Pkg. Type | FCCSP |
Lead Count (#) | 240 |
Carrier Type | Reel |
Moisture Sensitivity Level (MSL) | 3 |
Qty. per Reel (#) | 3000 |
Qty. per Carrier (#) | 0 |
Pb (Lead) Free | Yes |
Pb Free Category | e1 SnAgCu |
Temp. Range (°C) | 0 to 70 |
Function | DDR5 Gen 1 MRDIMM MRCD |
Length (mm) | 13.5 |
MOQ | 3000 |
Pitch (mm) | 0.65 |
Pkg. Dimensions (mm) | 13.5 x 8.7 x 0.9 |
Published | No |
Reel Size (in) | 13 |
Tape & Reel | Yes |
Thickness (mm) | 0.9 |
Width (mm) | 8.7 |
The RG5R188 is a Gen 1 Registering Clock Driver (MRCD) for DDR5 MRDIMMs. Its primary function is to buffer the command/address (CA) bus, chip selects, and clock between the host controller and the DRAMs. It also generates a Buffer Command bus (BCOM) that controls the data buffers. Each DDR5 MRDIMM (8800 MT/s) requires one MRCD chip.
The Buffer Command bus enhances the performance of DDR5 MRDIMMs by managing data flow between the host and the memory modules, helping to optimize signal integrity and achieve higher bandwidth per watt. This is particularly beneficial for applications such as AI training and inference workloads, high-performance computing (HPC), in-memory databases, and large language models (LLMs).
The Multiplexing Registering Clock Driver (MRCD) enhances the standard registering clock driver by processing an interleaved stream of DRAM commands at twice the typical RDIMM rate. It de-interleaves this command stream and directs each command to the appropriate rank-specific outputs.
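The de-interleaving described above can be pictured with a minimal sketch. This is an illustrative toy model only, not a cycle- or JEDEC-accurate description of the RG5R188: it simply assumes commands arrive back-to-back at twice the normal rate and alternate between two rank-specific output buses. The function name `deinterleave` and the command strings are invented for illustration.

```python
def deinterleave(host_stream):
    """Toy model: split an interleaved host command stream
    (arriving at 2x the RDIMM rate) into two rank-specific
    output streams, each running at the normal rate."""
    out_a, out_b = [], []
    for i, cmd in enumerate(host_stream):
        # Alternate commands between the two rank-specific outputs
        (out_a if i % 2 == 0 else out_b).append(cmd)
    return out_a, out_b

# Four commands arriving back-to-back on the host CA bus
cmds = ["ACT r0", "ACT r1", "RD r0", "RD r1"]
rank_a, rank_b = deinterleave(cmds)
# rank_a → ["ACT r0", "RD r0"]; rank_b → ["ACT r1", "RD r1"]
```

Each output bus then sees only half the host command rate, which is what lets the DRAMs on each rank operate at standard timing while the host side runs at the doubled MRDIMM rate.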
The Renesas MRCD (RG5R188) is compatible with DDR5 MRDIMMs at 8800 MT/s, supports Intel Xeon 6 CPUs (available since 2024), and is qualified with multiple memory vendors. DDR5 MRDIMMs at 8800 MT/s with the Renesas MRCD (RG5R188) provide 39% more bandwidth, and the MDB plays a key role in increasing performance per watt for AI training and inference workloads.
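For a sense of scale, the peak theoretical bandwidth of one module at 8800 MT/s can be computed directly. This sketch assumes a standard 64-bit (8-byte) DDR5 data path, with ECC bits excluded from the count; it is an arithmetic illustration, not a figure taken from the product specification.

```python
# Peak theoretical bandwidth of one DDR5 MRDIMM at 8800 MT/s,
# assuming a 64-bit (8-byte) data path and no ECC overhead.
transfers_per_sec = 8800e6   # 8800 MT/s
bytes_per_transfer = 8       # 64 data bits per transfer
peak_gb_s = transfers_per_sec * bytes_per_transfer / 1e9
print(f"{peak_gb_s:.1f} GB/s")  # 70.4 GB/s
```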