
Package Information

Pkg. Type: FCCSP
Pkg. Code: AVG78
Lead Count (#): 78
Pkg. Dimensions (mm): 7.8 x 3.23 x 0.9
Pitch (mm): 0.5

Environmental & Export Classifications

Moisture Sensitivity Level (MSL): 3
Pb (Lead) Free: Yes
ECCN (US):
HTS (US):

Product Attributes

Pkg. Type: FCCSP
Lead Count (#): 78
Carrier Type: Reel
Moisture Sensitivity Level (MSL): 3
Qty. per Reel (#): 4000
Qty. per Carrier (#): 0
Pb (Lead) Free: Yes
Pb Free Category: e3 Sn
Temp. Range (°C): 0 to 70
Function: DDR5 Gen 1 MRDIMM MDB
Length (mm): 7.8
MOQ: 4000
Pitch (mm): 0.5
Pkg. Dimensions (mm): 7.8 x 3.23 x 0.9
Published: No
Reel Size (in): 13
Tape & Reel: Yes
Thickness (mm): 0.9
Width (mm): 3.23

Description

The RG5D188 (MDB) is a Gen 1 MRDIMM Data Buffer. Its primary function is to demultiplex and buffer data from the host CPU to the DRAMs. Each DDR5 MRDIMM running at 8800 MT/s requires 10 MDB chips to multiplex its memory channels. The device has two 4-bit data interfaces to the host, running at twice the speed of the DRAM interfaces. Each host interface multiplexes two pseudo-channels, each of which has its own 4-bit DRAM interface. The RG5D188 supports x4 or x8 DRAMs. It also provides an input-only control bus interface connected to an MRCD, a dedicated pin for ZQ calibration, and loopback outputs for test purposes.
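The interface widths described above can be tallied with a short, minimal Python sketch. The interface counts, the 2:1 host-to-DRAM rate ratio, and the 10-MDB-per-DIMM figure come from the description; the 8800 MT/s host rate is the MRDIMM speed named on this page, and the resulting 80-bit full-DIMM host width is simply implied by those numbers rather than stated here.

```python
# Per-MDB and per-DIMM data-width arithmetic for the topology described above.
HOST_INTERFACES_PER_MDB = 2         # two 4-bit data interfaces to the host
PSEUDO_CHANNELS_PER_HOST_IF = 2     # each host interface multiplexes two pseudo-channels
BITS_PER_INTERFACE = 4              # 4-bit host and DRAM interfaces
MDBS_PER_MRDIMM = 10                # 10 MDB chips per DDR5 MRDIMM (per the description)

HOST_RATE_MTS = 8800                # host-side rate named on this page (MT/s)
DRAM_RATE_MTS = HOST_RATE_MTS // 2  # DRAM interfaces run at half the host speed

host_width_per_mdb = HOST_INTERFACES_PER_MDB * BITS_PER_INTERFACE              # 8 bits
dram_width_per_mdb = (HOST_INTERFACES_PER_MDB * PSEUDO_CHANNELS_PER_HOST_IF
                      * BITS_PER_INTERFACE)                                     # 16 bits

print(f"Per MDB: {dram_width_per_mdb}-bit DRAM side at {DRAM_RATE_MTS} MT/s "
      f"-> {host_width_per_mdb}-bit host side at {HOST_RATE_MTS} MT/s")
print(f"Per MRDIMM: {MDBS_PER_MRDIMM * host_width_per_mdb}-bit host data bus")
```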

The main benefit of the Multiplexing Data Buffer (MDB) in an MRDIMM (Multiplexed Rank DIMM) is its ability to enhance memory bandwidth and reduce power consumption. The MDB achieves this by converting a 16-bit DRAM interface running at native DRAM speed into an 8-bit host interface operating at twice the speed. This multiplexing and demultiplexing allows the host interface to run at higher data transfer rates than the DRAMs themselves.
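The width/rate trade described above can be checked with a back-of-the-envelope calculation. In this sketch the 8800 MT/s host rate is the speed named on this page and the DRAM-side rate is assumed to be half of it, per the 2:1 ratio in the description.

```python
# Throughput equivalence of the 16-bit DRAM side and the 8-bit host side.
def throughput_gbps(width_bits: int, rate_mts: int) -> float:
    """Raw data throughput in GB/s for a bus of width_bits at rate_mts MT/s."""
    return width_bits * rate_mts * 1e6 / 8 / 1e9

dram_side = throughput_gbps(width_bits=16, rate_mts=4400)  # 16-bit DRAM interface
host_side = throughput_gbps(width_bits=8,  rate_mts=8800)  # 8-bit host interface

assert dram_side == host_side  # multiplexing preserves aggregate bandwidth
print(f"DRAM side: {dram_side:.1f} GB/s, host side: {host_side:.1f} GB/s")
```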

The Renesas MDB (RG5D188) is compatible with DDR5 MRDIMMs at 8800 MT/s, supports Intel Xeon 6 CPUs available since 2024, and is qualified with multiple memory vendors. DDR5 MRDIMMs at 8800 MT/s built with the Renesas MDB (RG5D188) provide 39% more bandwidth, and the MDB plays a key role in increasing performance per watt for AI training and inference workloads.