
Mellanox SoCs it to NVMe over Fabrics with BlueField platform

The JBOF made easier

Mellanox has integrated the hardware needed to frontend an NVMe-over-Fabrics flash array into a single System-on-Chip (SoC) device, making it easier for shared flash storage system builders to put together NVMe JBOFs.

A JBOF is, like a JBOD, just a bunch of flash drives – NVMe SSDs in this case.

For apps in accessing servers to use those drives across an NVMe fabric (NVMeF), the JBOF needs an NVMe-over-Fabrics frontend system.

Broadly speaking that means a CPU plus code, rNICs (RDMA Network Interface Cards) to interface to the InfiniBand, RoCE (RDMA over Converged Ethernet) or other RDMA links to the servers, some DRAM and cache, and a PCIe switch to link to the NVMe drives in the JBOF.
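For a sense of what the "CPU plus code" part does, here is a minimal sketch of the target-side plumbing using the stock Linux kernel NVMe-over-Fabrics target (nvmet), driven through configfs. This is an illustration only, not Mellanox's BlueField software; the device path, rNIC address and subsystem NQN are assumed values.

```python
# Minimal sketch: export one NVMe SSD over an RDMA fabric with the Linux
# kernel NVMe target (nvmet) via configfs. Assumes the nvmet and nvmet-rdma
# modules are loaded, configfs is mounted at /sys/kernel/config, the SSD is
# /dev/nvme0n1, and the rNIC port has the address 10.0.0.1 (all assumptions).
import os
from pathlib import Path

NVMET = Path("/sys/kernel/config/nvmet")
NQN = "nqn.2017-06.example:jbof-ns1"   # hypothetical subsystem name

def write(path: Path, value: str) -> None:
    path.write_text(value + "\n")

# 1. Create the subsystem; allow any host to connect (lab setting only).
subsys = NVMET / "subsystems" / NQN
subsys.mkdir(parents=True)
write(subsys / "attr_allow_any_host", "1")

# 2. Attach the backing NVMe drive as namespace 1.
ns = subsys / "namespaces" / "1"
ns.mkdir(parents=True)
write(ns / "device_path", "/dev/nvme0n1")
write(ns / "enable", "1")

# 3. Create an RDMA port on the rNIC and expose the subsystem through it.
port = NVMET / "ports" / "1"
port.mkdir(parents=True)
write(port / "addr_trtype", "rdma")
write(port / "addr_adrfam", "ipv4")
write(port / "addr_traddr", "10.0.0.1")   # rNIC address (assumed)
write(port / "addr_trsvcid", "4420")      # standard NVMe-oF service port
os.symlink(subsys, port / "subsystems" / NQN)
```

An accessing server could then attach the drive with the standard nvme-cli tool (roughly: nvme connect -t rdma -a 10.0.0.1 -s 4420 -n nqn.2017-06.example:jbof-ns1) and see it as a local block device.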

BlueField schematic

Mellanox has integrated the components for InfiniBand and RoCE server links onto its BlueField SoC. It says the chip has a fast internal mesh fabric, and NVMeF data traffic can go directly from SSD to NIC (or NIC to SSD) without involving the CPU cores.

BlueField components are:

  • ConnectX-5 high-speed NIC (up to 2x100Gb/s ports, Ethernet or InfiniBand)
  • Up to 16 ARM A72 (64-bit) CPU cores
  • PCIe switch (32 lanes at Gen3/Gen4)
  • DDR4 DRAM controller and coherent cache

Mellanox BlueField SoC layout

Mellanox has built a BlueField Storage Reference Platform, a development and reference system that uses the SoC to house NVMe SSDs and serve them up across NVMe fabrics. It's working with OEM and ODM partners to get BlueField into use.

Attala is also working in the JBOF NVMeF frontend area, as are Kazan Networks and CNEX Labs. ®
