ConnectX®-4 Single/Dual-Port Adapter supporting 100Gb/s with VPI

ConnectX®-4 adapters with Virtual Protocol Interconnect (VPI), supporting EDR 100Gb/s InfiniBand and 100Gb/s Ethernet connectivity, provide the highest-performance and most flexible solution for high-performance computing, Web 2.0, cloud, data analytics, database, and storage platforms.

With the exponential growth of data being shared and stored by applications and social networks, the need for high-speed, high-performance compute and storage data centers is skyrocketing.

ConnectX®-4 provides an unmatched combination of 100Gb/s bandwidth, sub-microsecond latency, and 150 million messages per second. It includes native hardware support for RDMA over InfiniBand and Ethernet, Ethernet stateless offload engines, GPUDirect®, and Mellanox’s new Multi-Host technology.

Multi-Host technology enables connecting multiple hosts to a single interconnect adapter by separating the ConnectX®-4 PCIe interface into multiple independent interfaces. Each interface can be connected to a separate host with no performance degradation. ConnectX®-4 offers four fully independent PCIe buses, lowering total cost of ownership in the data center: CAPEX shrinks because four cables, NICs, and switch ports are reduced to only one of each, and OPEX shrinks through fewer switch ports to manage and lower overall power usage.
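The CAPEX arithmetic above can be sketched as a quick back-of-the-envelope calculation. The sketch below is illustrative only: the rack size (32 hosts) is an assumption, and the four-hosts-per-adapter figure comes from the Multi-Host description above; component prices are deliberately omitted.

```python
# Hypothetical sketch: per-rack component counts for a conventional
# one-NIC-per-host design versus Multi-Host, where up to four hosts
# share a single ConnectX-4 adapter. The host count (32) is an
# illustrative assumption, not a Mellanox specification.

def interconnect_component_count(hosts: int, hosts_per_adapter: int = 1) -> dict:
    """Return cable/NIC/switch-port counts needed to attach `hosts` servers."""
    adapters = -(-hosts // hosts_per_adapter)  # ceiling division
    # Each adapter needs exactly one cable and one switch port.
    return {"nics": adapters, "cables": adapters, "switch_ports": adapters}

conventional = interconnect_component_count(32)                       # one NIC per host
multi_host = interconnect_component_count(32, hosts_per_adapter=4)    # shared adapter

print(conventional)  # {'nics': 32, 'cables': 32, 'switch_ports': 32}
print(multi_host)    # {'nics': 8, 'cables': 8, 'switch_ports': 8}
```

For a fully populated adapter this reproduces the 4-to-1 reduction stated above: every group of four hosts consumes one NIC, one cable, and one switch port instead of four of each.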

  • High performing silicon for applications requiring high bandwidth, low latency and high message rate
  • World-class cluster, network, and storage performance
  • Smart interconnect for x86, Power, Arm, and GPU-based compute and storage platforms
  • Cutting-edge performance in virtualized overlay networks (VXLAN and NVGRE)
  • Efficient I/O consolidation, lowering data center costs and complexity
  • Virtualization acceleration
  • Power efficiency
  • Scalability to tens-of-thousands of nodes
  • EDR 100Gb/s InfiniBand or 100Gb/s Ethernet per port
  • 10/25/40/50/56/100Gb/s speeds
  • 150M messages/second
  • Multi-Host technology — connectivity to up to four independent hosts
  • Single and dual-port options available
  • Erasure Coding offload
  • T10-DIF Signature Handover
  • Virtual Protocol Interconnect (VPI)
  • Power8 CAPI support
  • CPU offloading of transport operations
  • Application offloading
  • Mellanox PeerDirect™ communication acceleration
  • Hardware offloads for NVGRE and VXLAN encapsulated traffic
  • End-to-end QoS and congestion control
  • Hardware-based I/O virtualization
  • Ethernet encapsulation (EoIB)
  • RoHS-R6
