ISC’12, Hamburg, Germany – June 18, 2012 – Mellanox® Technologies, Ltd. (NASDAQ: MLNX; TASE: MLNX), a leading supplier of high-performance, end-to-end interconnect solutions for data center servers and storage systems, today announced support for the new generation of NVIDIA GPUDirect™ technology. NVIDIA GPUDirect technology dramatically accelerates communication between GPUs by providing a direct peer-to-peer data path between Mellanox’s scalable HPC adapters and NVIDIA GPUs, without transferring data through the CPU or the server memory subsystem.
Without GPUDirect, GPU data must first be copied into system memory before it can be sent over the network. With GPUDirect, the interconnect and the GPU are tightly linked, exchanging data directly over the PCI Express 3.0 bus and completely bypassing the CPU and system memory. This direct-connected approach accelerates GPU-to-GPU communication across systems by as much as 80 percent while reducing end-to-end latency.
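From an application's point of view, this direct path shows up as the ability to hand GPU device pointers straight to the communication library instead of staging data in host memory. The following is a minimal sketch, assuming an MPI library built with CUDA support (for example MVAPICH2 or Open MPI with CUDA enabled); the buffer size, ranks, and library choice are illustrative assumptions, not part of this announcement:

```c
/* Minimal sketch: sending GPU memory directly with a CUDA-aware MPI library.
 * Assumes an MPI build with CUDA support and a GPUDirect-capable interconnect. */
#include <mpi.h>
#include <cuda_runtime.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (size < 2) {
        if (rank == 0) fprintf(stderr, "run with at least 2 ranks\n");
        MPI_Finalize();
        return 1;
    }

    /* Allocate the message buffer directly in GPU memory (illustrative size). */
    int count = 1 << 20;                      /* 1M doubles, ~8 MB */
    double *d_buf;
    cudaMalloc((void **)&d_buf, (size_t)count * sizeof(double));

    if (rank == 0) {
        /* The device pointer is passed straight to MPI; with GPUDirect the
         * adapter reads GPU memory without an intermediate host copy. */
        MPI_Send(d_buf, count, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(d_buf, count, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }

    cudaFree(d_buf);
    MPI_Finalize();
    return 0;
}
```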
“The high performance and compute density of GPUs have made them a compelling solution for computationally intensive HPC applications,” said Gilad Shainer, vice president of market development at Mellanox Technologies. “To ensure the highest level of application performance, scalability and efficiency, the communication between GPUs within a cluster must be performed as quickly as possible. GPUDirect enables NVIDIA GPUs and Mellanox ConnectX®-3 adapters to provide an optimum GPU clustering technology.”
“Mellanox’s support for GPUDirect helps users maximize their cluster performance,” said Sumit Gupta, senior director of the Tesla business at NVIDIA. “The ability to transfer data directly to and from GPU memory dramatically speeds up system and application performance, enabling users to run computationally intensive code and get answers faster than ever before.”
GPU-based clusters are widely used for computationally intensive tasks such as seismic processing, computational fluid dynamics and molecular dynamics. Because GPUs perform high-performance floating-point operations across a very large number of cores, a high-speed interconnect is required to connect the platforms, delivering the bandwidth and latency the clustered GPUs need to operate efficiently and removing bottlenecks in the GPU-to-GPU communication path.
Mellanox ConnectX-based adapters are the world’s only InfiniBand solutions that provide the full offloading capabilities critical to avoiding CPU interrupts, data copies and system noise, while maintaining high efficiency for GPU-based clusters. Combined with the availability of NVIDIA GPUDirect, Mellanox InfiniBand solutions are driving HPC environments to new levels of performance and scalability.
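At the InfiniBand verbs level, this offloading can be pictured as registering a GPU buffer as an RDMA memory region so the adapter can read and write it directly. The sketch below is an assumption-laden illustration, not the announced product interface: it presumes libibverbs, a ConnectX adapter, and the NVIDIA peer-memory kernel module, with device selection and error handling reduced to a minimum.

```c
/* Illustrative sketch: registering GPU memory for RDMA with libibverbs.
 * Assumes a GPUDirect RDMA-capable stack (ConnectX HCA + NVIDIA peer-memory module). */
#include <infiniband/verbs.h>
#include <cuda_runtime.h>
#include <stdio.h>

int main(void)
{
    int num;
    struct ibv_device **devs = ibv_get_device_list(&num);
    if (!devs || num == 0) { fprintf(stderr, "no InfiniBand devices found\n"); return 1; }

    struct ibv_context *ctx = ibv_open_device(devs[0]);
    struct ibv_pd *pd = ibv_alloc_pd(ctx);

    /* Allocate the buffer directly in GPU memory. */
    void *gpu_buf;
    size_t len = 1 << 20;
    cudaMalloc(&gpu_buf, len);

    /* With GPUDirect RDMA, the device pointer can be registered like host memory;
     * the adapter then accesses GPU memory over PCIe without a host staging copy. */
    struct ibv_mr *mr = ibv_reg_mr(pd, gpu_buf, len,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_READ |
                                   IBV_ACCESS_REMOTE_WRITE);
    if (!mr) { perror("ibv_reg_mr"); return 1; }

    printf("registered GPU buffer: lkey=0x%x rkey=0x%x\n", mr->lkey, mr->rkey);

    ibv_dereg_mr(mr);
    cudaFree(gpu_buf);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    return 0;
}
```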
- Learn more about Mellanox FDR 56Gb/s InfiniBand adapters
- Learn more about GPUDirect with Mellanox InfiniBand: Mellanox GPUDirect
- Follow Mellanox on Twitter and Facebook
Mellanox Technologies is a leading supplier of end-to-end InfiniBand and Ethernet interconnect solutions and services for servers and storage. Mellanox interconnect solutions increase data center efficiency by providing the highest throughput and lowest latency, delivering data faster to applications and unlocking system performance capability. Mellanox offers a choice of fast interconnect products: adapters, switches, software and silicon that accelerate application runtime and maximize business results for a wide range of markets including high performance computing, enterprise data centers, Web 2.0, cloud, storage and financial services. More information is available at www.mellanox.com.
Mellanox, BridgeX, ConnectX, CORE-Direct, InfiniBridge, InfiniHost, InfiniScale, PhyX, SwitchX, Virtual Protocol Interconnect and Voltaire are registered trademarks of Mellanox Technologies, Ltd. FabricIT, MLNX-OS, Unbreakable-Link, UFM and Unified Fabric Manager are trademarks of Mellanox Technologies, Ltd. All other trademarks are property of their respective owners.