InfiniBand Leads TOP500 as Most Used Interconnect

Number of FDR InfiniBand-based systems increases 10X versus previous Nov. 2011 TOP500 list

ISC’12, Hamburg, Germany – June 18, 2012 – Mellanox® Technologies, Ltd. (NASDAQ: MLNX; TASE: MLNX), a leading supplier of end-to-end interconnect solutions for data center servers and storage systems, today announced that the company has extended its commanding lead as the global interconnect provider for the TOP500 list of supercomputers, with InfiniBand now the most used interconnect on the list, connecting 210 systems. From June 2011 to June 2012, the total number of InfiniBand-connected CPU cores on the TOP500 list grew 44 percent, InfiniBand-based system performance grew 70 percent, and 86 percent of accelerator-based systems are now connected with InfiniBand. This growth highlights the increasing demand for InfiniBand as a way to maximize computing and accelerator resources, productivity and scalable performance in the world’s fastest computer systems.

Mellanox FDR 56Gb/s InfiniBand connects the highest-ranking InfiniBand-based cluster on the list, demonstrating extremely high compute and storage clustering efficiency of more than 93 percent and the highest performance per node. The number of Mellanox FDR InfiniBand-connected systems increased 10X versus the November 2011 list, to 21 systems, representing 10 percent of the InfiniBand systems on the TOP500.

Mellanox’s scalable InfiniBand interconnect solutions are also the most used interconnect for Petascale systems, delivering the most power-efficient Petascale systems and achieving Petascale performance in the most cost-effective way.

Mellanox ConnectX® InfiniBand adapters and switch systems optimize server and storage performance and provide the scalable, low-latency, and power-efficient interconnect for the world’s fastest supercomputers, representing 40 percent of the Petascale systems (eight systems). InfiniBand is the most used interconnect versus Ethernet or any other technology in the TOP100 with 48 percent (48 systems), the TOP200 with 55.5 percent (111 systems), the TOP300 with 52.7 percent (158 systems), and the TOP400 with 46.5 percent (188 systems).

The advanced offloads and accelerations within Mellanox InfiniBand solutions enable the most performance efficient system on the TOP500 list, at nearly 96 percent system and CPU efficiency. At 80 percent system and CPU efficiency, Mellanox’s 10GbE NICs deliver the most efficient Ethernet-based clusters on the TOP500.

“InfiniBand becoming the most used interconnect on the TOP500 is a significant milestone and achievement for Mellanox. We believe InfiniBand surpassing Ethernet in high-performance computing is a forward-looking sign that it will also become the interconnect of choice for cloud and Web 2.0 data centers, as they are all based on similar architecture concepts,” said Eyal Waldman, president, chairman and CEO of Mellanox Technologies. “With the majority of the world’s Petaflop systems, as well as the top two most efficient systems on the list, Mellanox FDR 56Gb/s InfiniBand and 10/40GbE interconnect solutions with PCI Express 3.0 provide the best return-on-investment with leading system efficiency without sacrificing performance.”

Published twice a year and publicly available at www.top500.org, the TOP500 list ranks the world’s most powerful computer systems according to the Linpack benchmark rating system.
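The efficiency figures quoted throughout this release are the ratio of a system's sustained Linpack performance (Rmax) to its theoretical peak performance (Rpeak). As a minimal illustration of that arithmetic (the figures below are hypothetical, not taken from the TOP500 list):

```python
def linpack_efficiency(rmax_tflops: float, rpeak_tflops: float) -> float:
    """Return Linpack efficiency as a percentage: sustained (Rmax)
    divided by theoretical peak (Rpeak)."""
    return rmax_tflops / rpeak_tflops * 100.0

# Hypothetical cluster: 1000 TFlop/s theoretical peak, 930 TFlop/s
# sustained on Linpack -> 93 percent efficiency, comparable to the
# ">93 percent" clustering efficiency cited above.
print(f"{linpack_efficiency(930.0, 1000.0):.1f}%")  # → 93.0%
```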

Highlights of InfiniBand usage on the June 2012 TOP500 list include:

  • Mellanox InfiniBand provides the highest system utilization on the TOP500, up to 96 percent, and connects 8 of the TOP10 and 25 of the TOP30 systems, including the top two most efficient systems on the list.
  • InfiniBand is the most used interconnect in the TOP500: 48 percent of the TOP100, 55.5 percent of the TOP200, 52.7 percent of the TOP300, 46.5 percent of the TOP400 and 42 percent of the TOP500.
  • InfiniBand FDR is the fastest growing interconnect technology on the TOP500 with a 10X increase in number of systems versus six months ago.
  • InfiniBand connects 40 percent of the world’s most powerful Petaflop systems on the list.
  • InfiniBand connects 7X the number of Cray-based systems in the TOP500 and 3X the number of Cray-based systems in the TOP100.
  • Clusters continue to be the dominant system architecture with 81 percent of the TOP500 list.
  • Mellanox end-to-end InfiniBand scalable HPC solutions accelerate 86 percent of the accelerator-based systems.
  • Mellanox InfiniBand interconnect solutions present in the TOP500 are used by a diverse list of applications, from large-scale, high-performance computing to commercial technical computing and enterprise data centers.

About Mellanox
Mellanox Technologies is a leading supplier of end-to-end InfiniBand and Ethernet interconnect solutions and services for servers and storage. Mellanox interconnect solutions increase data center efficiency by providing the highest throughput and lowest latency, delivering data faster to applications and unlocking system performance capability. Mellanox offers a choice of fast interconnect products: adapters, switches, software and silicon that accelerate application runtime and maximize business results for a wide range of markets including high-performance computing, enterprise data centers, Web 2.0, cloud, storage and financial services. More information is available at www.mellanox.com.

Mellanox, BridgeX, ConnectX, CORE-Direct, InfiniBridge, InfiniHost, InfiniScale, PhyX, SwitchX, Virtual Protocol Interconnect and Voltaire are registered trademarks of Mellanox Technologies, Ltd. Connect-IB, FabricIT, MLNX-OS, Unbreakable-Link, UFM and Unified Fabric Manager are trademarks of Mellanox Technologies, Ltd. All other trademarks are property of their respective owners.
