Mellanox ConnectX-2 Virtual Protocol Interconnect Adapter Card Delivers Unmatched Flexibility to Next-Generation Virtualized and Cloud Data Centers

Multi-Protocol Interconnect Adapter Supports Both 40Gb/s InfiniBand and 10 Gigabit Ethernet, Enabling I/O Infrastructure Agility

VMWORLD 2009, SAN FRANCISCO, CA – September 1, 2009 – Mellanox® Technologies, Ltd. (NASDAQ: MLNX; TASE: MLNX), a leading supplier of end-to-end connectivity solutions for data center servers and storage, today announced availability of its ConnectX-2 Virtual Protocol Interconnect™ (VPI) adapter card, providing fabric flexibility with InfiniBand and Ethernet connectivity while expanding the performance potential of applications in data center, high-performance computing, and embedded environments. ConnectX-2 VPI’s unified I/O technology provides a one-wire solution for any networking, clustering, storage, and management application, with enhanced quality of service to deliver high application productivity.

The ConnectX-2 VPI adapter card includes both a 40Gb/s InfiniBand QSFP port and a 10 Gigabit Ethernet SFP+ port, giving users both technologies in a single dense solution that saves server real estate, reduces power consumption, and consolidates networking infrastructure. Server and storage providers benefit from this flexibility through reduced qualification and certification costs.

“With their low power consumption and fabric flexibility, ConnectX-2 VPI adapters simplify I/O system design and lower the cost for IT managers to deploy infrastructure that meets the challenges of a dynamic data center,” said John Monson, vice president of marketing at Mellanox Technologies. “ConnectX-2 VPI combines superior interconnect bandwidth and latency performance with I/O infrastructure agility to provide a robust connectivity solution for data centers and high-performance, high-transactional computing environments.”

“This technology allows us to provide greater high-performance computing resources to researchers in our national security programs by simplifying the design, and lowering the cost and power requirements, of our scalable units for scientific simulation clusters,” said Mark Seager, Livermore Computing Assistant Department Head for Advanced Technology at Lawrence Livermore National Laboratory. “In addition, these new adapters enable higher Lustre file system performance with greater connection flexibility between the InfiniBand cluster interconnect and our 10 Gigabit Ethernet storage area network.”

ConnectX-2 consumes up to 30% less power than its predecessor, helping data centers lower their power and cooling costs for server and storage I/O. ConnectX-2, with its integrated NIC and PHY, provides additional cost and power savings by minimizing board real estate. The InfiniBand port delivers the highest bandwidth and lowest latency available to high-performance and transaction-sensitive applications – up to 40Gb/s of bandwidth with latencies as low as 1 microsecond. The Ethernet port delivers 10Gb/s bandwidth with 6 microsecond TCP latency, or 3 microsecond RDMA latency with kernel bypass, for Low-Latency Ethernet (LLE) environments.

Software Support
ConnectX-2 VPI adapters are compatible with TCP/IP and OpenFabrics-based RDMA protocols and software, InfiniBand and cluster management software available from OEMs, and major operating system distributions.

ConnectX-2 VPI samples are available today with general availability in October. Pricing is available upon request.

About Mellanox
Mellanox Technologies is a leading supplier of end-to-end connectivity solutions for servers and storage that optimize data center performance. Mellanox products deliver market-leading bandwidth, performance, scalability, power conservation and cost-effectiveness while converging multiple legacy network technologies into one future-proof solution. For the best in performance and scalability, Mellanox is the choice for Fortune 500 data centers and the world’s most powerful supercomputers. Founded in 1999, Mellanox Technologies is headquartered in Sunnyvale, California and Yokneam, Israel. For more information, visit Mellanox at

Mellanox, ConnectX, InfiniBlast, InfiniBridge, InfiniHost, InfiniRISC, InfiniScale, and InfiniPCI are registered trademarks of Mellanox Technologies, Ltd. BridgeX, PhyX, and Virtual Protocol Interconnect are trademarks of Mellanox Technologies, Ltd. All other trademarks are property of their respective owners.
