Ultra-Low 1 Microsecond Application Latency and 20Gb/s Bandwidth Set the Bar for High-Performance Computing, Data Center Agility, and Extreme Transaction Processing
SANTA CLARA, CA and YOKNEAM, ISRAEL – March 26, 2007 – Mellanox® Technologies, Ltd. (NASDAQ: MLNX), a leading supplier of semiconductor-based high-performance interconnect products, today announced the availability of the industry’s only 10 and 20Gb/s InfiniBand I/O adapters that deliver ultra-low 1 microsecond (µs) application latencies. The ConnectX IB fourth-generation InfiniBand Host Channel Adapters (HCAs) provide unparalleled I/O connectivity performance for servers, storage, and embedded systems optimized for high-throughput and latency-sensitive clusters, grids and virtualized environments.
“Today’s servers integrate multiple dual and quad-core processors with high bandwidth memory subsystems, yet the I/O limitations of Gigabit Ethernet and Fibre Channel effectively degrade the system’s overall performance,” said Eyal Waldman, chairman, president and CEO of Mellanox Technologies. “ConnectX IB 10 and 20Gb/s InfiniBand adapters balance I/O performance with powerful multi-core processors responsible for executing mission-critical functions that range from applications which optimize Fortune 500 business operations to those that enable the discovery of new disease treatments through medical and drug research.”
Building on the success of the widely deployed Mellanox InfiniHost adapter products, ConnectX IB HCAs extend InfiniBand’s value with new performance levels and capabilities.
- Leading performance: Industry’s only 10 and 20Gb/s I/O adapters with ultra-low 1µs RDMA write latency and 1.2µs MPI ping latency[1], and a high uni-directional MPI message rate of 25 million messages per second[2]. The InfiniBand ports connect to the host processor through a PCI Express x8 interface.
- Extended network processing offload and optimized traffic and fabric management: New capabilities including hardware reliable multicast, enhanced atomic operations, hardware-based congestion control and granular quality of service.
- Increased TCP/IP application performance: Integrated stateless-offload engines move compute-intensive protocol stack processing off the host processor, improving application execution efficiency.
- Higher Scalability: Scalable and reliable connected transport services and shared receive queues enhance scalability of high-performance applications to tens of thousands of nodes.
- Hardware-based I/O virtualization: Support for virtual service end-points, virtual address translation/DMA remapping, and per-virtual-machine isolation and protection, facilitating native InfiniBand performance for applications running in virtual servers for enterprise data center (EDC) agility and service-oriented architectures (SOA).
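To put the headline figures above in perspective, a back-of-the-envelope calculation (illustrative arithmetic only; the two constants are the numbers quoted in this release) shows that sustaining 25 million messages per second leaves roughly a 40-nanosecond budget per message, far below the 1.2µs ping latency, which implies the adapter keeps many messages in flight concurrently rather than serializing them:

```python
# Back-of-the-envelope figures derived from the release's quoted numbers.
MPI_PING_LATENCY_US = 1.2    # MPI ping latency, microseconds
MESSAGE_RATE_PER_S = 25e6    # uni-directional MPI message rate

# Time budget per message if the quoted rate were sustained back-to-back.
per_message_us = 1e6 / MESSAGE_RATE_PER_S          # 0.04 us (40 ns)

# Concurrency needed to sustain that rate at that latency (Little's law):
# messages in flight = latency * rate.
messages_in_flight = MPI_PING_LATENCY_US * 1e-6 * MESSAGE_RATE_PER_S

print(f"per-message budget: {per_message_us:.2f} us")   # 0.04 us
print(f"messages in flight: {messages_in_flight:.0f}")  # 30
```

The gap between the per-message budget and the ping latency is what the deep hardware pipelining of a message-rate benchmark exploits.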
Leading OEM Support
“Our high-performance BladeSystem c-Class customer applications are increasingly relying on lower interconnect latency to improve performance and keep costs in check,” said Mark Potter, vice president of the BladeSystem Division at HP. “With the promise of even better application latency, HP's c-Class blades featuring the forthcoming Mellanox ConnectX IB HCAs will further enhance HP's industry-leading 4X DDR InfiniBand capability, bringing new dimensions to how Fortune 500 companies deploy clusters and improve ROI.”
“Clearly InfiniBand is reaching market maturity with this fourth generation server host chip and adapter level interface technology from Mellanox,” said Bill Erdman, marketing director of Cisco Systems Server Virtualization Business Unit. “As we bring these host interface cards to market over the next several calendar quarters, fully integrated with our scalable Server Fabric Switching product line, customers will see significant latency improvements and greater end-to-end delivery reliability, especially when scaling large computing clusters with thousands of high-end compute nodes.”
“Scaling high-performance applications and clusters without compromising performance is becoming a critical need, driven by ever-increasing computation needs,” said Andy Bechtolsheim, chief architect and senior vice president for Sun Microsystems. “ConnectX IB HCAs offer novel scalability features that complement our vision for delivering compelling solutions to our end users.”
“IT organizations in industries ranging from HPC to financial services are continually looking at ways to get the most out of their critical software applications,” said Patrick Guay, senior vice president of marketing at Voltaire. “The increased bandwidths and lower latencies delivered in Mellanox’s ConnectX InfiniBand adapters combined with Voltaire’s multi-service switching platforms will bring significantly greater application acceleration benefits to our customers.”
I/O as a Competitive Advantage
The performance and capabilities of ConnectX IB HCAs support the most demanding high-performance computing applications while reducing research and development budgets.
“Today’s science demands continue to outpace the number of available engineers and their associated budgets, driving the need for more productivity per scientist,” said Shawn Hansen, director of marketing, Windows Server Division at Microsoft Corporation. “Technologies that improve I/O latencies and message rates, like ConnectX IB adapters, enhance the ability of Windows Compute Cluster Server to deliver high performance computing for the mainstream researcher and engineer.”
In addition, the volume of transactions and data transferred in Fortune 500 companies is increasing exponentially, jeopardizing profits and competitiveness for IT infrastructures that cannot scale to address the additional load.
“Extremely high volumes of concurrent users and increasingly complex transactions are making access to data one of the greatest bottlenecks to performance in grid computing,” said Geva Perry, chief marketing officer at GigaSpaces. “ConnectX IB InfiniBand HCAs offer leading latency, throughput and reliable performance that can help eliminate interconnect-related data latency degradations, and are therefore a perfect complement to GigaSpaces’ products for increasing overall application performance and scalability.”
Enhanced Virtual Infrastructure Performance and ROI
ConnectX IB InfiniBand HCAs offer Channel I/O Virtualization (CIOV), which creates virtualized service end-points for virtual machines and SOA deployments. CIOV enables virtualized provisioning of all I/O services, including clustering, communications, storage and management, and accelerates I/O virtualization in hardware, complementing CPU and memory virtualization technologies from Intel and AMD.
“When used with the Xen virtualization technology inside of SUSE Linux Enterprise Real Time, ConnectX IB InfiniBand adapters can lower I/O costs and improve I/O utilization,” said Holger Dyroff, vice president of SUSE Linux Enterprise product management at Novell. “Service-oriented architectures demand native I/O performance from virtual machines and Mellanox’s I/O virtualization architecture perfectly complements Novell's technical leadership in delivering mission-critical operating systems to our customers.”
ConnectX IB InfiniBand HCAs deliver leading performance while maintaining compatibility with operating systems and networking software stacks. For high-performance remote direct memory access (RDMA) based operations, the adapters are fully backward compatible with the OpenFabrics (www.openfabrics.org) Enterprise Distribution (OFED) and Microsoft WHQL-certified Windows InfiniBand (WinIB) protocol stacks, requiring only a device driver upgrade. RDMA and InfiniBand hardware transport offload is proven to deliver software-transparent application performance improvements. For traditional TCP/IP-based applications, the adapters support standard operating system stacks, including stateless-offload and Intel QuickData technology enhancements.
“PCI Express and Intel QuickData technology provide a low disruption path to scaling I/O by respectively increasing bandwidth and efficiencies for I/O in Intel-based servers,” said Jim Pappas, Director of Technology Initiatives for Intel’s Digital Enterprise Group. “With innovative implementation of these technologies by companies like Mellanox, I/O on Intel’s enterprise platforms continues to be accelerated for the demanding multi-core application needs of today and the future.”
Pricing and Availability
10K volume pricing for ConnectX IB HCA silicon adapters is $165 (dual-port 10Gb/s) and $215 (dual-port 10 or 20Gb/s). 10K volume pricing for ConnectX IB HCA adapter cards is $369 (dual-port 10Gb/s) and $479 (dual-port 10 or 20Gb/s). ConnectX IB InfiniBand HCA silicon devices and PCI Express-based adapter cards are sampling today, and general availability is expected in the second quarter of 2007. Value-added adapter solutions from OEM channels are expected soon after.
Mellanox Technologies is a leading supplier of semiconductor-based, high-performance, InfiniBand interconnect products that facilitate data transmission between servers, communications infrastructure equipment, and storage systems. The company’s products are an integral part of a total solution focused on computing, storage and communication applications used in enterprise data centers, high-performance computing and embedded systems. In addition to supporting InfiniBand, Mellanox's next generation of products support the industry-standard Ethernet interconnect specification.
Founded in 1999, Mellanox Technologies is headquartered in Santa Clara, California and Yokneam, Israel. For more information on Mellanox’s solutions, please visit www.mellanox.com.
1. The performance data was measured with MVAPICH 0.9.7 MPI on Intel® quad-core Xeon™ 5300 series Bensley servers.
2. 8-core, uni-directional MVAPICH 0.9.9 MPI message rate benchmark on Intel® quad-core Xeon™ 5300 series Bensley servers.
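MPI latency figures such as those footnoted above are conventionally measured with a ping-pong benchmark: one rank sends a small message, its peer echoes it back, and half the averaged round-trip time is reported as the one-way latency. A minimal sketch of that methodology over a local socket pair (ordinary OS socket transport, not InfiniBand, so the number it prints will be orders of magnitude above 1.2µs) might look like:

```python
import socket
import threading
import time

MSG = b"x" * 8        # small fixed payload, as in a latency benchmark
ITERATIONS = 10_000

def echo(peer: socket.socket) -> None:
    """Peer side: bounce every message straight back to the sender."""
    for _ in range(ITERATIONS):
        peer.sendall(peer.recv(len(MSG)))

a, b = socket.socketpair()
t = threading.Thread(target=echo, args=(b,))
t.start()

start = time.perf_counter()
for _ in range(ITERATIONS):
    a.sendall(MSG)    # ping
    a.recv(len(MSG))  # pong
elapsed = time.perf_counter() - start
t.join()

# One-way latency = half the averaged round-trip time.
one_way_us = elapsed / ITERATIONS / 2 * 1e6
print(f"one-way latency over a local socket pair: {one_way_us:.2f} us")
```

Production MPI latency benchmarks follow the same ping-pong structure, but run over the RDMA transport with kernel-bypass, which is where the microsecond-scale figures come from.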
Mellanox is a registered trademark of Mellanox Technologies, and ConnectX, InfiniBlast, InfiniBridge, InfiniHost, InfiniRISC, InfiniScale, and InfiniPCI are trademarks of Mellanox Technologies. All other trademarks are property of their respective owners.
For more information: