SANTA CLARA, CA – March 6, 2006 – Mellanox™ Technologies Ltd, the leader in business and technical computing interconnects, announced the immediate availability of a 10Gb/s InfiniBand® adapter card priced at $125 for OEM volume purchase orders. The MHES14 is a single-port, high-bandwidth, low-latency, 4X InfiniBand host channel adapter (HCA) card that provides data centers with a cost-effective solution to consolidate communications, computing, management and storage traffic onto a single fabric.
A single InfiniBand HCA card in each server and storage node is the only I/O adapter required to interconnect a highly scalable and reliable grid, as opposed to several multi-port Enterprise Gigabit Ethernet NICs and Fibre Channel HBAs. InfiniBand I/O consolidation simplifies cabling, eases system management, eliminates unnecessary fabric infrastructure equipment, reduces power, and delivers optimal total cost of ownership (TCO).
I/O Consolidation and Virtualization with InfiniBand
Virtual infrastructure solutions, such as those from industry leader VMware, when deployed over InfiniBand will enable off-the-shelf data center applications (CRM, ERP, order processing, financial, payroll, inventory management, and others) to run transparently while realizing the inherent I/O consolidation and performance benefits of a high-bandwidth, low-latency interconnect. As part of the VMware Community Source program, Mellanox is taking a leadership position in cooperative development of high performance virtual infrastructure solutions based on VMware ESX Server.
“InfiniBand’s ability to partition I/O to multiple end-points, and consolidate I/O across data center applications holds the promise for added flexibility and cost savings within VMware environments,” said Bernie Mills, senior director of developer programs at VMware. “Mellanox has a clear commitment to delivering cost-effective virtual infrastructure solutions and has been actively involved in the VMware Community Source program since its inception. We continue to look forward to working with them in concert with other InfiniBand vendors within the community.”
Grid Computing with InfiniBand
Multi-tiered server architectures have been deployed in data centers to provide dedicated computing resources for fixed functions. As the demands on data centers increasingly fluctuate, this multi-tiered model has proven inefficient, with IT managers spending more than 70% of their time on maintenance and resource allocation. Modern data centers are now opting for grid computing architectures, where applications such as web servers, middleware servers, and storage servers and systems can be dynamically deployed from a common shared pool of server and storage resources. Fabric solutions that support the most demanding applications are a requirement to share resources, ease manageability, and lower total cost of ownership.
“InfiniBand is architected to efficiently support multiple traffic channels on a single interface making it the ideal grid computing fabric that interconnects both server and native InfiniBand storage nodes,” said Thad Omura, vice president of product marketing at Mellanox Technologies. “By offering the MHES14 InfiniBand HCA at Enterprise Gigabit Ethernet NIC prices and below Fibre Channel HBAs, Mellanox is removing the price barrier to data center I/O consolidation and easing rapid expansion of clustered computing and storage resources.”
Ecosystem Ready for Mass Deployment
With Linux® distributions supporting InfiniBand through open source development from OpenIB.org, in addition to market-wide operating system support for Windows®, AIX™, HP-UX™, Mac® OS X, Solaris™, and VxWorks™, the application interfaces required for InfiniBand I/O consolidation are available today.
In addition, the recent production availability of native InfiniBand storage systems from several leading vendors provides key building blocks to deploy a virtualized data center over a unified InfiniBand fabric.
Highest Price-Performance Benefits
Mellanox’s MHES14 adapter card delivers TCP/IP end-to-end latency of less than eight microseconds, scales deterministically, and consolidates I/O onto a single fabric. By comparison, Enterprise Gigabit Ethernet NICs must aggregate multiple ports to approach comparable bandwidth, exhibit TCP/IP end-to-end latencies roughly four times longer, and offer limited scalability and no efficient consolidation capability, severely degrading application performance.
MHES14 Details and Availability
The MHES14 HCA card can be inserted directly into PCI Express x4 or wider slots of standard servers and storage platforms to provide 10Gb/s node connectivity for InfiniBand fabrics, with a maximum peak throughput of 700MB/s. The HCA features remote direct memory access (RDMA), hardware transport, and advanced per queue pair (QP) QoS services, and is compliant with the IBTA v1.2 specification. It also features MemFree technology, which stores connection information in system memory rather than in local memory on the adapter card itself, enabling lower power consumption (around 3W), lower pricing, and a smaller form factor (the size of a credit card). The HCA utilizes a 4X InfiniBand compliant connector for copper cables, providing the lowest cost 10Gb/s connection available; in addition, a pluggable media adapter module can be used for fiber connections up to 300m. The MHES14 is available today at $125 to OEMs in quantities of 10,000 units, or $150 in quantities of 5,000 units.
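The 700MB/s peak throughput figure can be sanity-checked with back-of-envelope link arithmetic: both 4X InfiniBand SDR and first-generation PCI Express x4 signal at 2.5Gb/s per lane with 8b/10b line encoding, yielding 1000MB/s of raw data bandwidth, of which a portion is consumed by packet and protocol overhead. The sketch below illustrates this; the overhead fraction is an assumption for illustration, not a figure from the release.

```python
# Back-of-envelope check of the quoted 700MB/s peak throughput.
# Assumption: the overhead fraction (20-30%) is illustrative; the actual
# protocol efficiency depends on transfer size and platform.

LANES = 4                 # 4X InfiniBand / PCI Express x4
SIGNAL_RATE_GBPS = 2.5    # per-lane signaling rate, Gb/s
ENCODING = 8 / 10         # 8b/10b line encoding: 8 data bits per 10 signal bits

# Raw data bandwidth after line encoding, in MB/s (1 Gb/s = 125 MB/s)
raw_mbs = LANES * SIGNAL_RATE_GBPS * ENCODING * 125   # 1000 MB/s

# Effective throughput under assumed protocol overhead
for overhead in (0.2, 0.3):
    print(f"{overhead:.0%} overhead -> {raw_mbs * (1 - overhead):.0f} MB/s")
```

At an assumed 30% combined packet and protocol overhead, the effective rate lands at the quoted 700MB/s, so the figure is consistent with a PCI Express x4 host interface.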
Mellanox will be at the Intel Developer Forum, Booth #411 during exhibition hours. Several InfiniBand technologies will be showcased including:
- Data center virtualization demonstrations – one system utilizing VMware and another system with Xen virtualization software
- InfiniBand clustered database solutions demonstrating leading transactions per second performance and optimal CPU cost-per-transaction
- High-performance computing and storage applications that are taking advantage of InfiniBand double-data-rate 20Gb/s performance
- InfiniBand-over-Cat6 physical media capabilities in partnership with KeyEye Communications
Mellanox Technologies is the leader in high-performance interconnect solutions that consolidate communications, computing, management, and storage onto a single fabric. Based on InfiniBand technology, Mellanox adapters and switch silicon are the foundation for virtualized data centers and high-performance computing fabrics that deliver optimal performance, scalability, reliability, manageability and total cost of ownership.
Founded in 1999, Mellanox Technologies is headquartered in Santa Clara, California. For more information on Mellanox’s solutions, please visit www.mellanox.com.
Mellanox is a registered trademark of Mellanox Technologies, Inc. and InfiniBlast, InfiniBridge, InfiniHost, InfiniRISC, InfiniScale, and InfiniPCI are trademarks of Mellanox Technologies, Inc. All other trademarks are property of their respective owners.
For more information:
Mellanox Technologies, Inc.
Thad Omura, Vice President of Product Marketing