Mellanox Launches Intel Processor-Based 'Helios' High Performance Cluster at the Mellanox Cluster Center

Mellanox 20Gb/s InfiniBand and Dual-Core Intel® Xeon®-based Servers Empower Mellanox Development and Customer Test-bed Environment

SANTA CLARA, Calif. – September 25, 2006 – Mellanox™ Technologies Ltd, a global leader in semiconductor solutions for server and storage connectivity, today announced the availability of Helios, a high-performance cluster powered by Mellanox 20Gb/s InfiniBand and Dual-Core Intel® Xeon® 5100 Series processors, at the Mellanox Cluster Center. The Mellanox Cluster Center is a unique environment for developing, testing, and optimizing products to take full advantage of the performance characteristics of Mellanox InfiniBand interconnects.

“20Gb/s DDR InfiniBand helps eliminate the bottlenecks of multi-core environments and provides the characteristics needed to maximize the cluster’s compute power,” said Thad Omura, vice president of product marketing at Mellanox Technologies. “Helios equips the Mellanox Cluster Center with a testing platform that utilizes Mellanox’s leading-edge InfiniBand technology and Intel’s industry-leading multi-core processors.”

“Companies such as Mellanox are developing the InfiniBand architecture, a key element for high-performance applications in an Intel multi-core environment,” said Jim Pappas, director of initiative marketing at Intel Corporation. “RDMA and network acceleration technologies are necessary to maximize performance and the CPU cycles dedicated to application use, providing the ability to solve more complex problems in less time. This is important for future application usage, such as virtual vehicle design or earthquake prediction.”

Helios Configuration
Helios comprises 32 Rackable Systems c1000 DC-powered rack-mount servers, each containing two Dual-Core Intel® Xeon® 5100 Series processors, for a total of 128 cores. The cluster is expected to be upgraded in the future with Quad-Core Intel® Xeon® 5300 Series processors, for a total of 256 cores. Mellanox 20Gb/s InfiniBand PCI Express adapters with MemFree technology connect the servers through Flextronics 144-port 20Gb/s switches in a full fat-tree, non-blocking network architecture. Lightweight 30 AWG InfiniBand cables from W. L. Gore & Associates, Inc. interconnect the servers and switches. Each server includes 8GB of FBD host memory from WinTec Industries. The full 128-core Helios configuration delivers more than 1 TFlop as measured with the Linpack benchmark.

“The Helios cluster is an ideal configuration for today’s high performance computing environments,” said Colette LaForce, vice president of marketing at Rackable Systems. “The combination of Rackable Systems’ highly dense, highly reliable DC-powered servers with Mellanox 20Gb/s InfiniBand interconnect and Intel’s power-efficient CPUs achieves leading performance per watt for the data center.”

“The Fully-Buffered DIMM architecture provides an easy path to accommodate high-speed and high-capacity memory requirements,” said Simon Chen, senior vice president at WinTec Industries. “We have seen quick adoption of FB-DIMM by system integrators on the new Intel Core architecture-based platforms.”

“High-performance applications require not only low latency and high bandwidth, but also full transport offload and transport flexibility,” said Gilad Shainer, senior manager of technical marketing at Mellanox Technologies. “Helios provides a testing environment that takes advantage of the Mellanox architecture for superior cluster efficiency and flexibility.”

Platform Environments 
Helios provides multiple testing environments, as well as options for private environments that can be rapidly configured and brought up. New distributions from Red Hat and Novell, as well as Microsoft Compute Cluster Server 2003, are expected to be available with the OpenFabrics InfiniBand distribution – OFED (OpenFabrics Enterprise Distribution) – for Linux operating systems, and WinIB for Windows.

"The Mellanox Cluster Center will provide an excellent high performance testing environment for our gridMathematica developers and users," said Joy Costa, Wolfram Partnerships Group at Wolfram Research, Inc. "Mathematica users are known for pushing the limits of technical computing. Mellanox 20Gb/s InfiniBand provides a significant performance boost for our users, enabling them to significantly increase the complexity of the problems they can solve."

Located in Santa Clara, California, the Mellanox Cluster Center provides on-site technical support and enables scheduled sessions on site or remotely. For more information, visit http://www.mellanox.com/applications/clustercenter.php

About Mellanox

Mellanox Technologies is a leader in high-performance interconnect solutions that consolidate communications, computing, management, and storage onto a single fabric. Based on InfiniBand technology, Mellanox adapters and switch silicon are the foundation for virtualized data centers and high-performance computing fabrics that deliver optimal performance, scalability, reliability, manageability and total cost of ownership.

Founded in 1999, Mellanox Technologies is headquartered in Santa Clara, California and Yokneam, Israel. For more information on Mellanox’s solutions, please visit www.mellanox.com.

Mellanox is a registered trademark of Mellanox Technologies, Inc., and InfiniBlast, InfiniBridge, InfiniHost, InfiniRISC, InfiniScale, and InfiniPCI are trademarks of Mellanox Technologies, Inc. All other trademarks are property of their respective owners.

For more information:
Mellanox Technologies, Inc.
Brian Sparks
408-970-3400
media@mellanox.com

