Mellanox InfiniScale™ IV Switch Architecture Provides Massively Scalable 40Gb/s Server and Storage Connectivity

Silicon Architecture Enables 120Gb/s Inter-switch InfiniBand Connections, Lower Latency, Unsurpassed Scalability and Adaptive Routing

SC07, RENO, NV – November 12, 2007 – Mellanox™ Technologies, Ltd. (NASDAQ: MLNX; TASE: MLNX), a leading supplier of semiconductor-based server and storage interconnect products, announced the InfiniScale IV silicon switch architecture, which further extends InfiniBand’s leadership in bandwidth, latency, scalability and optimized data traffic management. InfiniScale IV builds on the success of previous InfiniScale switch products, which have been deployed in data centers containing approximately 2 million 10, 20, 30 and 60Gb/s InfiniBand silicon ports. New switch systems based on the InfiniScale IV architecture, supporting up to 40Gb/s per port and 120Gb/s for inter-switch links, are expected to be available in the latter part of 2008 from several leading server and infrastructure system OEMs. InfiniScale IV products will continue to fuel the fast-growing InfiniBand switch system market, which IDC estimates has a port shipment CAGR of 53% from 2006 to 2011*.

“As thousand-node server and storage clusters are becoming mainstream business and research tools, we believe it is critical to provide the most scalable switch infrastructure building blocks that deliver the highest throughput, lowest switch hop latency and highly-efficient hardware-based traffic management capabilities,” said Eyal Waldman, chairman, president and CEO of Mellanox Technologies. “The InfiniScale IV architecture offers next generation I/O performance that properly scales with multi-core CPU systems demanded by enterprise and high performance computing applications including database, design automation, financial services, grids, health services, media creation, oil and gas, virtualization, weather analysis, web services, and more.”

InfiniScale IV architecture benefits include:

  • 40Gb/s server and storage interconnect -- Rapid advances in server architecture, including multi-core CPUs, faster internal buses, and increased utilization due to virtualization, have driven the need for higher I/O speeds. Servers are now shipping with the PCI Express Gen2 bus specification, which provides 40Gb/s of bandwidth on an x8 connection – a perfect match between Mellanox InfiniScale IV switching and upcoming 40Gb/s ConnectX IB adapters.
  • 120Gb/s switch-to-switch interconnect -- InfiniBand users can enjoy 120Gb/s switch-to-switch bandwidth as early as the end of 2008 (years ahead of other industry initiatives to provide similar levels of bandwidth), using a variety of cabling methods. These 120Gb/s links can be used to consolidate multiple cables into a few high-speed connections when building large, non-blocking fabrics, simplifying management while reducing cost and complexity.
  • 60 nanosecond switch hop latency -- In 2007, Mellanox began shipping ConnectX IB adapters, which deliver 1 microsecond application-to-application latency. Faster switching through the fabric is now an even more important component of total latency -- especially since typical InfiniBand fabrics include 5 or more hops through multiple switch silicon devices.
  • 36-port switch devices for optimal scalability -- The InfiniScale IV architecture will be used to build 36-port switch devices. This allows InfiniBand switch designers to create switching networks with fewer hops, further reducing end-to-end latency. For example, a fully non-blocking 648-port switch fabric can be designed with a maximum of 3 switch hops, as opposed to the 5 hops required with 24-port switch devices (see the worked arithmetic after this list).
  • Adaptive Routing to optimize data traffic flow -- A key fabric differentiator of InfiniBand is the use of multiple paths between any two points, all of which can carry traffic (unlike Ethernet, where Spanning Tree Protocol limits traffic to a single active path). When unexpected traffic patterns cause paths to be overloaded, Adaptive Routing in the new architecture can automatically move traffic to less congested paths.
  • Congestion control to avoid hot spots -- Congestion control is a complementary hardware mechanism to Adaptive Routing; it adjusts data injection rates at the source so that the full bandwidth of the fabric is used efficiently while traffic contention scenarios are avoided.
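To illustrate the scaling arithmetic behind the 648-port figure referenced above (a rough sketch assuming a standard two-tier, non-blocking fat-tree topology, which this release does not spell out):

  36-port leaf switch: 18 ports to servers, 18 uplink ports to spine switches
  Spine switches: 18 are required, each connecting once to every leaf, so up to 36 leaves are supported
  Server ports: 36 leaves x 18 ports = 648, with a worst-case path of leaf-spine-leaf = 3 switch hops
  24-port switches: a two-tier fabric tops out at 24 x 12 = 288 ports, so 648 ports would require a third tier and up to 5 switch hops

At the quoted 60 nanoseconds per hop, the two-hop reduction saves roughly 120 nanoseconds of switching latency on the longest paths.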

“Products built utilizing the InfiniScale IV architecture will enable computing systems tackling complex and challenging workloads to scale to higher performance levels,” said Jie Wu, Research Manager for IDC's Technical Computing Systems program. “This is becoming increasingly important in a number of markets including technical computing and certain enterprise arenas, where applications are more sensitive to I/O bandwidth and latency."

Availability in 2008
Silicon products using the InfiniScale IV architecture will sample to customers in early 2008. Switch systems utilizing this silicon are expected in the latter part of 2008.

Come see Mellanox at SC07, Reno, NV, November 12-16, 2007
Visit Mellanox at Booth #127, where you can see the robust ecosystem of 20Gb/s InfiniBand and the enabling technologies for 40Gb/s, including:

  • Bladed server systems and server motherboards
  • Storage systems with native and backend InfiniBand connectivity
  • InfiniBand switches and gateways
  • 40Gb/s InfiniBand demonstrations using ConnectX IB adapters over copper and fiber cables
  • Over 20 OEMs and ISVs will present in the Mellanox Theater about their InfiniBand-connected products, complementary software solutions, and the overall benefits to end-users

Additional information can be found on Mellanox’s website at http://www.mellanox.com/sc07.      

About Mellanox
Mellanox Technologies is a leading supplier of semiconductor-based, high-performance InfiniBand and Ethernet connectivity products that facilitate data transmission between servers, communications infrastructure equipment and storage systems. The company’s products are an integral part of a total solution focused on computing, storage and communication applications used in enterprise data centers, high-performance computing and embedded systems.

Founded in 1999, Mellanox Technologies is headquartered in Santa Clara, California and Yokneam, Israel. For more information, visit Mellanox at www.mellanox.com.
*IDC, “Worldwide InfiniBand 2007-2011 Forecast,” Doc #206902, May 2007.

###

Safe Harbor Statement under the Private Securities Litigation Reform Act of 1995:
All statements included or incorporated by reference in this release, other than statements or characterizations of historical fact, are forward-looking statements. These forward-looking statements are based on our current expectations, estimates and projections about our industry and business, management's beliefs and certain assumptions made by us, all of which are subject to change.

Forward-looking statements can often be identified by words such as "anticipates," "expects," "intends," "plans," "predicts," "believes," "seeks," "estimates," "may," "will," "should," "would," "could," "potential," "continue," "ongoing," similar expressions and variations or negatives of these words. These forward-looking statements are not guarantees of future results and are subject to risks, uncertainties and assumptions that could cause our actual results to differ materially and adversely from those expressed in any forward-looking statement.

The risks and uncertainties that could cause our results to differ materially from those expressed or implied by such forward-looking statements include the continued growth in demand for our InfiniScale IV products; the expected rate of growth for the InfiniBand switch system market; the continued, increased demand for industry standards-based technology; our ability to react to trends and challenges in our business and the markets in which we operate; our ability to anticipate market needs or develop new or enhanced products to meet those needs; the adoption rate of our products; our ability to establish and maintain successful relationships with our distributors; our ability to compete in our industry; fluctuations in demand, sales cycles and prices for our products and services; our ability to protect our intellectual property rights; general political, economic and market conditions and events; and other risks and uncertainties described more fully in our documents filed with or furnished to the Securities and Exchange Commission.

More information about the risks, uncertainties and assumptions that may impact our business are set forth in our Form 10-Q filed with the SEC on August 8, 2007, and our Form 10-K filed with the SEC on March 26, 2007, including “Risk Factors”.  All forward-looking statements in this press release are based on information available to us as of the date hereof, and we assume no obligation to update these forward-looking statements.

Mellanox, ConnectX, InfiniBlast, InfiniBridge, InfiniHost, InfiniRISC, InfiniScale, and InfiniPCI are registered trademarks of Mellanox Technologies. All other trademarks are property of their respective owners.


For more information:
Mellanox Technologies
Brian Sparks
408-970-3400
media@mellanox.com

