InfiniBand White Papers

General White Papers

White Paper Synopsis
Powering 8K Video for Next-Generation IP Broadcasting (2017) Next generation high performance IP-based studios are revolutionizing the broadcast industry. This trend is even more apparent when it comes to 4K/Ultra High Definition Video (UHDV), 8K with/without High Dynamic Range (HDR), High Frame Rate (HFR), and other technologies.
Deploying Apache™ Hadoop® with Quanta QCT and Mellanox VPI Solutions
(May 2014)
In this article we review a five-node cluster configuration. Scaling the deployment is easily done by adding more Slave Nodes. When scaling the deployment, take into consideration the amount of RAM in the Master Node as well as its disk space.
Highly Accurate Time Synchronization with ConnectX®-3 and TimeKeeper®
(March 2013)
Upgrading your trading platforms to reliable and precise time is achievable at low cost and with rapid deployment by combining Mellanox's ConnectX®-3 network adapter cards with TimeKeeper® Client software. TimeKeeper can assure sub-microsecond time precision from either the newer IEEE 1588 Precision Time Protocol (PTP) or the standard Network Time Protocol (NTP) over shared (not dedicated) network links. Flexibility in time sources and automatic adaptability to network quality allow for incremental changes to enterprise systems: critical components get immediate high-precision timing, while less critical components see incremental performance improvement. For high-quality links and time feeds, applications can see time locked to reference with well under 500 nanoseconds of variation.
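The sub-microsecond precision discussed above rests on the same offset-and-delay calculation that underlies both NTP and PTP. As an illustrative sketch (not TimeKeeper code), a client estimates its clock offset from four timestamps:

```python
# Conceptual sketch of the standard NTP/PTP-style offset and delay
# calculation. All timestamps are in nanoseconds for this example.
def clock_offset_and_delay(t1, t2, t3, t4):
    """t1: client send, t2: server receive, t3: server send, t4: client receive.
    Returns (estimated_offset, round_trip_delay), assuming symmetric paths."""
    offset = ((t2 - t1) + (t3 - t4)) / 2.0
    delay = (t4 - t1) - (t3 - t2)
    return offset, delay

# Example: client clock runs 500 ns behind the server, 2000 ns one-way path,
# 100 ns server turnaround.
offset, delay = clock_offset_and_delay(0.0, 2500.0, 2600.0, 4100.0)
print(offset, delay)  # 500.0 4000.0
```

The symmetric-path assumption is exactly why precision degrades on congested shared links; adapting to measured network quality is what products like TimeKeeper add on top of this basic arithmetic.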
FDR InfiniBand is Here
High-speed InfiniBand server and storage connectivity has become the de facto scalable solution for systems of any size – ranging from small, departmental compute infrastructures to the world's largest PetaScale systems. Its rich feature set and design flexibility enable users to deploy InfiniBand connectivity between servers and storage in various architectures and topologies to meet performance and/or productivity goals. These benefits make InfiniBand the interconnect of choice for scalable systems.
InfiniBand FAQ
(Revised December 2014)
Frequently asked questions about the InfiniBand protocol and products.
Security in Mellanox Technologies InfiniBand Fabrics
InfiniBand is a new systems interconnect designed for data center networks and clustering environments. Already, it is the fabric of choice for high-performance computing, education, life sciences, oil and gas, and auto manufacturing, and increasingly for financial services applications.
TIBCO, HP and Mellanox High Performance Extreme Low Latency Messaging
With the recent release of TIBCO FTL™, TIBCO is once again changing the game in high-performance messaging middleware. Many solutions have emerged that try to provide next-generation systems with extremely low latency, but they do so by sacrificing the traditional features and functions that mission-critical middleware solutions require. TIBCO's approach is to offer a middleware solution that delivers extremely low latency without sacrifice, providing the scalability to meet the demands of low-latency data distribution as an application grows from a few instances to thousands of instances.
Informatica, HP, and Mellanox/Voltaire Benchmark Report: Ultra Messaging accelerated across three supported interconnects
(February 2011)
The securities trading market is experiencing rapid growth in volume and complexity with a greater reliance on trading software, which is supported by sophisticated algorithms. As this market grows, so do the trading volumes, bringing existing IT infrastructure systems to their limits.
The Case for Low-Latency Ethernet
(March 2009)
The industry momentum behind Fibre Channel over Ethernet (FCoE) sets a significant precedent and raises the question of the best approach for server-to-server messaging (inter-process communication, or IPC) using zero-copy send/receive and remote DMA (RDMA) technologies over Ethernet.
The Case for InfiniBand over Ethernet
(April 2008) (also available in Japanese)
There are two competing technologies for IPC – InfiniBand and iWARP (based on 10GigE). If one were to apply the same business and technical logic behind the initial success of FCoE, one would conclude that InfiniBand over Ethernet (IBoE) makes the most sense. Here is why.
Importance of Unified I/O in VMware® ESX Servers
(March 2008) (also available in Japanese)
When it comes to unifying I/O on the servers, there are only two options – 10GigE NICs or InfiniBand HCAs. What should you deploy, especially in VMware ESX server environments?
InfiniBand Software and Protocol White Paper
(December 2007)
The InfiniBand software stack is designed from the ground up to enable ease of application deployment. IP and TCP socket applications can take advantage of InfiniBand performance without requiring any change to existing applications that run over Ethernet.
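The "no change to existing applications" claim comes from IPoIB, which presents the InfiniBand port to the OS as an ordinary IP interface. As a minimal illustration (a loopback echo here; on an IPoIB deployment the only difference would be binding to the IB interface's IP address), the following standard socket code would run unchanged:

```python
# Plain TCP echo using the standard socket API. Over IPoIB this exact code
# runs unmodified: the IPoIB driver exposes the InfiniBand port as a normal
# IP interface, so the app just binds to that interface's address.
import socket
import threading

def echo_server(sock):
    conn, _ = sock.accept()
    with conn:
        conn.sendall(conn.recv(1024))  # echo one message back

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))            # on IPoIB: bind to the ib0 address instead
srv.listen(1)
port = srv.getsockname()[1]
t = threading.Thread(target=echo_server, args=(srv,))
t.start()

cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(("127.0.0.1", port))
cli.sendall(b"hello over IPoIB")
reply = cli.recv(1024)
cli.close()
t.join()
srv.close()
print(reply)  # b'hello over IPoIB'
```

Applications that want the full performance benefit can later move from IPoIB to native RDMA verbs without touching the rest of the system.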
Using RDMA to increase processing performance
(April 2007)
Applications are increasing the demand for CPU processing performance and the amount of data being transferred between subsystems. Offloading data movement to I/O hardware increases the amount of CPU resources available for these applications, boosting the system’s performance.
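As a conceptual, in-process illustration of the zero-copy idea behind RDMA (this is not actual RDMA verbs code): a copy of a buffer is detached from later updates, while a zero-copy view always reflects the underlying memory. The same property is what lets RDMA hardware move application data directly, without intermediate CPU-driven copies:

```python
# Illustrative contrast between copy and zero-copy semantics in Python.
# RDMA lets the NIC read/write registered application buffers directly;
# a memoryview gives a loose in-process analogue: slicing bytes copies,
# while a memoryview slice is just a window onto the same buffer.
buf = bytearray(b"payload-to-transfer")

copied = bytes(buf[0:7])        # copy: a new, independent buffer
window = memoryview(buf)[0:7]   # zero-copy: a view onto the original

buf[0:7] = b"PAYLOAD"           # an update lands in the original buffer

print(copied)          # b'payload'  -- stale: the copy never sees the update
print(bytes(window))   # b'PAYLOAD'  -- the view reflects the buffer directly
```

Eliminating each such copy frees CPU cycles and memory bandwidth, which is the performance argument this paper develops.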
Why Compromise? - A discussion on RDMA versus Send/Receive and the difference between interconnect and application semantics
(November 2006)
A discussion on RDMA versus Send/Receive and the difference between interconnect and application semantics.

Data Center White Papers

Maximizing Server Performance with Mellanox Socket Direct™ Adapter
(December 2017)
With the exponential growth of data, enterprises and Cloud providers demand higher performance from servers and compute resources to perform real-time analysis on vast amounts of data. Data center servers are typically equipped with a multi-socket CPU board and a single high-speed network adapter. This paper explains how Mellanox's innovative Socket Direct technology can maximize data center return on investment by delivering much higher performance for multi-socket servers, reaching up to 25% more throughput, while reducing latency by up to 80% and reducing CPU utilization by up to 60%.
Faster Interconnects for Next-Generation Data Centers
(July 2015)
With the data deluge washing over today's data centers, IT infrastructure benefits from faster interconnects. Faster storage requires faster networks. Even more performance can be achieved by using iSER, a maturing standard for extending iSCSI with RDMA (Remote Direct Memory Access). Using iSER, high-performing storage can be connected to fast Ethernet links via iSCSI, speeding data transfers from the network to servers and storage systems. These technologies can be used together to replace aging high-speed interconnects, such as Fibre Channel links and older Ethernet links.
Turn Your Data Center into a Mega-Datacenter
(September 2013)
This paper describes the advantages of Mellanox's MetroX long-haul switch system, and how it allows you to move from the paradigm of multiple, disconnected, localized data centers to a single multi-point meshed mega-datacenter. In other words, remote data center sites can now be localized through long-haul connectivity, providing benefits such as faster compute, higher volume data transfer, and improved business continuity.
Power Saving Features in Mellanox Products
(January 2013)
This paper introduces the "green" fabric concept, presents the Mellanox power-efficient features under development as part of the European-Commission ECONET project, presents a real-world data center scenario, and outlines additional steps to be taken toward "green" fabrics. The features described in this paper can reduce power consumption by up to 43%. When summed over a real-world data center scenario, a total reduction of 13% in the power consumption of all network components is demonstrated. This reduction can amount to millions of dollars in savings over several years.
Cut I/O Power and Cost while Boosting Server Performance
(April 2009)
I/O technology plays a key role in the reduction of space and power in the data center, reducing TCO, and enhancing data center agility.
Virtualizing Data Center Memory for Performance and Efficiency
(February 2009)
By combining RNA Networks’ Memory Virtualization Platform with Mellanox Technologies’ unrivaled connectivity performance, data center architects can achieve new levels of performance with high efficiency and lower costs.
Consolidating Network Fabrics to Streamline Data Center Connectivity
(February 2007)
Cost and performance issues are pushing developers to seek convergence of interconnects in data centers. Both 10 Gigabit Ethernet and InfiniBand appear to have potential, but the demands are militating against Fibre Channel.
I/O Virtualization Using Mellanox InfiniBand and Channel I/O Virtualization (CIOV) Technology
(January 2007)
Server virtualization technologies offer many benefits that enhance agility of data centers to adapt to changing business needs, while reducing total cost of ownership.
InfiniBand in the Enterprise Data Center
(April 2006)
InfiniBand offers a compelling value proposition to IT managers who value data center agility and lowest total cost of ownership.
InfiniBand -- Industry Standard Data Center Fabric is Ready for Prime Time
(December 2005)
Server and storage clusters benefit today from industry-standard InfiniBand’s price, performance, stability, and widely available software leading to a convergence in the data center.
Deploying Quality of Service and Congestion Control in InfiniBand-based Data Center Networks
(November 2005)

The InfiniBand architecture defined by IBTA includes novel Quality of Service and Congestion Control features that are tailored perfectly to the needs of Data Center Networks.

Cloud White Papers

Introduction to Cloud Design

Cloud computing is a collection of technologies and practices used to abstract the provisioning and management of computer hardware. The goal is to simplify the user's experience so they can get the benefit of compute resources on demand, or, in the language of cloud computing, "as a service".
Achieving a High-Performance Virtual Network Infrastructure with PLUMgrid & Mellanox
Compute and storage virtualization has enabled elastic creation and migration of applications within and across data centers. Unfortunately, on-demand network infrastructure provisioning in multi-tenant cloud environments is still very rudimentary due to the complex nature of Physical Network Infrastructure (PNI) and limited network functionality in hypervisors. Even after this complexity is somehow mastered by the cloud operations and IT teams, even simple reconfigurations (let alone complex upgrades) of the network remain largely error-prone.
Solving I/O Bottlenecks to Enable Superior Cloud Efficiency
We already have 8 or even 16 cores on one CPU chip, hardware-based CPU virtualization, servers with hundreds of gigabytes of memory, and NUMA architectures with vast memory bandwidth (hundreds of GB/s of memory traffic on a standard server); even disks are now much faster with SSD technology. So it seems we can now efficiently consolidate our applications onto far fewer physical servers. Or are we missing something?
Building High Performance Storage for Hyper-V Cluster on Scale-Out File Servers over IB
This white paper demonstrates the capabilities and performance of the Violin Windows Flash Array (WFA), a next-generation All-Flash Array storage platform. Through the joint efforts of Microsoft and Violin Memory, WFA provides built-in high performance, availability, and scalability via the tight integration of Violin's All-Flash Array and the Microsoft Windows Server 2012 R2 Scale-Out File Server Cluster.

HPC White Papers

Introducing 200G HDR InfiniBand Solutions
(January 2018)
Over the past decade, no one has pushed the industry forward more than Mellanox. As the first to 40Gb/s, 56Gb/s and 100Gb/s bandwidth, Mellanox has both boosted data center and cloud performance and improved return on investment at a pace that far exceeds Moore's Law and even exceeds its own roadmap. To that end, Mellanox has now announced that it is the first company to enable 200Gb/s data speeds, with Mellanox Quantum™ switches, ConnectX®-6 adapters, and LinkX™ cables combining for an end-to-end 200G HDR InfiniBand solution in 2018. By doubling the previous data rate, only Mellanox can provide the necessary speed to meet the demands of the world's most data-intensive applications.
Mellanox In-Network Computing and Next Generation HDR 200G InfiniBand
(January 2018)
With the exponential growth of data that needs to be analyzed and the data resulting from ever-more complex workflows, the need for faster data movement has never been more challenging and critical to the worlds of High Performance Computing (HPC) and machine learning. Mellanox Technologies, the leading global supplier of end-to-end InfiniBand and Ethernet interconnect solutions and services for servers, storage, and hyper-converged infrastructure, is once again raising the bar with the introduction of an end-to-end HDR 200G InfiniBand product portfolio.
Real Solutions for the Challenges of the Post-Petascale Era
(December 2014)
Pushing the frontiers of science and technology will require extreme-scale computing with machines that are 500-to-1,000 times more capable than today's supercomputers. As researchers continuously refine the models and push increased resolutions, the demand for more parallel computation and advanced networking capabilities is paramount. As a result of the ubiquitous data explosion and the ascendance of big data, especially unstructured data, today's systems need to move enormous amounts of data as well as perform more sophisticated analysis; the interconnect truly becomes the critical element of enabling the use of data.
Fraunhofer ITWM demonstrates GPI 2.0 with Mellanox Connect-IB® and Intel® Xeon Phi
(June 2013)
Over the last decade, specialized heterogeneous hardware designs, ranging from Cell through GPGPUs to the Intel Xeon Phi, have become a viable option in High Performance Computing, mostly because these heterogeneous architectures allow for a better flops-per-watt ratio than conventional multi-core designs. The upcoming GASPI standard will be able to bridge this gap: GASPI can provide partitioned global address spaces (so-called segments) that span both the memory of the host and, for example, an Intel Xeon Phi.
Performance Optimizations via Connect-IB® and Dynamically Connected Transport™ Service for Maximum Performance on LS-DYNA®
(June 2013)
From concept to engineering, and from design to test and manufacturing, the automotive industry relies on powerful virtual development solutions. CFD and crash simulations are performed in an effort to secure quality and accelerate the development process. LS-DYNA® relies on Message Passing Interface (MPI), the de facto messaging library for high performance clusters, for cluster and node-to-node communications. MPI relies on a fast server and storage interconnect in order to provide low latency and a high message rate. The more complex the simulation performed to better capture the physical model's behavior, the higher the performance demands on the cluster interconnect.
Introduction to InfiniBand for End Users: Industry-Standard Value and Performance for High Performance Computing and the Enterprise
(June 2010)
InfiniBand is not complex. Despite its reputation as an exotic technology, the concepts behind it are surprisingly straightforward. One purpose of this book is to clearly describe the basic concepts behind the InfiniBand Architecture.
LS-DYNA Best-Practices: Networking, MPI and Parallel File System Effect on LS-DYNA Performance
(June 2010)
The cluster interconnect is critical for the efficiency and performance of applications in the multi-core era. When more CPU cores are present, overall cluster productivity increases only in the presence of a high-speed interconnect. We have compared elapsed time with LS-DYNA using 40Gb/s InfiniBand and Gigabit Ethernet.
CORE-Direct: The Most Advanced Technology for MPI/SHMEM Collectives Offloads
(May 2010)
Mellanox CORE-Direct technology provides the most complete and advanced solution for offloading MPI collective operations from the software library to the network. CORE-Direct not only accelerates MPI applications but also addresses scalability in large-scale systems by eliminating OS noise and jitter.
NVIDIA GPUDirect Technology - Accelerating GPU-based Systems
(May 2010)
The new NVIDIA GPUDirect technology, when used with Mellanox InfiniBand, enables NVIDIA Tesla and Fermi GPUs to communicate faster by eliminating the need for the CPU to be involved in the communication loop and the need for buffer copies. The result is increased overall system performance and efficiency, reducing GPU-to-GPU communication time by 30%.
Accelerating Automotive Design with InfiniBand
(February 2009)
CAE simulation and analysis are highly sophisticated applications which enable engineers to get insight into complex phenomena and to virtually investigate physical behavior. In order to produce the best results possible these simulation solutions require high-performance compute platforms. In this paper we investigate the optimum usage of high-performance clusters for maximum efficiency and productivity, for CAE applications, and for automotive design in particular.
Optimum Connectivity in the Multi-core Environment
(March 2007)

Multi-core is changing everything. What effect do you think multi-core has on the interconnect requirements for your cluster? Hint: more cores need more interconnect.
Single-Points of Performance
(December 2006)
The most common approach for comparing between different interconnect solutions is the “single-points” approach.
Real Application Performance and Beyond
(December 2006)

The interconnect bandwidth and latency have traditionally been used as two metrics for assessing the performance of the system’s interconnect fabric. However, these two metrics are typically not sufficient to determine the performance of real world applications.
Weather Research and Forecast (WRF) Model Port to Windows: Preliminary Report
(November 2006)

The Weather Research and Forecast (WRF) project is a multi-year/multi-institution collaboration to develop a next generation regional forecast model and data assimilation system for operational numerical weather prediction (NWP) and atmospheric research.
Scale up: Building a State-of-the Art Enterprise Supercomputer
(May 2006)

Building a state-of-the-art enterprise supercomputer requires a partnership among vendors that supply commodity parts.

Storage White Papers

Unlock In-Server Flash with InfiniBand and Symantec Cluster File System
(December 2013)
While 10Gb/s and 40Gb/s Ethernet may look like an alternative, InfiniBand (IB) currently supports up to 56Gb/s, with a roadmap to 100Gb/s and 200Gb/s. Both Ethernet and IB have a considerable advantage over Fibre Channel (FC). Mellanox InfiniBand provides a high-throughput, low-latency interconnect for moving data across servers and storage systems. Although traditionally used in high-performance computing (HPC) environments, InfiniBand provides the capability to unlock the potential of in-server flash.
Building a Scalable Storage with InfiniBand
It will come as no surprise to those working in data centers today that an increasing amount of capital and operational expense is associated with building and maintaining storage systems. Many factors drive the need for increased storage capacity and performance. Increased compute power and new software paradigms are making it possible to perform useful analytics on vast repositories of data. The falling cost per gigabyte is making it possible for organizations to store more granular data and to keep data for longer periods of time.
Mellanox InfiniBand FDR 56Gb/s For Server and Storage Interconnect Solutions
(June 2011)
Choosing the right interconnect technology is essential for maximizing system and application performance and efficiency. Slow interconnects delay data transfers between servers, causing poor utilization of system resources and slow execution of applications.
InfiniBand for Storage Applications
(December 2007)
Storage solutions can benefit today from the price, performance and high availability advantage of Mellanox’s industry-standard InfiniBand products.
