Sponsored Content by Mellanox Technologies

This is the third in the series of articles “Super-Connecting the Supercomputers.” In the first article, published on June 10, 2019 in HPCwire, we introduced the three interconnect pillars (the connectivity pillar, the network pillar and the communication pillar); in the second article, published on July 15, 2019, we discussed the connectivity pillar in detail.
The second pillar, the network pillar, refers to the network protocol, routing capabilities, and other networking functions. Many articles debating the differences between networking technologies and implementations have been published over the years; for example, “Offloading vs. Onloading: The Case of CPU Utilization” and “The Ultimate Debate – Interconnect Offloading Versus Onloading”.
We can categorize networking technologies into two groups: standards-based technologies and proprietary technologies. The standards-based group includes InfiniBand and Ethernet. The proprietary group includes QsNet, Myrinet, Gemini, Seastar, Aries, Tofu, Omni-Path and Slingshot, among others.
The ever-growing demand for higher performance in the world of supercomputing requires that interconnect solutions provide increasingly faster speeds, extreme low latency and continuous additions of smart offloading and acceleration engines. In a parallel computing environment, the interconnect is the computer and the heart of the datacenter.
Proprietary networks cannot meet these needs over time, and therefore the lifetime of a proprietary network is three to five years. In the past it was possible to extend the lifetime of a network; however, with the exponential growth of data we want to analyze, together with increased simulation complexity and the integration of artificial intelligence and deep learning into high performance computing (HPC), the lifetime of the network has been shrinking, and is expected to continue to shrink in the future.

Figure 1 – High Performance Computing Interconnect Development

A recent example is Intel’s Omni-Path.
The roots of Omni-Path lie in InfiniBand technology created by PathScale (InfiniPath adapters) and QLogic/SilverStorm (InfiniBand switches). The PathScale InfiniBand product generation lasted around ten years before Intel transformed it into Omni-Path; once Omni-Path became a proprietary technology, its lifetime clock started ticking. Three years after its first introduction, Intel announced that Omni-Path is no longer on the company’s roadmap.
There are several challenges and problems that need to be addressed when creating a new proprietary protocol. The first major challenge is re-creating the required software ecosystem, including software drivers, operating system support, communication libraries, and support from application vendors or open source groups. This is a very expensive and long process and, if it must be repeated every three years, a huge burden for HPC end-users. One can assume that the organizations that bought Omni-Path interconnect products in the past would have chosen differently had they known that their investment, not only in supercomputer hardware purchases but also in software development, adjustments, settings and troubleshooting, would need to be re-done.
Another main challenge with proprietary networks is the need to re-invent the basic networking structure and capabilities repeatedly. This places an unnecessary burden not only on the companies doing so, but also on their funding agencies.
In the case of standards-based interconnects, InfiniBand for example, these problems do not exist. Each capability introduced in a previous generation of InfiniBand is carried into future generations, and each generation is backward and forward compatible. The Quality of Service capability for example, an inherent part of the InfiniBand specification, has been in existence since the first generation of InfiniBand, and is being further carried and optimized from one speed generation to the next. All of the software drivers, communication frameworks, native inbox support within the various operating systems, and applications optimizations and tools, continue to leverage hardware support over time, and therefore deliver the highest return on investment for their creators and users.
It is no surprise that basic network elements, such as Quality of Service or Congestion Control, are promoted as the highlight of new proprietary interconnect technologies each time they get re-invented; in fact, these basic network elements may serve as the sole “differentiating” item for market publicity. Nor does it come as a surprise that the new “benchmarks” created for these basic elements demonstrate, in effect, wasted effort. Just imagine a new car manufacturer announcing today “the invention of the round wheel…”.
On the other hand, these basic network elements are already an integral part of the long-lasting standard technologies. Efforts invested here are not wasted; rather, they enable further progress in performance, scalability and robustness. A good example of such evolution is the innovative development of smart In-Network Computing engines, such as the Mellanox Scalable Hierarchical Aggregation and Reduction Protocol (SHARP)™, and self-healing technologies such as SHIELD, which provide higher resiliency for supercomputers.
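The intuition behind hierarchical in-network aggregation can be pictured with a toy sketch (illustrative only; the real SHARP protocol runs inside InfiniBand switch ASICs, and the function and fan-in below are invented for this example): switches along a reduction tree combine partial results level by level, so the root receives one reduced value instead of one message per node.

```python
# Toy model of hierarchical in-network reduction (illustrative only;
# real SHARP aggregation happens in switch hardware, not host code).
from typing import List

def tree_reduce(values: List[float], fan_in: int = 4) -> float:
    """Reduce values level by level, as switches in an aggregation
    tree would: each 'switch' sums up to fan_in inputs and forwards
    a single partial result toward the root."""
    level = values
    while len(level) > 1:
        level = [sum(level[i:i + fan_in])
                 for i in range(0, len(level), fan_in)]
    return level[0]

# 16 nodes each contribute a partial sum; the root receives one value.
partials = [float(i) for i in range(16)]
print(tree_reduce(partials))  # 120.0
```

With a fan-in of 4, the 16 contributions cross the network as 4 + 1 aggregated messages rather than 16 point-to-point ones, which is the source of the latency and bandwidth savings.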
The early generations of InfiniBand brought support for full network-transport offload and RDMA, capabilities that enable faster data movement, lower latency and a dramatic reduction in CPU utilization for networking operations (which translates into more CPU cycles that can be dedicated to the actual application runtime). Later, RDMA capabilities were extended to support GPUs as well, with GPUDirect® technology, enabling both a ten-fold reduction in latency and a ten-fold increase in bandwidth.
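As a rough intuition for the one-sided semantics described above (a toy sketch, not the InfiniBand verbs API; the class and function names are invented for illustration): with RDMA, the initiator writes directly into a memory region the target has registered, with no receive call and no data copy on the target's CPU.

```python
# Toy model of one-sided RDMA semantics (illustrative only; real RDMA
# is performed by the network adapter against registered host memory).
class RegisteredRegion:
    """A buffer the 'target' node has registered for remote access."""
    def __init__(self, size: int):
        self.buf = bytearray(size)

def rdma_write(region: RegisteredRegion, offset: int, data: bytes) -> None:
    """The initiator places data directly at an offset in the target's
    registered memory; no receive handler runs on the target side."""
    region.buf[offset:offset + len(data)] = data

target = RegisteredRegion(64)
rdma_write(target, 8, b"payload")
print(bytes(target.buf[8:15]))  # b'payload'
```

The key point the sketch illustrates is that the target's CPU appears nowhere in the data path, which is where the CPU-utilization savings of transport offload come from.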
Congestion Control was first implemented and reviewed more than a decade ago: “Solving Hot Spot Contention Using InfiniBand Architecture Congestion Control” by Pfister, Gusat, Denzel, Craddock, Ni, Rooney, Engbersen, Luijten, Krishnamurthy and Duato was published in 2004, and “First experiences with Congestion Control in InfiniBand Hardware” by Gran, Eimot, Reinemo, Skeie, Lysne, Huse and Shainer was published in 2010. InfiniBand adaptive routing has been enhanced over recent years, and the EDR InfiniBand generation was tested and verified to provide 96% network utilization using adaptive routing, with the MPIGraph benchmark run on the Oak Ridge National Laboratory Summit supercomputer.
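A simplified way to picture adaptive routing (a sketch under invented names, not Mellanox's switch implementation): instead of a fixed output port per destination, the switch picks the least-loaded of the valid output ports for each packet, steering traffic around hot spots.

```python
# Toy model of adaptive port selection (illustrative only; real adaptive
# routing is implemented in switch hardware with live congestion state).
import random
from collections import Counter

def adaptive_port(valid_ports, queue_depth):
    """Pick the least-congested valid output port toward the
    destination, breaking ties randomly to spread the load."""
    least = min(queue_depth[p] for p in valid_ports)
    return random.choice([p for p in valid_ports if queue_depth[p] == least])

# Four equivalent ports toward the destination; port 2 is congested.
depth = {0: 3, 1: 1, 2: 9, 3: 1}
picks = Counter(adaptive_port([0, 1, 2, 3], depth) for _ in range(1000))
print(picks[2])  # 0 -- the hot port is never chosen
```

Spreading flows this way is what lets an adaptively routed fabric approach the high utilization figures cited above, where static routing would leave some links saturated and others idle.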
Therefore, InfiniBand, being a standard interconnect, provides not just world-leading performance and scalability, but also protects past investments and ensures forward compatibility, for the best return on investment.
In the next article we will review in detail the major technological elements that InfiniBand offers for the network pillar, their performance, and their reduction of overall application runtime.