Supercomputers get their own software stack – dev tools, libraries etc

OpenHPC group to take programmers to the highest heights

SC15 Supercomputers are going to get their own common software stack, courtesy of a new group of elite computer users.

The OpenHPC Collaborative Project was launched just before this week's Supercomputing Conference (SC15) in Austin, Texas, and counts among its members the Barcelona Supercomputing Center, the Center for Research in Extreme Scale Technologies, Cray, Dell, Fujitsu, HP, Intel, Lawrence Berkeley, Lenovo, Los Alamos, Sandia and SUSE – in other words, the owners and builders of the world's biggest and fastest machines.

The project describes itself as "a collaborative, community effort that initiated from a desire to aggregate a number of common ingredients required to deploy and manage High Performance Computing (HPC) Linux clusters including provisioning tools, resource management, I/O clients, development tools, and a variety of scientific libraries."

It comes with the backing of the Linux Foundation, which is hardly surprising since the open-source operating system runs virtually every supercomputer in the world.

Just six of the top 500 supercomputers don't run GNU/Linux, and those six all run some flavor of Unix, so there's no look-in for Windows or OS X.

Supercomputers also pose their own unique problems – so much so that software for the monster machines got its own mention in a US Presidential Executive Order in July:

Current HPC [high-performance computing] systems are very difficult to program, requiring careful measurement and tuning to get maximum performance on the targeted machine. Shifting a program to a new machine can require repeating much of this process, and it also requires making sure the new code gets the same results as the old code. The level of expertise and effort required to develop HPC applications poses a major barrier to their widespread use.

This new group hopes to resolve at least some of those problems with pre-built packages that will include "re-usable building blocks." In other words, programmers can get up to speed faster and write code that is portable across supercomputers, regardless of architecture, creating a more viable workforce for programming the beasts.
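
To see what portability means in practice, consider MPI, the de facto message-passing standard on these machines: a bare-bones MPI program compiles and runs unchanged on clusters of wildly different architectures, while the surrounding toolchain of compilers, launchers, libraries and schedulers is what varies from site to site – and that toolchain is the layer OpenHPC wants to standardize. Here is a minimal sketch in C (the mpicc and mpirun commands assume a standard MPI toolchain such as MPICH or Open MPI is installed; this is an illustration, not OpenHPC's own code):

    /* hello_mpi.c – a minimal, portable MPI program.
     * Build and launch with any MPI toolchain, for example:
     *   mpicc hello_mpi.c -o hello_mpi
     *   mpirun -np 4 ./hello_mpi
     */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size, name_len;
        char name[MPI_MAX_PROCESSOR_NAME];

        MPI_Init(&argc, &argv);                  /* start the MPI runtime */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* this process's ID */
        MPI_Comm_size(MPI_COMM_WORLD, &size);    /* total process count */
        MPI_Get_processor_name(name, &name_len); /* which node we're on */

        printf("Hello from rank %d of %d on %s\n", rank, size, name);

        MPI_Finalize();                          /* shut down cleanly */
        return 0;
    }

The same source file runs on any of these machines; it's everything around it – the build environment, the job scheduler, the scientific libraries – that currently has to be reassembled by hand at each site.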

In addition, there are "plans to identify and develop abstraction interfaces between key components to further enhance modularity and interchangeability." And if you are a budding supercomputer programmer, all the code will be made freely available.

According to the announcement, the project has four main goals:

  • Create a stable environment for testing and validation: The community will benefit from a shared, continuous integration environment, which will feature a build environment and source control; bug tracking; user and developer forums; collaboration tools; and a validation environment.
  • Reduce costs: By providing an open source framework for HPC environments, the overall expense of implementing and operating HPC installations will be reduced.
  • Provide a robust and diverse open source software stack: OpenHPC members will work together on the stability of the software stack, allowing for ongoing testing and validation across a diverse range of use cases.
  • Develop a flexible framework for configuration: The OpenHPC stack will provide a group of stable and compatible software components that are continually tested for optimal performance. Developers and end users will be able to use any or all of these components depending on their performance needs, and may substitute their own preferred components to fit their own use cases.

The uniqueness of the big machines has caused "duplication of effort and has increased the barrier to entry," according to the Linux Foundation's Jim Zemlin. "OpenHPC will provide a neutral forum to develop an open source framework that satisfies a diverse set of cluster environment use-cases." ®
