STUDENT RACK WARS: Science and HPC – all the kids are doing it these days

Students grapple with brain-crunching apps in New Orleans

HPC blog It’s well past time to discuss and analyse what happened at the exciting seventh annual Student Cluster Competition, which took place at SC14 in New Orleans late last year.

Just after SC14 closed, I took a long trip to South Africa to cover its CHPC cluster competition – the intra-country bout that produced the champion teams at ISC’13 and ISC’14.

The event in New Orleans was the biggest SC competition to date, with 12 student teams representing universities in seven different countries.

The structure was the same as in recent years: students work with their faculty and sponsors to build the fastest HPC cluster possible – with an absolute power cap of roughly 3,000 watts (26 amps on a 120-volt circuit). They can use any hardware they can get, as long as it’s commercially available by the time of the competition.
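
To give a rough flavour of the sums involved – with invented node counts and per-node wattages, not any team's real kit – a quick Python sketch of the power-budget maths looks something like this:

```python
# Back-of-the-envelope power budgeting. Node counts and per-node wattages
# below are made up for illustration, not any team's actual configuration.
POWER_CAP_WATTS = 3000   # competition ceiling; 26 A on a 120 V circuit is ~3,120 W

def within_budget(nodes, watts_per_node, switch_watts=150):
    """Rough check: does a planned build fit under the power cap?"""
    total = nodes * watts_per_node + switch_watts
    return total, total <= POWER_CAP_WATTS

for nodes, per_node in [(4, 650), (9, 320), (10, 300)]:
    total, ok = within_budget(nodes, per_node)
    print(f"{nodes} nodes at {per_node} W each: {total} W -> "
          f"{'fits' if ok else 'over the cap'}")
```

More nodes means more cores, but only if each node sips rather than gulps – which is exactly the trade-off the teams wrestle with.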

For more detailed info about the student cluster competitions in general, look here.

The SC competitions are 46-hour marathons of HPC benchmarks and scientific applications. The slate of apps for the 2014 tourney was a mixed bag of familiar and new workloads. Let’s take a look...

HPCC: The competition always kicks off with a good old HPCC run. Students spent Monday running the entire eight-app HPCC suite plus a separate HPL (LINPACK) run that would be used to award the coveted Highest LINPACK award.
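
For the curious, the arithmetic behind a LINPACK score boils down to theoretical peak (Rpeak) versus what you actually squeeze out (Rmax). Here's a rough Python sketch using made-up hardware numbers, not any SC14 team's actual gear or scores:

```python
# Rough sketch of the arithmetic behind a LINPACK (HPL) result: theoretical
# peak (Rpeak) versus measured Rmax. All hardware figures here are invented
# stand-ins, not any SC14 team's actual gear or scores.
def rpeak_gflops(nodes, sockets_per_node, cores_per_socket, ghz, flops_per_cycle):
    """Theoretical peak in GFLOP/s for a CPU-only cluster."""
    cores = nodes * sockets_per_node * cores_per_socket
    return cores * ghz * flops_per_cycle

rpeak = rpeak_gflops(nodes=8, sockets_per_node=2, cores_per_socket=12,
                     ghz=2.5, flops_per_cycle=16)   # 16 = AVX2-class FMA per core
rmax = 0.75 * rpeak   # a made-up measured result, for illustration only

print(f"Rpeak ~ {rpeak:,.0f} GFLOP/s, Rmax ~ {rmax:,.0f} GFLOP/s "
      f"({rmax / rpeak:.0%} efficiency)")
```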

ADCIRC: This is new to cluster competitions. It’s an abbreviation for Advanced Circulation Model, and it’s used to model things such as storm surges, tides, and how wind moves water around. Want to figure out how an oil spill might spread in a storm? ADCIRC has the answers, or at least the ability to model the result based on your estimates.
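
ADCIRC itself solves shallow-water circulation over huge unstructured coastal meshes, which is well beyond a blog snippet. As a highly simplified taste of that sort of transport modelling, here's a toy one-dimensional advection sketch in Python – a blob of "spill" being carried along by a current, with all grid and timestep values invented for illustration:

```python
# Toy 1-D advection of a "spill" concentration carried along by a current,
# using a simple upwind finite-difference scheme. Grid size, speed and
# timestep are arbitrary illustration values; this is not ADCIRC.
import numpy as np

nx, dx, dt = 200, 1.0, 0.4   # cells, cell width, timestep (CFL = u*dt/dx = 0.4)
u = 1.0                      # current speed
c = np.zeros(nx)
c[90:110] = 1.0              # initial blob of contaminant mid-domain

for _ in range(100):
    # Upwind update: each step pushes the blob downstream (to higher x)
    c[1:] = c[1:] - u * dt / dx * (c[1:] - c[:-1])
    c[0] = 0.0               # clean water flowing in at the upstream boundary

print("peak concentration is now near cell", int(np.argmax(c)))
```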

NAMD: This molecular dynamics program has been used in several past competitions. It scales like a Fisher weasel, capable of running on anywhere from a handful to 500,000 CPU cores.

It’s used to model and simulate the way large numbers of atoms and such react under different conditions.

Let's say you have a big pile of atoms in a bowl. You’d use NAMD to model what would happen if you added too many atoms and they spilled onto the countertop. It’s typically used to model more complex problems than that, however.
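
The real NAMD is a heavily parallel C++/Charm++ code with proper force fields, but the heartbeat of any molecular dynamics run is the same: compute forces, nudge the atoms, repeat. Here's a toy Python sketch of that velocity-Verlet loop with a handful of Lennard-Jones particles – purely illustrative, not how NAMD actually does it:

```python
# Toy velocity-Verlet molecular dynamics with a few Lennard-Jones particles.
# Purely illustrative of the force/integrate loop; real NAMD adds proper
# force fields, neighbour lists, and massive parallelism on top.
import numpy as np

def lj_forces(pos, epsilon=1.0, sigma=1.0):
    """Pairwise Lennard-Jones forces on a handful of particles."""
    forces = np.zeros_like(pos)
    n = len(pos)
    for i in range(n):
        for j in range(i + 1, n):
            r_vec = pos[i] - pos[j]
            r = np.linalg.norm(r_vec)
            # Magnitude of -dU/dr for the 12-6 Lennard-Jones potential
            f_mag = 24 * epsilon * (2 * (sigma / r) ** 12 - (sigma / r) ** 6) / r
            f = f_mag * r_vec / r
            forces[i] += f
            forces[j] -= f
    return forces

def velocity_verlet(pos, vel, dt=0.001, steps=100, mass=1.0):
    """Advance positions and velocities with the velocity-Verlet scheme."""
    f = lj_forces(pos)
    for _ in range(steps):
        pos = pos + vel * dt + 0.5 * (f / mass) * dt ** 2
        f_new = lj_forces(pos)
        vel = vel + 0.5 * (f + f_new) / mass * dt
        f = f_new
    return pos, vel

# Eight particles on a little 2x2x2 lattice, starting at rest
positions = 1.5 * np.array([[i, j, k] for i in range(2)
                            for j in range(2) for k in range(2)], dtype=float)
velocities = np.zeros_like(positions)
positions, velocities = velocity_verlet(positions, velocities)
print("final positions:\n", positions.round(3))
```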

MATLAB: If you do serious math, sooner or later you’ll run into MATLAB. It was developed to give students the ability to run complex math on computers without having to learn Fortran (for which it’s earned the gratitude of millions).

It’s really a mathematical computer language and environment rather than an application like Excel. As you’d expect, MATLAB has a lot of functions, but it didn’t forget the favourites such as addition, subtraction, multiplication, and division.
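
MATLAB code isn't reproduced here, but to give a flavour of the kind of one-line matrix work it makes painless, here's a roughly equivalent sketch in Python/NumPy (an open-source cousin for this sort of thing), with a small arbitrary system chosen purely for illustration:

```python
# A flavour of the kind of one-line matrix work MATLAB makes easy, sketched
# in NumPy (a rough open-source analogue). The 2x2 system is arbitrary,
# chosen purely for illustration.
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 2.0]])        # coefficient matrix
b = np.array([9.0, 8.0])          # right-hand side

x = np.linalg.solve(A, b)         # MATLAB's  x = A \ b
eigvals = np.linalg.eigvals(A)    # MATLAB's  eig(A)

print("solution:", x)             # [2. 3.]
print("eigenvalues:", eigvals)
```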

Mystery Application (Enzo): Enzo is a cosmetology simulation that models how various types/amounts of cosmetics ... er, scratch that: it’s a cosmological simulation used to model a wide range of cosmic and planetary effects.

If you were trying to figure out how stars formed, lived, and what happens when they die, you’d want to get yourself some Enzo. Students didn’t learn about the mystery application until after the competition was well underway – meaning that they couldn’t prepare for it, which always makes things more interesting.

Each of the applications has several data sets, or tasks, that need to be completed. The competition organizers make sure that the amount of computation is always more than the teams can complete in the allotted 46 hours. This puts a premium on planning and workload management – teams that can keep their hardware busy running multiple tasks at once will get more work done and get higher scores.
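
As a toy illustration of that planning problem – with task names, runtimes, and point values invented for the example, not the real SC14 scoring – here's a greedy Python sketch that packs estimated node-hours into the 46-hour window:

```python
# Toy illustration of the workload-planning problem: more task node-hours on
# offer than the 46-hour window allows. Task names, runtimes, and point
# weights are all invented for the example, not the real SC14 scoring.
from dataclasses import dataclass

WINDOW_HOURS = 46
NODES = 8                         # a hypothetical cluster size

@dataclass
class Task:
    name: str
    node_hours: float             # estimated nodes x hours of compute
    points: float                 # relative scoring weight (made up)

tasks = [
    Task("HPCC suite", 40, 10), Task("ADCIRC surge run", 120, 25),
    Task("NAMD dataset 1", 90, 20), Task("NAMD dataset 2", 150, 20),
    Task("MATLAB problems", 30, 15), Task("Mystery app (Enzo)", 100, 25),
]

budget = WINDOW_HOURS * NODES     # total node-hours available
chosen, used = [], 0.0
# Greedy pick: favour the best points-per-node-hour ratio first
for t in sorted(tasks, key=lambda t: t.points / t.node_hours, reverse=True):
    if used + t.node_hours <= budget:
        chosen.append(t)
        used += t.node_hours

print(f"budget {budget} node-hours, planned {used:.0f}")
for t in chosen:
    print(f"  run {t.name} ({t.node_hours:.0f} node-hours)")
```

Real teams, of course, juggle far messier constraints – power draw, how GPU-friendly each dataset is, and the very real risk of a run falling over at 3am.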

Application experts also interview the teams to ensure that the students understand what the application does, why it’s important, and how to wring the best performance out of it. Interview scores are combined with the objective application results to come up with a final score for each team.

Next up: Configurations and figurations

Given that all 12 teams are running the same applications, have the same component choices available to them, and are contending with the same power constraint, you’d assume that their hardware configurations would be almost identical. But you’d be wrong!

This year we saw clusters ranging from four-node, 96-core mini-clusters to nine- and ten-node, 288-core monsters.

Most teams were packing GPUs, and deploying them in ways we haven’t seen before. There was also a bit of liquid cooling in the mix. Details in our next report, so watch this space ...®
