Building the world's biggest telescope array - with machines that don't yet exist

Turning terabytes and exabytes into galaxies at SKA

Once completed, the Square Kilometre Array (SKA) will be the biggest radio astronomy telescope in the world.

"Biggest", though, really is too mild a term for the sheer size of this project. The first phase, SKA1, will be broken up into two instruments, SKA1 MID and SKA1 LOW, based on their frequencies.

SKA1 MID alone is made up of 200 or so dishes with a combined collecting area of 33,000m² – the size of 126 tennis courts. Those radio antennas will produce raw data at 2TB per second – 62EB a year, or enough content to fill up 340,000 average-sized laptops each day.
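The arithmetic roughly checks out. Here is a quick back-of-envelope sanity check in Python, assuming a 500GB "average-sized laptop" – our ballpark figure, not SKA's:

```python
# Back-of-envelope check of the SKA1 MID data rates quoted above.
# Assumes a 500GB "average-sized laptop" - an illustrative figure.
RATE_TB_PER_S = 2                   # raw output, terabytes per second
SECONDS_PER_DAY = 86_400
SECONDS_PER_YEAR = 86_400 * 365

per_day_tb = RATE_TB_PER_S * SECONDS_PER_DAY          # ~172,800 TB/day
per_year_eb = RATE_TB_PER_S * SECONDS_PER_YEAR / 1e6  # ~63 EB/year
laptops_per_day = per_day_tb * 1_000 / 500            # 500GB per laptop

print(f"{per_day_tb:,.0f} TB/day, {per_year_eb:.0f} EB/year, "
      f"~{laptops_per_day:,.0f} laptops filled per day")
# -> 172,800 TB/day, 63 EB/year, ~345,600 laptops filled per day
```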

That will make SKA1 MID five times more sensitive than the current best instrument in the world, the Karl G Jansky Very Large Array (JVLA), with four times the resolution and sixty times the survey speed.

The project ultimately has two big objectives: one is to look for evidence of gravitational waves by observing a network of stable pulsars. The other is to look back at the period when the universe's first stars and galaxies "turned on" and started shining brightly, peeking through holes in the hydrogen gas for information about how galaxies form. Both are potentially Nobel Prize-winning affairs.

The idea for SKA was formalised in 1993 and, while construction on the SKA sites in South Africa and Western Australia won’t start until 2018, SKA architect Tim Cornwell and his team are already busy developing the IT that will power this awesomely data- and compute-heavy project.

And they are doing so using systems the tech suppliers haven’t even built yet.

Gazing at the cosmos through the power of dreams (and servers)

“We know that according to the current manufacturers’ development paths, around about the time we need it, we’ll be able to buy the requisite compute power,” Cornwell told The Register, rather nonchalantly, during a recent interview.

“And it’ll be fairly conventional; it’ll be blade servers arranged in racks and a few of the racks will be tightly connected with loose connections to other racks in compute islands.”
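Those "compute islands" can be pictured as a two-tier network: fat links between the racks inside an island, thin links between islands, so work that communicates heavily stays local. A toy illustration in Python – the rack counts and bandwidth figures are invented purely for illustration, not SKA design numbers:

```python
# Toy model of "compute islands": racks grouped so that traffic
# inside an island sees far more bandwidth than traffic between
# islands. All numbers are invented, purely for illustration.
INTRA_ISLAND_GBPS = 100   # tightly connected racks within an island
INTER_ISLAND_GBPS = 10    # loose connections between islands
RACKS_PER_ISLAND = 4

def island_of(rack: int) -> int:
    """Map a rack number to its island."""
    return rack // RACKS_PER_ISLAND

def link_bandwidth(rack_a: int, rack_b: int) -> int:
    """Bandwidth between two racks depends on island membership."""
    if island_of(rack_a) == island_of(rack_b):
        return INTRA_ISLAND_GBPS
    return INTER_ISLAND_GBPS

print(link_bandwidth(0, 1))   # same island       -> 100
print(link_bandwidth(0, 5))   # different islands -> 10
```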

The type of hardware might be conventional, but the computing power needed will be around three times that of the most powerful supercomputer of 2013 – equivalent to around a hundred million PCs, or more than 100 petaflops of raw processing power.

For the record, the most powerful super of 2013 is, officially, Tianhe-2 (MilkyWay-2) at the National Super Computer Center in Guangzhou – an Intel Xeon E5-2692 2.2GHz cluster running 3,120,000 cores, which topped the TOP500 list three times last year. The SKA will likewise employ millions of processor cores operating in parallel.
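Those comparisons hang together. Tianhe-2's measured Linpack score was 33.86 petaflops, so "around three times" lands near the 100-petaflop mark; a short check, assuming roughly one gigaflop of sustained performance per ordinary PC – our assumption for the era, not a figure from SKA:

```python
# Rough check of the "hundred million PCs" comparison above.
# The ~1 gigaflop-per-PC figure is our ballpark assumption.
TIANHE2_PFLOPS = 33.86           # Tianhe-2's measured Linpack score
PC_GFLOPS = 1.0                  # assumed sustained output of one PC

ska_pflops = 3 * TIANHE2_PFLOPS                    # "around three times"
pcs_millions = ska_pflops * 1e6 / PC_GFLOPS / 1e6  # PF -> GF, per PC

print(f"~{ska_pflops:.0f} PF, or roughly {pcs_millions:.0f} million PCs")
# -> ~102 PF, or roughly 102 million PCs
```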

The Netherlands Institute for Radio Astronomy (ASTRON) and IBM are in the middle of a five-year collaboration to research the fast, low-power exascale computer systems that SKA will need.

The partnership is studying exascale computing, data transport and storage processes, as well as the streaming analytics needed to read, store and analyse the raw data. Their ideas for this beyond-state-of-the-art supercomputing include novel optical interconnects and nanophotonics for shifting large volumes of data, plus high-performance storage built on next-generation tape systems and new phase-change memory technologies.
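To give a flavour of what "streaming analytics" means at these volumes: data arriving at terabytes per second cannot all be parked on disk first, so the pipeline has to reduce each chunk as it arrives and keep only the reduced product. A minimal, purely illustrative sketch in Python – the function names and the simple windowed averaging are our stand-ins, not anything from ASTRON or IBM:

```python
# Illustrative only: the shape of a streaming reduction pipeline.
# Real SKA processing (correlation, calibration, imaging) is far
# more involved; 'read_chunk' and the averaging step are ours.
import numpy as np

def read_chunk(n_samples=1_000_000):
    """Stand-in for one chunk of raw antenna samples off the wire."""
    return np.random.default_rng().standard_normal(n_samples)

def process_stream(n_chunks=10, window=1_000):
    """Reduce each chunk as it arrives; never keep the raw data."""
    for _ in range(n_chunks):
        chunk = read_chunk()
        # Average in windows: a crude stand-in for integration,
        # shrinking the data a thousandfold before it is stored.
        reduced = chunk.reshape(-1, window).mean(axis=1)
        yield reduced  # only this reduced product goes to storage

total = sum(r.size for r in process_stream())
print(f"kept {total:,} values from 10,000,000 raw samples")
```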

Yet, we're told, building data processing centres in Perth and in Cape Town with bits that don’t exist yet isn’t the hard part – it’s the software that’s the real head-scratcher.
