Swinburne starts design of pulsar-hunting supercomputer

Australian Uni plans LGM hunt with FPGAs

Back when pulsars were first discovered – in the “Little Green Men” era of the 1960s – astronomers were seeing big, loud and slow pulses. Today's pulsar-hunters are chasing subtler beasts and therefore need a lot more computing power, which is why Australia's Swinburne University has decided to spend more than AU$600,000 designing a computer to join the search.

The grant, announced at the end of May, is to design a machine that will be the pulsar signal processor for the Square Kilometre Array.

As project leader and Swinburne senior lecturer Dr Willem van Straten explained to The Register, compared to LGM-1 the pulsars of interest to physics in 2013 are “faster, weaker, and have travelled a greater distance”. All of this means that instead of spotting the pulses by eye in a trace from an antenna, a lot of computing power is needed to distinguish the pulsar's signal – which has suffered a lot of interstellar dispersion along the way – from the background noise.

Whereas LGM-1 had a pulse that repeated every ~1.337 seconds, the SKA will be looking for weak pulsars spinning hundreds of times per second.

To get ready for the pulsar search, Swinburne has begun 2½ years of design work to meet the twin requirements of very high I/O (both on the network connection delivering signals from the SKA's hundreds of antennas and within the data centre) and very high-performance processing.

Much of the computing will be straightforward multiplication and addition of vast quantities of complex numbers, Dr van Straten said, but there will also be a requirement to carry out large numbers of fast Fourier transforms (FFTs).
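For a flavour of what that arithmetic looks like, here's a minimal NumPy sketch of the two workhorse operations – complex multiply-and-add and the FFT. The array size and calibration gains are invented for illustration; the real pipeline obviously runs at vastly larger scale:

    import numpy as np

    # Stand-in for one antenna's digitised voltage stream: complex
    # baseband samples (in-phase + quadrature components).
    samples = np.random.randn(4096) + 1j * np.random.randn(4096)

    # The bulk of the work: complex multiplication and addition,
    # e.g. applying a per-sample phase/gain correction.
    gains = np.exp(1j * np.random.uniform(0, 2 * np.pi, 4096))
    calibrated = samples * gains

    # Plus large numbers of FFTs to hop between time and frequency domains.
    spectrum = np.fft.fft(calibrated)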

First, the incoming signal will be averaged as a function of the pulsar's rotational phase, with each rotation divided into “bins”. “We might need five hundred bins to resolve the structure of a pulsar”, Dr van Straten said.
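Folding a time series at a known period and averaging into phase bins is easy to sketch in a few lines of NumPy. The period, sample rate and noise-only “signal” below are made up for the example; only the 500-bin figure comes from Dr van Straten:

    import numpy as np

    n_bins = 500                  # "five hundred bins to resolve the structure"
    period = 1.337                # seconds - LGM-1's famous period
    sample_rate = 10_000.0        # Hz, invented for the example
    signal = np.random.randn(1_000_000)   # stand-in for detected power samples

    # Rotational phase of each sample, in [0, 1).
    t = np.arange(signal.size) / sample_rate
    phase = (t / period) % 1.0

    # Average the samples landing in each phase bin to build the pulse profile.
    bins = (phase * n_bins).astype(int)
    totals = np.bincount(bins, weights=signal, minlength=n_bins)
    counts = np.bincount(bins, minlength=n_bins)
    profile = totals / np.maximum(counts, 1)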

Then, the frequency channels will be narrowed to correct for interstellar distortion, and the “cleaned” signal will be put back into the time domain for study.
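That frequency-domain clean-up is what pulsar astronomers call coherent dedispersion: transform the voltages to the frequency domain, unwind the phase delay the interstellar medium has imposed on each frequency, and transform back. A bare-bones sketch follows – the dispersion measure, centre frequency and bandwidth are invented, and sign conventions vary between implementations:

    import numpy as np

    dm = 50.0           # dispersion measure, pc/cm^3 (invented)
    f0 = 1400.0         # band centre frequency, MHz (invented)
    bw = 16.0           # bandwidth, MHz (invented)
    n = 65536

    voltages = np.random.randn(n) + 1j * np.random.randn(n)  # baseband stand-in

    # Offset of each FFT channel from the band centre, in MHz.
    freq = np.fft.fftfreq(n, d=1.0 / bw)

    # Dispersive phase rotation (the "chirp") accumulated in the interstellar
    # medium; 4.148808e3 is the usual dispersion constant in s MHz^2 pc^-1 cm^3.
    k_dm = 4.148808e3
    chirp_phase = 2 * np.pi * k_dm * 1e6 * dm * freq**2 / (f0**2 * (f0 + freq))

    # Undo the dispersion in the frequency domain, then return to the time domain.
    dedispersed = np.fft.ifft(np.fft.fft(voltages) * np.exp(1j * chirp_phase))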

One interesting debate to be resolved during the design process is whether GPUs are suitable for the task, or whether it would be better to build an FPGA-based or ASIC-based processing system.

Dr van Straten told The Register that throughput isn't the only consideration the designers will be working with, since “power consumption is a limiting factor in the design”.

Although GPU vendors like Nvidia are working hard on power consumption (to keep up with the demands of the mobile age), there may be other reasons to attempt an FPGA-based design: the tools now available to end users make it possible for a non-electrical-engineer to design an algorithm and implement it on the device for fast processing.

Such attractions have already been noticed elsewhere. For example, advanced Bitcoin miners now routinely use FPGAs in their hunt for the crypto-gold.

The question will be resolved by 2016, when the design is complete and construction of the supercomputer begins.

Partners in the design consortium include the National Research Council of Canada, the Science and Technology Facilities Council (UK), Oxford University, the University of Manchester, the Max Planck Institute for Radio Astronomy (MPIfR), SKA South Africa and the International Centre for Radio Astronomy Research. ®
