
Honey I shrunk the chip ... now what?


Bigger is better in pastries, paychecks and bank accounts, but not in electronics. A recent story in HPCwire caught my interest and got me thinking about what the end of the shrink road might portend – and the potential alternatives.

The ability to steadily shrink the size of the processor brains that drive computers – and pretty much everything else – has driven computer performance since the advent of the microprocessor.

But now that we are at 32nm (nanometers) and moving toward 16nm and even 14nm (see Intel's recent announcement), we don't have many nanometers left before we hit the limits imposed by the laws of physics. Shrink too far and you run into problems at the atomic scale.

IBM Fellow and all-things-chip guru Bernie Meyerson explained this clearly and concisely several years ago when he predicted that Intel's single-core 5GHz chip would never see the light of day.

With images from an electron microscope, he showed how extremely small chip pathways can be reduced to the point where they are just a few atoms thick. This sounds fine until you learn that atoms aren't nice, round balls the way they are presented in textbooks.

Lumpy atoms

Atoms can be kind of lumpy. When you have only a few of them forming a guardrail along your chip's electronic roadways, they allow electricity to leak through, which leads to more heat and energy use. Cranking up the GHz in a chip increases the heat generated to the point where it surpasses the ability of the materials to handle it.
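To put rough numbers on why the heat climbs so quickly (my figures, not Meyerson's): the switching power of CMOS logic scales roughly with capacitance times voltage squared times frequency, and pushing the clock usually means pushing the voltage too. A back-of-the-envelope sketch, with purely illustrative values:

```python
# Back-of-the-envelope sketch of CMOS dynamic power: P ~ C * V^2 * f.
# The capacitance and voltage figures below are illustrative, not real chip specs.

def dynamic_power(capacitance_f, voltage_v, frequency_hz):
    """Approximate switching power in watts for a given C, V and f."""
    return capacitance_f * voltage_v ** 2 * frequency_hz

baseline = dynamic_power(1e-9, 1.0, 3e9)   # a notional 3GHz part at 1.0V
hot_rod  = dynamic_power(1e-9, 1.2, 5e9)   # pushed to 5GHz, needing a bit more voltage

print(f"3GHz baseline: {baseline:.1f} W")
print(f"5GHz hot rod:  {hot_rod:.1f} W  ({hot_rod / baseline:.1f}x the heat)")
```

Even in this toy calculation, a 67 per cent clock bump costs well over twice the heat.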

This physical limitation on processor frequency led us to the multiple core world we see now. The only way to get more performance out of processors is to use the real estate gained by shrinking on-die components to provide duplicate cores and run parallel workloads on them at reasonable frequencies.
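To make "parallel workloads" concrete, here is a minimal sketch (mine, not anything from the article's sources) of the sort of independent-chunks job that maps neatly onto extra cores, using Python's standard multiprocessing pool:

```python
# Minimal sketch: spreading an independent-chunks workload across CPU cores.
# The crunch() function is a stand-in for any per-item computation.
from multiprocessing import Pool, cpu_count

def crunch(n):
    """Pretend per-item workload: sum of squares up to n."""
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    jobs = [200_000] * 32                      # 32 independent chunks of work
    with Pool(processes=cpu_count()) as pool:  # one worker per core
        results = pool.map(crunch, jobs)       # chunks run in parallel across cores
    print(f"{len(results)} chunks done on {cpu_count()} cores")
```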

Some options for future chip designs are discussed in the HPCwire story, including HP’s compute-memory hybrid memristors, which could come to market as a flash substitute this year. Joint research by IBM and Samsung into carbon nanotubes is also mentioned. I think we will see a combination of different technologies come into play as we bump up against the shrinking benefits of process shrinking. (Wow, that's going out on a limb, isn't it?)

The real problem is not that we aren't getting enough cycles out of processors: it's that the speed at which data moves from memory to processor and back again hasn't increased all that much over the past several years. That's the biggest bottleneck we're facing, and faster processors with more cores don't really solve it unless the problem set is completely parallel.
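That caveat is essentially Amdahl's law: if any serial fraction remains (time spent waiting on memory, say), piling on cores stops paying off fast. A quick sketch with illustrative numbers:

```python
# Amdahl's law: overall speedup from n cores when only a fraction p
# of the work can actually run in parallel.

def amdahl_speedup(p, n):
    """Theoretical speedup with parallel fraction p on n cores."""
    return 1.0 / ((1.0 - p) + p / n)

for cores in (2, 4, 8, 64):
    # With 10% of the time stuck on serial work (e.g. waiting on memory),
    # the payoff from extra cores flattens out quickly.
    print(f"{cores:>2} cores, 90% parallel: {amdahl_speedup(0.9, cores):.2f}x")
```

With a 90 per cent parallel workload, 64 cores buy you less than a 9x speedup.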

What's the solution? I have no idea ... but people who are much better equipped than I are working on it. All I know is that it is going to need a cool name ... maybe something with "turbo" or "fire" in it. ®
