Microsoft's new 'Adam' AI trounces Google ... and beats HUMANS

Asynchronous design crucial in giving 120-machine cluster the edge in neural-network wars

The battle for neural-network dominance has heated up as Microsoft has developed a cutting-edge image-recognition system that has trounced a system from Google.

The company revealed "Project Adam" on Monday and claimed that the system is fifty times faster than, and roughly twice as accurate as, Google's own DistBelief system.

In one experiment on the ImageNet benchmark, Project Adam sorted millions of input images into around 22,000 categories, picking the right category 29.8 percent of the time, versus around 15.8 percent for Google's system and around 20 percent for a typical human.

Project Adam is a weak artificial intelligence system that Microsoft researchers use to process and categorize large amounts of data. Though it has so far been tested on its ability to recognize traits of images, it would work just as well for learning to tell the difference between different bits of text and audio, Microsoft said.

With Project Adam, Microsoft has figured out how to get a powerful learning algorithm to run on lots and lots of computers that are each crunching numbers at different speeds. Put more technically, Adam "is a distributed implementation of stochastic gradient descent," explained Microsoft researcher Trishul Chilimbi in a chat with El Reg.
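
To make that concrete, here is a minimal sketch in Python – our own illustration, not Microsoft's code – of what a distributed implementation of stochastic gradient descent boils down to: split the training data across workers, have each compute a gradient on its own slice, and let a central parameter store apply the combined update. The toy model (fitting y = w * x), the data and the learning rate are all invented for the example.

import random

random.seed(0)
w = 0.0                                             # shared model: fit y = w * x, true w is 3
data = [(x, 3.0 * x) for x in (random.uniform(0.1, 1.0) for _ in range(400))]
shards = [data[i::4] for i in range(4)]             # four "machines", one data shard each
lr = 0.5                                            # learning rate

def shard_gradient(w, shard):
    # each worker draws a small random batch from its shard and returns the
    # average gradient of the squared error on that batch
    batch = random.sample(shard, 20)
    return sum(2.0 * (w * x - y) * x for x, y in batch) / len(batch)

for step in range(50):                              # the central "parameter store" loop
    grads = [shard_gradient(w, s) for s in shards]  # in a real cluster these run in parallel
    w -= lr * sum(grads) / len(grads)               # apply the averaged update

print(f"learned w = {w:.3f}")                       # converges towards 3.0

In Project Adam the model is a deep neural network rather than a single weight and the workers are separate machines in the cluster, but the division of labour is the same.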

Though Project Adam uses the same type of learning algorithm as that pushed by Google – "the fundamental training algorithms to train these networks, they're not really new, they're from the 80s," Chilimbi notes – it does so using fewer computers that have been tied together in a more efficient way.

This is because Project Adam is built around asynchronous updates: its machines do not wait for one another before changing the shared model. The surprising thing is that this asynchrony may actually improve the system's accuracy, not just its speed.

"We hypothesize that this accuracy improvement arises from the asynchronous in Adam which adds a form of stochastic noise while training that helps the models generalize better when presented with unseen data," Microsoft's researchers write in a paper describing the tech and seen by The Register. The paper is still in review, and not public.

Because Adam is built around asynchronous updates, parts of the system are occasionally handed unexpected bits of data, which they then have to train and optimize against. Just as spicing up a boring work day in front of a computer with something non-work related – a sudden bout of creative swearing, say, or going to a window and leering at pedestrians on the street below – can give a useful jolt to our own grey matter, Project Adam is able to learn more efficiently by sometimes being given out-of-order data.
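
Here is a similarly rough sketch of that asynchronous twist – again our own toy code rather than anything from Redmond: worker threads share a single weight and update it without waiting for one another, so reads can be stale and writes can clash, and that disorder behaves like extra noise in the training run.

import random
import threading

random.seed(0)
params = [0.0]                                      # shared weight, deliberately unlocked
data = [(x, 3.0 * x) for x in (random.uniform(0.1, 1.0) for _ in range(2000))]
lr = 0.1

def worker(shard):
    for x, y in shard:
        w = params[0]                               # read a possibly stale weight
        grad = 2.0 * (w * x - y) * x                # gradient of (w * x - y) squared
        params[0] = w - lr * grad                   # write back; clashes are tolerated, not prevented

threads = [threading.Thread(target=worker, args=(data[i::4],)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(f"learned w = {params[0]:.3f}")               # still ends up near 3.0 despite the disorder

In the real system the workers are whole machines talking to shared parameter servers rather than threads in one process, but the tolerance for stale reads and out-of-order writes is the same idea.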

The asynchronous approach "allows you to jump out of unstable local minima to local minima that are better," Chilimbi told us.

"Say I'm in a small submersible at the bottom of the ocean and trying to find the deepest point and have very limited visibility around me. If I go in some ridge somewhere and get stuck and look around I think I'm in the deepest spot.

"Now, say, I also have some kind of propulsion system which allows me to jump out of some of these deep things that are not super, super deep, this gives me an opportunity if I jump out of some of these things as a way to find other things significantly deeper."

As for the future, it's likely Microsoft will work to integrate Project Adam into its own products, just as Google has done with its image-recognition technology.

"While we have implemented and evaluated Adam using a 120 machine cluster, the scaling results indicate that much larger systems can likely be effectively utilized for training large Deep Neural Networks (DNNs)," the researchers wrote.

There's lots more to be done on DNNs, such as lashing multiple data types together to create systems that develop representations of both image and word concepts and tie them together, and Chilimbi admitted there are some things Project Adam still lacks.

In the far future, other areas of AI research are likely to include "more temporal data, much more associative memory," he said – which happens to be the exact area being worked on by former Palm chief and now renegade neuroscientist Jeff Hawkins. ®
