
Hey, boffins, Google wants you to train your AI on video games

'Who knew a constant diet of violence would lead to such a cruel robot army?'

"Honestly, we never thought feeding our nascent artificial intelligence systems with hours and hours of simulated violence could lead to anything bad. It's really a huge surprise!"

At least, that's what we imagine researchers will be saying years from now as they shelter from the orbiting cannons of their uncaring machine-gods, following Google's release on Tuesday of a huge dataset to help boffins train their machine-learning models.

We're many decades away from any true form of independent machine-based intelligence, so the dataset should be welcomed rather than viewed with anxiety: it gives academics a batch of solid, well-understood, well-formatted data to feed their models.

The dataset comprises over 100,000 feature vectors extracted from public YouTube videos of people streaming games, Google said, and has been released under the name YouTube Multiview Video Games Dataset.

Just as Google's own image-recognition tech has become uncannily good at recognizing paper shredders after being fed a steady diet of images uploaded into the Chocolate Factory, this dataset may help academics tweak models that need to process multiple inputs simultaneously, or rapidly make sense of an inscrutable environment.

It contains over 120,000 individual videos, each described by up to 13 feature types drawn from three high-level feature families: textual, visual, and auditory. Each video is labelled with one of 31 labels; 30 of these correspond to popular video games, picked at random from a list of the top 100 games on YouTube as of 2012.
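If you're wondering what that structure looks like in practice, stitching the per-family feature files into a single training matrix (what the literature calls early fusion) might go something like the sketch below. Be warned that the file names and the svmlight-style sparse layout are our assumptions for illustration, not Google's documented format.

```python
# A minimal early-fusion sketch: load each feature family as a sparse matrix
# and stack the columns into one wide training matrix. The file names and
# svmlight-style layout are assumed for illustration only.
from scipy.sparse import hstack
from sklearn.datasets import load_svmlight_file

# Hypothetical per-family files covering the same videos in the same order,
# each carrying the per-video class label (1 of 31) alongside its features.
FAMILY_FILES = ["textual.svm", "visual.svm", "auditory.svm"]

def load_multiview(paths):
    views, labels = [], None
    for path in paths:
        X, y = load_svmlight_file(path)  # sparse features plus row labels
        views.append(X)
        labels = y                       # labels repeat in every family file
    return hstack(views).tocsr(), labels

X, y = load_multiview(FAMILY_FILES)      # X: (n_videos, sum of family widths)
```

From there, X and y drop straight into any off-the-shelf classifier.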

"The dataset should be useful particularly for research on multiview (multimodal) learning, including multiview clustering and/or supervised learning, co-training, early/late fusion, and ensemble techniques," the company wrote. "Neither the identity of the videos nor the class labels (video-game titles) are released."

One of the most problematic parts of training machine-learning models is feeding them the right data in a predictable format.

"Each feature family complements others in providing predictive signals to accomplish a prediction or classification task, for example, in automatically classifying videos into subject areas," explains Google senior software engineer Omid Madani in a separate blog post.

With this data release, Google is sharing some of the crumbs from its heaving table of data with the wider research community, and will likely help researchers trying to train models not only to spot distinct features within videos, but to forge associations between them as well.

For that reason we reckon the team running the Never Ending Image Learner (NEIL) over at Carnegie Mellon University may find this useful. NEIL's purpose in life is to ingest visual data from Google and figure out associations, such as the fact that cars can sometimes be found on top of roads.

Once it gains the capability to look at video data as well, a quick load of this dataset may help it figure out that aggressors should be sworn at and shot, that gold coins must be found, or that damsels should always be saved. Actually, that doesn't sound like such a bad way to live! ®
