Like a celeb going bonkers with botox, Google injects 'AI' into anything it can

Ads giant flashes TPU 2 machine-learning ASIC

By Thomas Claburn, 17 May 2017

Google I/O On Wednesday, Google kicked off its annual developer conference and media spectacle, Google I/O, at the Shoreline Amphitheater, a stone's throw from its Mountain View, California, headquarters.

CEO Sundar Pichai reviewed the requisite user milestones, noting that there are now two billion active Android devices. Then he revisited his long-running oratory about the wonders of artificial intelligence.

Google, he said, is rethinking all its products and services in light of AI-oriented computing, which covers machine learning, image recognition, natural language processing, and other computational processes that give software some semblance of smarts. As a sign of Google's commitment to AI, the advertising giant made its Smart Reply, an AI-flavored email auto-responder, generally available to Gmail users, after a lengthy beta testing period.

Pichai announced the introduction of a service called Google Lens, which he described as "a set of vision-based computing capabilities that can understand what you're looking at and help you take action based on that information."

As an example, he showed the Android camera app displaying the image of a flower, labelled with its name, courtesy of image recognition technology. Google Lens can identify what a smartphone camera is looking at, and is coming to Google Photos and Google Assistant. It can, for example, translate foreign-language text in images, much like Word Lens and Google Translate.

"The fact that computers can understand images and videos has profound implications for our core mission," said Pichai.

Pichai said Google's AI-first approach to computing extends to its data centers. The company has developed a second-generation tensor processing unit (TPU), which it is making available through Google Compute Engine. These cloud-available TPUs are, we're told, each capable of achieving 180 teraflops, and Google's TPU boards, which mount four of them, can be stacked together into pods capable of 11.5 petaflops of computation for machine learning workloads.
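
Taking Google's numbers at face value, the arithmetic works out to 4 chips × 180 TFLOPS = 720 TFLOPS per board, and 11.5 PFLOPS ÷ 720 TFLOPS ≈ 16 boards, or 64 chips, per pod.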

"We want Google Cloud to be the best cloud for machine learning," said Pichai, who also announced the launch of Google.ai, a web destination for developers to learn more about AI software. Pichai characterized it as an effort to make AI more accessible to non-specialists.

Google's TPU 2 chips, four on a board

The TPU is an ASIC: a custom-designed chip from Google. As mentioned above, the web giant claims it can do 180 trillion floating-point operations a second, but did not define what those operations are: they could be 32-bit or 16-bit floating point calculations, or a mix of them, and so on. Google's first-generation TPUs are designed to perform AI inference using 8-bit integers; it's not clear what math precision the second-generation units use.
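
To unpack what 8-bit integer inference means, here is a minimal Kotlin sketch of symmetric linear quantization, the textbook approach to squeezing 32-bit floats into bytes. It is a generic illustration of the technique, not Google's undisclosed scheme:

```kotlin
import kotlin.math.abs
import kotlin.math.roundToInt

// Generic 8-bit symmetric linear quantization: map floats in [-maxAbs, maxAbs]
// onto signed bytes in [-127, 127] via a single scale factor.
fun quantize(values: FloatArray): Pair<ByteArray, Float> {
    val maxAbs = values.maxOf { abs(it) }.coerceAtLeast(1e-8f)
    val scale = maxAbs / 127f
    val quantized = ByteArray(values.size) { i ->
        (values[i] / scale).roundToInt().coerceIn(-127, 127).toByte()
    }
    return quantized to scale
}

// Inference then runs on the bytes; multiplying a result by the scale
// recovers an approximation of the original floats.
fun dequantize(quantized: ByteArray, scale: Float): FloatArray =
    FloatArray(quantized.size) { i -> quantized[i] * scale }
```

The appeal is that 8-bit multiplies need far less silicon and memory bandwidth than 32-bit ones, at the cost of some precision.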

Nvidia's Volta GPUs can, we're told, achieve 120 teraflops, albeit when performing mixed-precision 16- and 32-bit multiply-and-accumulate operations. That figure drops to 15 TFLOPS for pure 32-bit floating-point calculations, according to Nvidia.

Google didn't offer anything in the way of benchmark comparisons, except to say that the TPU2 smokes IBM's Deep Blue, a computer that's about 20 years old – a comparison that is worrying and odd.

As expected, Google introduced Google Assistant for iOS. The new Google Assistant SDK lets third parties incorporate Google Assistant into their products and apps. And this summer, Google Assistant will understand French, German, Portuguese, and Japanese, with more languages to follow.

The war for the living room

After struggling to come up with a product as popular as the smartphone, the tech giants see signs that the long-simmering war for the living room can be won through voice interfaces and AI-leavened cloud services, anchored in the real world by speaker-equipped hardware.

Amazon's success with its Echo devices – Forrester last October estimated that seven million Amazon Echo devices had been sold – has Apple, Google, and Microsoft scrambling to build bodies for their respective software helpers: Siri, Google Assistant, and Cortana.

Last November, Google released its Home device as a conduit to Google Assistant. On command, the gadget can play music, set timers, control networked lights, and interface with Google's productivity apps.

According to research consultancy Strategy Analytics, more than four million "Intelligent Home Speakers" were sold globally during the fourth quarter of 2016, with Google's Home accounting for 10 per cent of the market. Amazon's Echo accounted for 88 per cent. The number-crunching biz expects US and European service operators to join the fray with their own surveillance-and-sales interfaces... er, intelligent speakers.

For Google, spoken queries are not much of a stretch from text-based search queries. Add a bit of natural language processing and it's more or less business as usual, for situations where an Android phone might not be the ideal channel to communicate with the mothership.

Add the potential to integrate third-party software, hardware, and services, and spoken commands start to look like a platform, which is where Google's strength comes into play.

In February, Google addressed a major utility gap between Home and Amazon's Alexa-powered Echo: it introduced the ability to buy physical goods from Google Express retailers such as Costco, Walgreens, and Whole Foods.

Google Home is becoming more useful still, through the ministrations of developers and the ill-considered push to embed networking hardware in every object. Senior director of product management Valerie Nygaard said that Google Assistant can now handle payments, authentication, notifications, and account creation.

Noting that there may be times when you don't want to address Google Assistant out loud, Google engineering director Scott Huffman said the software now accepts text typed into a phone. And soon, he said, it will understand something about objects visible to a connected camera. This will allow Assistant to field queries about what you're looking at.

Google Assistant has gained the ability to schedule appointments and will soon support reminders. Through Google Home, it will be able to place free voice calls in the US and Canada later this summer, and will soon work with Spotify's free tier, SoundCloud, and Deezer. Also later this year, Assistant will be able to use image recognition to answer questions from Chromecast users about what's being watched on screen.

Android O

Google revisited Android O, the as-yet-unnamed next iteration of its mobile operating system. The software reflects a focus on improving device battery life. Features like background execution limits and background location limits attempt to curtail power-hungry processes and location lookups.
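
For developers, the practical upshot is deferring work to the OS rather than keeping services alive. Below is a minimal Kotlin sketch using Android's JobScheduler API; the SyncJobService class and job id are illustrative, not taken from Google's announcement:

```kotlin
import android.app.job.JobInfo
import android.app.job.JobParameters
import android.app.job.JobScheduler
import android.app.job.JobService
import android.content.ComponentName
import android.content.Context

// Illustrative JobService: the system invokes it once the job's conditions are met.
class SyncJobService : JobService() {
    override fun onStartJob(params: JobParameters): Boolean {
        // Do lightweight work here, or hand off to a worker thread and return true.
        jobFinished(params, /* wantsReschedule = */ false)
        return false // no work still running
    }

    override fun onStopJob(params: JobParameters): Boolean = false
}

// Schedule the job so the OS can batch it with other deferred work,
// rather than an app keeping a power-hungry service alive in the background.
fun scheduleSync(context: Context) {
    val job = JobInfo.Builder(1001, ComponentName(context, SyncJobService::class.java))
        .setRequiredNetworkType(JobInfo.NETWORK_TYPE_UNMETERED) // wait for Wi-Fi
        .setRequiresCharging(true)                              // and for the charger
        .build()
    val scheduler = context.getSystemService(Context.JOB_SCHEDULER_SERVICE) as JobScheduler
    scheduler.schedule(job)
}
```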

It also includes a revised notification system called notification channels. Rather than lumping all of an app's notifications together, the new approach lets apps sort their notifications into separate channels, each of which users can control individually.
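
In code, a channel is just an object registered with the system's NotificationManager. Here's a minimal Kotlin sketch, with an illustrative channel id and name:

```kotlin
import android.app.NotificationChannel
import android.app.NotificationManager
import android.content.Context

// Register a channel once; the user can then silence or tweak this
// category of notifications without muting the whole app.
fun createNewsChannel(context: Context) {
    val channel = NotificationChannel(
        "breaking_news",                   // illustrative channel id
        "Breaking news",                   // user-visible name
        NotificationManager.IMPORTANCE_DEFAULT
    )
    channel.description = "Alerts for major stories"
    val manager = context.getSystemService(Context.NOTIFICATION_SERVICE) as NotificationManager
    manager.createNotificationChannel(channel)
}
```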

The big news for Android developers is that Google is now supporting Kotlin as a development language, alongside Java and C++. Android devs may also appreciate the addition of Google Play Protect, an Android app security scanning service, and Play Console Dashboards, for identifying app coding problems – and a sneak peek of Android Studio 3.0 is now out.
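
To illustrate part of Kotlin's appeal, a single-line data class replaces the constructor, accessor, equals, hashCode, and toString boilerplate Java demands:

```kotlin
// One line of Kotlin generates the constructor, accessors,
// equals/hashCode, and a readable toString.
data class Device(val model: String, val apiLevel: Int)

fun main() {
    val pixel = Device("Pixel", 26)
    println(pixel) // prints: Device(model=Pixel, apiLevel=26)
}
```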

Android O will also gain support for TensorFlow Lite, a version of Google's machine learning framework designed to be used on mobile devices. Because AI is everywhere. ®
