What 2017 holds for AI: Will you fear or embrace our machine overlords?

GPU engines are out and AI translators will help you pull

From voice translation to self-driving automobiles, AI's impact on everyday life will become more and more apparent this year. The AI and deep learning market will see even more rapid technological advancement, rapid growth and adoption, and increasing competition across both hardware and software platforms. While AI fears will remain, the public will become more cognisant of, and comfortable with, social media AI applications.

Deep learning training: GPUs and more

Deep learning training lends itself to what we call "High Density Processing". High density processing applies when algorithms are computationally intensive, with a high ratio of compute operations to bytes moved to and from memory.

In such cases, dense clusters of multicore CPUs hosting accelerator technology can provide highly favourable cost-performance and performance per watt. GPUs, because they excel at high density processing, have made large-scale deep learning computation practical and have dominated the space in recent years.
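To make the "compute operations per byte" idea concrete, here is a back-of-the-envelope sketch (illustrative numbers only, assuming FP32 operands and ideal caching) of the arithmetic intensity of a dense matrix multiply, the workhorse operation of deep learning training:

    # Rough arithmetic intensity (FLOPs per byte moved) of C = A x B
    # for square N x N matrices in FP32. Illustrative only, not a benchmark.
    def matmul_arithmetic_intensity(n, bytes_per_element=4):
        flops = 2 * n ** 3                             # one multiply and one add per inner-product step
        bytes_moved = 3 * n ** 2 * bytes_per_element   # read A and B, write C, assuming ideal caching
        return flops / bytes_moved

    for n in (64, 1024, 8192):
        print("N=%5d  ~%.0f FLOPs per byte" % (n, matmul_arithmetic_intensity(n)))

Intensity grows with matrix size, which is why large training workloads sit firmly in the compute-bound, high density regime that GPUs and similar accelerators are built for.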

In 2017, we will start to see a move from the near monopoly of GPUs for training to hosting on a wide variety of multicore and accelerator technologies. These will include Intel's Knights Landing and Knights Mill chips, and AI accelerators implemented as FPGAs or ASICs. But the GPU will still be widely used.

Short fixed-point arithmetic can offer order-of-magnitude performance advantages over floating point. These low-power solutions with special-purpose architectures can demonstrate better price/performance than even GPUs.
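For the unfamiliar, the idea is to map floating-point weights and activations onto small integers and do the bulk of the multiply-accumulate work in 8 bits or fewer; integer multipliers are far smaller and cheaper in silicon than FP32 units, which is where the power and density advantage comes from. The snippet below is a minimal NumPy sketch of symmetric 8-bit quantisation; real toolchains and ASICs such as the TPU are considerably more sophisticated, so treat it as an illustration of the principle rather than anyone's production scheme:

    import numpy as np

    def quantize_int8(x):
        """Map values onto signed 8-bit integers plus a single scale factor."""
        max_abs = float(np.max(np.abs(x)))
        scale = max_abs / 127.0 if max_abs > 0 else 1.0
        q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
        return q, scale

    def dequantize(q, scale):
        return q.astype(np.float32) * scale

    weights = np.random.randn(4, 4).astype(np.float32)
    q, scale = quantize_int8(weights)
    print("worst-case rounding error:", np.max(np.abs(weights - dequantize(q, scale))))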

Cloudy options abound: Amazon, Baidu and Microsoft have augmented their GPU-based cloud offerings with FPGA options for AI applications, and Google is "supercharging" its Cloud AI with ASICs known as Tensor Processing Units, which employ short arithmetic.

Intel will also be a leader in bringing accelerators to market, and the combination of Knights Landing plus the Nervana Engine technology to be unveiled later this year looks particularly intriguing.

So in 2017, GPU dominance will be eroded. We think it is premature to talk about a "post-GPU" era, and expect GPUs to maintain a very comfortable lead, but we do expect a much richer mix of technologies to emerge.

Deep learning software libraries proliferate

Deep learning isn't just about the hardware; the software libraries that let algorithms exploit that hardware, and that put the technology into more hands, are arguably even more important. We're seeing several libraries battle for dominance in the AI arena. Google's TensorFlow has leapt to the forefront on GitHub, and Intel recently responded with its BigDL deep learning framework for Spark. Theano, Microsoft's CNTK and many others – the vast majority of which have CUDA support – will compete eagerly for developer mindshare. It's too early to call the race, but our prediction is that Microsoft and Intel are the most likely companies to give Google a run for its money.
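For readers who haven't touched any of these frameworks, the sketch below shows roughly what a minimal model definition looked like in TensorFlow's 1.x graph-and-session style at the time of writing; the MNIST-style shapes and the learning rate are illustrative only, not a recommendation:

    import tensorflow as tf

    # A toy softmax classifier over 784-dimensional inputs (flattened 28x28 images).
    x = tf.placeholder(tf.float32, [None, 784])
    y_true = tf.placeholder(tf.float32, [None, 10])

    W = tf.Variable(tf.zeros([784, 10]))
    b = tf.Variable(tf.zeros([10]))
    logits = tf.matmul(x, W) + b

    loss = tf.reduce_mean(
        tf.nn.softmax_cross_entropy_with_logits(labels=y_true, logits=logits))
    train_step = tf.train.GradientDescentOptimizer(0.5).minimize(loss)

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        # sess.run(train_step, feed_dict={x: batch_images, y_true: batch_labels})

The point is that the same few lines of Python run on a CPU or a GPU without modification, which is precisely why the library battle matters: whoever owns the developer's model code owns the route to their hardware and cloud spend.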

What's in it for Microsoft is promotion of their software ecosystem, especially around Big Data and IoT. Intel wants increased hardware sales, not surprisingly. And Google appears to be most interested in growing their developer ecosystem to gather new applications that they can then monetise in areas such as self-driving automobiles.

Voice translation breaks out

Voice translation will be one of the biggest breakout application segments. International travellers will begin to use it regularly on their mobile phones for short conversations, including ordering food and coffee, buying train tickets, and other shopping.

Text translation in messaging apps has become routine, especially for certain language pairs, facilitating communication between lovers, family and friends, and international project team members.

The many "Lost In Translation" occurrences lead to abundant laughter, frustration and misunderstandings, and even breakups. Despite the limitations of machine translation, the appeal will be irresistible. Usage in personal social interaction will initially be much greater than for business. Could this be the killer app for consumer AI?

AI fears will vary

AI fears in some respects will ease as the public becomes more cognisant and comfortable with AI applications that are accessed from, or that support applications running on, their mobile-based social media platforms (Google, Facebook, and Twitter in particular).

But concerns around governments' electronic monitoring of social media content and face recognition in public spaces will remain. Facebook, Twitter and others will struggle with the appropriate level of tuning of AI solutions to filter out fake news, offensive videos, and hate speech. Their complicity with nondemocratic government requests for censorship will grow at the expense of freedom of expression. In addition, concerns around middle-class job losses to automated machine learning systems will continue to grow as the globalisation backlash continues.

Healthcare AI will progress steadily

A raft of AI applications in healthcare – including for diagnosis, patient monitoring, and even clinical trials – will make steady progress, but there will be no major breakthroughs.

Patient interest will grow significantly as AI healthcare case studies become more numerous and positive outcomes are recorded. AI will be seen in a very positive light for medical imaging evaluation and diagnosis, and will begin to deliver significant cost savings. Trial usage for laboratory tests (blood, urine) will grow, but will lag usage for imaging applications significantly.

Although patients might be concerned about robots replacing doctors, it will be enhancement, not replacement, that matters. Generally the patient will not know when doctors and nurses are using AI to support healthcare decisions.

As one example, IBM's Watson technology reached the same diagnosis as oncologists in 99 per cent of cancer cases examined, yet it was also able to explore a wider range of options, since it can search the medical literature extremely rapidly. Second opinions will be arrived at in real time, which will save everyone time and money.

Self-driving cars will hit speed bumps

Self-driving automobile ("auto-automobile") technology will advance, but will experience speed bumps and citizen backlash as a growing number of trials leads to more accidents, including fatalities, even as statistics point to a significant reduction in accidents. Local and national government restrictions will tighten, and trials will increasingly be focused on lower-risk driving scenarios.

In the high-risk arena, military interest in self-driving ground-based vehicles will become very evident, due to the potential savings of lives and money, and the prospects for using such vehicles to confuse the enemy since soldier casualties will be removed from the equation.

So, to sum up: You're not going to exclusively use GPUs as your AI engine forever, and you're going to have a wide range of choices when it comes to AI libraries. You'll be using AI language translators for pick-up lines on your next international business trip, making you more comfortable with AI applications, but you'll still be afraid of what the government might do with the same technology, and that an AI might take your job.

You'll be healthier because AI medical care applications will speedily diagnose and recommend treatments for your injuries and ills. And, finally, there's a slightly better chance you'll need this enhanced medical care since self-driving cars will be tested in greater numbers. Phew.
