
El Reg drills into chatbot hype: The AIs that want to be your web butlers

So many things to solve, eg: how can there be conversation without memory?

By Katyanna Quach, 18 Jan 2017

Analysis “Alexa, are you the best chatbot in town?” “Sorry, I don’t understand the question I heard,” she replies.

Alexa doesn’t know. Nobody does. For a while, Apple had the lead with Siri: the virtual assistant first appeared in October 2011 on the iPhone 4S. Fast-forward five and a bit years and now every major tech player has one of these chatbots, or is in the process of developing one.

Amazon and Google have gone head-to-head with Alexa-powered Echo and Google Home speakers – both voice-controlled assistants. Microsoft has Cortana on its computers, and Apple has Siri on its Macs. On mobile phones, Apple will have another rival to deal with, as ex-Siri developers are in the process of building a virtual assistant for Samsung’s handhelds.

With the rise of smartphones, the share of people accessing the internet through desktop computers has been sliding. Figures from StatCounter, a web traffic analysis tool, show that mobile and tablet internet usage (51.3 per cent) has overtaken desktop usage (48.7 per cent) for the first time worldwide. Once upon a time, people used mice and keyboards to access the web, then touchscreens, and next, well, could it be voice? Rather than pull up your favorite news website, you'll simply ask your phone or speaker out loud: what's happening in Linux today?

Given that the best speech recognition systems now perform slightly better than professional transcriptionists, the boom in interest in chatbots you can hold a conversation with seems natural. As people move to phones and home assistants, and away from desktops, bots will become the new way a lot of folks access information, query services, request stuff, ask for help, use the internet, control computers, and so on – or at least that's the dream.

“It’s something I call the Big Bot Battle,” Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence, told The Register. With web searches being directed through chatbots, the company that builds the most popular robot will be the new gateway to the digital world, Etzioni said, like a “concierge for the internet.”

“The stakes are super high. It’s a trillion-dollar industry. But it’s still the beginning of the race so the jury’s still out,” Etzioni added.

Dan Gailey and Nathan Ross, cofounders of machine-learning startup Radbots, agree.

"Chatbots could totally be a trillion-dollar industry," Gailey told The Register. At Radbots, Gailey and his team are interested in bridging the gap between chatbots and advertising: slipping ads into conversations when the AI feels it's relevant and least likely to irritate the user.

The advantage of chatbot advertising, at least from the service provider's side, is that adverts can be targeted to match users' needs based on the ongoing conversation. "On the web, adverts are fighting for attention, but on chatbots the user gives their undivided attention," Gailey said. The software analyzes the content of the chatter to work out what to advertise and when – an obvious step toward monetizing chatbots.
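How might that work? Here's a minimal sketch, in Python, of conversation-aware ad selection – purely a hypothetical illustration, not Radbots' actual system. The topics, ad copy, and matching rule are all invented for the example:

    from typing import Optional

    # Invented ad inventory, keyed by topic keyword
    AD_INVENTORY = {
        "coffee": "Try the new espresso blend at Example Cafe.",
        "flight": "Compare fares at example-travel.test.",
        "laptop": "Ultrabooks on sale now at example-shop.test.",
    }

    def maybe_pick_ad(message: str) -> Optional[str]:
        """Return an ad only if the chatter mentions a monetizable topic."""
        words = message.lower().split()
        for topic, ad in AD_INVENTORY.items():
            # Crude relevance test: the topic keyword appears in the message
            if any(topic in word for word in words):
                return ad
        return None  # stay silent rather than irritate the user

    print(maybe_pick_ad("know any good coffee places near the office?"))
    # -> Try the new espresso blend at Example Cafe.
    print(maybe_pick_ad("what's the weather like?"))
    # -> None

A real system would need far subtler relevance and timing models, of course – the hard part Gailey describes is deciding when an ad is least likely to irritate.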

Technology these days is polarized: you either become a huge success or a footnote. Snagging that big success will depend on whether or not you can build a chatbot that appeals to everyone – one that is universally useful. And building a truly useful chatbot is hard.

People quickly lose interest in chatbot apps when they realize the assistants are still woefully inadequate. As we've said many times on El Reg, a lot of today's AI systems are smart if you're prepared to act dumb.

If you're lost, it's easier and quicker to go straight to Google Maps than to consult Google's assistant Allo. The bot doesn't give you any directions; it just sends a link to Google Maps anyway – so why use a middleman when you can cut to the chase? (Also, how are you supposed to use a voice-controlled app in a noisy room full of people, especially if they're all shouting at their own devices? Touchscreens and keyboards suddenly look like a godsend.)

Another problem lies in the dialogue systems. Today's models are rigid and can’t understand our idioms and natterings well enough to communicate effectively. They’re just information retrieval systems – ask a simple question and you’ll get a simple answer.

Unlike most AI applications, chatbots aren’t silently humming away in the background crunching numbers and making predictions from stats – they’re at the forefront directly interacting with people, and will have to adapt and become more human-like if they’re to win normal folk over.

AI has to be good enough so 'people can build a rapport with the machine'

Natural language processing and AI emerged in academia in the 1950s, and are now enjoying a massive boost in funding from the private sector.

Amazon is luring AI students with the promise of cash prizes if they can find ways to make Alexa smarter. Twelve teams from various universities have been chosen to take part in the inaugural Alexa Prize. Each team receives a stipend to build its bot, and the winning team will receive $500,000.

The real prize, however, is the bonus $1m that will be given if the winning team shows that their chatbot can speak “coherently and engagingly with humans on popular topics for 20 minutes.”

Two teams are from Carnegie Mellon University, and are led by Alex Rudnicky and Alan Black, both professors with long white beards who have dedicated a significant amount of time to finding ways to give machines a voice and a mind.

Computers don’t have brains – they can’t think and they lack common sense – and they don’t understand and learn language in the same way humans do. Frederick Jelinek, a prominent natural language processing researcher, famously said: “Every time I fire a linguist, the performance of the speech recognizer goes up.” It all boils down to clever engineering, and coming up with the most effective ways to model human communication.

“The main problem is that humans are very good at chatting; they’re good at talking about things that don’t have a specific goal. But with machines it’s harder: you wouldn’t say to Cortana where’s the best place to get a coffee, you’d say where is the nearest cafe?” Black told The Register.

“Humans ask Cortana or Alexa a very targeted question and it gives an answer. It’s not necessarily fun, it’s not making people want to use this ... and they have no affiliation to any particular personal assistant. They need to have a more natural conversation, so people can build a rapport with the machine and feel more content with it.”

It helps to train your model on large datasets of real conversations, such as online forum threads or movie scripts. Computers then use pattern recognition to learn which combinations of words are associated with one another, so the system can match incoming messages with appropriate responses.
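As a toy illustration of that pattern-matching approach – a sketch only, not any production system – you could index prompt-and-reply pairs from a corpus and answer new input with the reply attached to the most similar stored prompt, using off-the-shelf TF-IDF similarity from scikit-learn:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    # Stand-in for a scraped forum or movie-script corpus: (prompt, reply) pairs
    corpus = [
        ("how is the weather today", "Looks like rain this afternoon."),
        ("where is the nearest cafe", "There's one two blocks north."),
        ("tell me a joke", "Why do programmers prefer dark mode? Less glare."),
    ]

    prompts = [prompt for prompt, _ in corpus]
    vectorizer = TfidfVectorizer().fit(prompts)
    prompt_vectors = vectorizer.transform(prompts)

    def respond(message: str) -> str:
        """Return the reply paired with the most similar stored prompt."""
        similarities = cosine_similarity(vectorizer.transform([message]), prompt_vectors)
        return corpus[similarities.argmax()][1]

    print(respond("is there a cafe near here"))
    # -> There's one two blocks north.

This is exactly the "information retrieval" behavior criticized above: the bot can only echo replies it has already seen, with no understanding behind them.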

There’s a limit to how well that will work, however, Black says. To really push chatbots to become more human-like, it “needs to know an awful lot more about the world beyond referencing the weather. It needs to understand humans and predict how they will act, what they should do to build a useful relationship.”

Rudnicky agrees. “Humans come packed with experience about the world, and machines need that knowledge too. It needs to keep track of what’s going on rather than just focusing on content,” he said.

Can communication be reduced to computations?

The first and last steps of building a successful chatbot have largely been cracked. Computers already recognize and process speech well, and are on their way to sounding much more natural when they speak. But the middle stage – understanding and reasoning over information – still needs work.
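In concrete terms, the voice-assistant pipeline reduces to three stages. In this Python sketch – the stub functions and canned strings are invented purely for illustration – the two ends are effectively commodity technology, while understand() is where today's bots fall down:

    def speech_to_text(audio: bytes) -> str:
        # Mature: transcription approaching human parity (stubbed here)
        return "what's happening in linux today"

    def understand(text: str) -> str:
        # The weak middle stage: reasoning, world knowledge, memory
        return "Here are today's top Linux headlines."

    def text_to_speech(reply: str) -> bytes:
        # Mature: increasingly natural synthesized voices (stubbed here)
        return reply.encode("utf-8")

    def handle_utterance(audio: bytes) -> bytes:
        # Solved ends, unsolved middle
        return text_to_speech(understand(speech_to_text(audio)))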

Part of the problem is that chatbots lack a memory component. Conversations work on a turn-by-turn basis, and a bot only really remembers the last message sent.
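The simplest remedy one could imagine is a sliding window over recent turns. This sketch (illustrative only, not how any shipping assistant works) keeps the last few exchanges around as context a model could condition on:

    from collections import deque

    class ConversationMemory:
        def __init__(self, max_turns: int = 5):
            self.turns = deque(maxlen=max_turns)  # oldest turns fall out of the window

        def add(self, speaker: str, text: str) -> None:
            self.turns.append((speaker, text))

        def context(self) -> str:
            """Flatten recent turns into one string for the model to condition on."""
            return " ".join(f"{speaker}: {text}" for speaker, text in self.turns)

    memory = ConversationMemory()
    memory.add("user", "I'm flying to Berlin on Friday.")
    memory.add("bot", "Do you need a hotel as well?")
    memory.add("user", "Yes, near the airport.")  # only resolvable via context
    print(memory.context())

Without something like this, "near the airport" in the third turn is meaningless, because the bot has already forgotten Berlin.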

Making bots more human-like isn’t the only way to make them useful, Robert Dale, CTO at Arria NLG – a UK-based company that uses natural language generation to provide useful insights from data – told The Register.

Adding components so that a chatbot can do more tasks is another way. It’s why Amazon kickstarted the Amazon Alexa Portal and has opened the device up to third-party developers.

Dale has been involved in natural language generation for nearly thirty years, and says there are many unsolved problems. The main issue is that no one has come up with a good system to make computers understand text and the world around them.

“Chatbots are riding on the wave of AI, but I’d argue that there isn’t much intelligence behind them right now. We need to understand how humans communicate first in order to replicate that in machines,” Dale said.

The process of deriving meaning from abstract speech or writing is so natural to humans – yet it's mysterious and difficult to describe, let alone reduce to computations.

Nobody knows how human-like machines will have to be in order to hold conversations. But the line is drawn before any real discussion of the c-word – consciousness. Dismissing the idea of a machine like HAL-9000, both Rudnicky and Black shake their heads at the idea of machines having to be self-aware before they can talk like humans.

“In the future, computers will be advanced enough to converse naturally like humans, but we will always be able to tell the difference,” Rudnicky said. ®
