AI no longer needs to fake it. Just don't try talking to your robots

Mankind's creations are almost better than the real thing

By Alistair Dabbs, 28 Jan 2016

By the early 22nd century, Mega-City One will stretch down the eastern seaboard from Montreal to Georgia. It will be home to some 400 million citizens. Almost all of them will be unemployed.

Judge Dredd’s vast satirical dystopian backdrop in the pages of 2000 AD is one of the comic’s most colourful settings. A predominant theme of life in the city is sheer boredom. All manual, retail, clerical and white-collar labour has been taken over by robots. With the exception of the usual handful of capitalists and criminals, practically everyone survives on benefits.

This is one vision of a near-future in which artificial intelligence and robotics have been developed to such a level that they both eradicate the drudgery of everyday human existence while at the same time removing all purpose from it. So how close are we to reaching this inevitable scientific goal?

From here, it still looks a long way off, but the one thing holding it back is our limited commercial success in affordable, reliable, articulate and independently mobile robotics. Such machines are nowhere to be seen outside universities and laboratories, and even these are full of amazing but slow, clumsy, fragile and mains-tethered demo units that have been outrageously expensive to develop.

The closest anyone not writing a thesis has got to anything resembling an independently moving and ‘thinking’ robot of the Mega-City One type is a robovac. It scuttles around your floor and sucks up dust without additional human intervention – and that’s it. It does not engage you in conversation or make your job redundant, nor is it likely to rise up one day against its human slavemasters.

Yet a robovac is an elementary example of AI and mechanics put into practice. Artificial intelligence does not mean an ability to speak and interact with humans but simply to be automated in an intelligent way. A robovac wakes itself up, recognises the difference between carpet and solid flooring, navigates the furniture and stores itself for recharging when it’s finished. The Dyson 360 Eye even plans out its own work strategy, knows its whereabouts in the room, and keeps track of the bits of floor still remaining to be cleaned.
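
For the curious, the book-keeping involved is less mysterious than it sounds. The toy sketch below – invented for illustration, and bearing no relation to Dyson’s actual code – shows the gist: keep a map of which patches of floor are still dirty, trundle to the nearest one, and head back to the dock when nothing is left.

```python
# Toy sketch of the kind of book-keeping a room-mapping robovac does:
# track which patches of floor are still dirty, always head for the
# nearest uncleaned patch, and go home when nothing is left.
# (Purely illustrative -- no relation to any real vacuum's firmware.)

from collections import deque

ROOM = [
    "#########",
    "#.......#",
    "#..##...#",   # '#' = wall or furniture, '.' = floor to clean
    "#.......#",
    "#...D...#",   # 'D' = charging dock
    "#########",
]

def neighbours(pos):
    r, c = pos
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        if ROOM[r + dr][c + dc] != "#":
            yield (r + dr, c + dc)

def path_to(start, targets):
    """Breadth-first search to the nearest cell in `targets`."""
    seen, queue = {start}, deque([(start, [start])])
    while queue:
        pos, path = queue.popleft()
        if pos in targets:
            return path
        for nxt in neighbours(pos):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [nxt]))
    return []

dock = next((r, c) for r, row in enumerate(ROOM)
            for c, ch in enumerate(row) if ch == "D")
dirty = {(r, c) for r, row in enumerate(ROOM)
         for c, ch in enumerate(row) if ch == "."}

pos, steps = dock, 0
while dirty:                            # keep going while dirty floor remains
    for pos in path_to(pos, dirty)[1:]:
        dirty.discard(pos)              # clean every cell we roll over
        steps += 1
pos = path_to(pos, {dock})[-1]          # job done: head back to the dock
print(f"Cleaned the room in {steps} moves, parked back at the dock {pos}")
```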

Now take your robovac, swap the rollers for a set of rugged fat wheels, add a solar power unit and fit some articulated limbs – it’s a Mars Rover! Over-simplification aside, the comparison is fair. While there is a significant element of remote control involved, the one remaining mobile Mars Rover, Opportunity, has been allowed to do some of its own thinking ever since its Autonomous Exploration for Gathering Increased Science (AEGIS) upgrade. To maximise its remaining time on the red planet, and to reduce the remote-control workload, Opportunity can look around for itself, recognise what it sees and determine which rocks to analyse and which to ignore.
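
Something of the flavour of that decision-making can be sketched in a few lines. The feature names, weights and thresholds below are entirely made up – AEGIS’s real target-selection criteria are rather more sophisticated – but the principle of scoring the candidates in view and picking the best reachable one is the same.

```python
# Much-simplified sketch in the spirit of autonomous target selection:
# score each candidate rock spotted in an image, then analyse only the
# best one that is actually within reach. Features and weights invented.

candidate_rocks = [
    # apparent size (px), brightness contrast, distance from rover (m)
    {"id": "rock_a", "size": 120, "contrast": 0.8, "distance": 2.5},
    {"id": "rock_b", "size": 40,  "contrast": 0.9, "distance": 6.0},
    {"id": "rock_c", "size": 200, "contrast": 0.2, "distance": 1.8},
]

MAX_REACH_M = 4.0   # too far away and it cannot be examined this sol

def score(rock):
    # Bigger, higher-contrast rocks are assumed to be more interesting;
    # closer ones cost less driving time. The weights are arbitrary.
    return (0.5 * rock["contrast"]
            + 0.3 * rock["size"] / 200
            - 0.2 * rock["distance"] / MAX_REACH_M)

reachable = [r for r in candidate_rocks if r["distance"] <= MAX_REACH_M]
if reachable:
    target = max(reachable, key=score)
    print(f"Analysing {target['id']} (score {score(target):.2f})")
else:
    print("Nothing worth the drive -- wait for instructions from Earth")
```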

Don’t underestimate the importance of wheels to early 21st century AI: the day of the self-driving car is almost upon us. It’s not just Google and Tesla but proper nuts-and-bolts manufacturers such as Nissan, Toyota, GM, Volkswagen and Audi who are serious about the idea. California issued its first test permits last September for autonomous cars on its public roads, and Nevada has given Freightliner a licence to let its fully autonomous lorry, Inspiration, onto the highway to put the wind up anyone who has ever seen Steven Spielberg’s Duel.

Statistics dullards have enjoyed pointing out that the 50 self-driving cars buzzing around California at the moment have been involved in at least four accidents already, noting that an eight per cent accident rate – four incidents across 50 cars – is higher than that for human drivers over the same period. However, further investigation reveals that two of these accidents were the result of other cars driving into them, and the other two took place while the driverless cars’ human occupants – a legal requirement at the moment – had chosen to take control of the wheel. Duh!

Will self-driving be the only type of car allowed on public roads in 50 years’ time? Don’t listen to the publicists, listen to the engineers – and the men with spanners and oily overalls are certain this is going to happen.

Even if you remain unconvinced, disconnect the field of AI research from the baggage of mobile robotics and it becomes evident that AI has been making huge strides into everyday life anyway, simply out of public view. The most common of these systems, of course, are expert systems, which are designed to solve very specific problems. Expert systems have been in use for decades for data mining and information retrieval, not least for business information reporting, and this area continues to expand to meet the new demands of big data.

Classic data processing and querying were fine in their day but the introduction of fuzzy logic and research into neural networks raised the bar. Information retrieval systems use AI to hunt, compare, interpret and evaluate before authoring human-friendly reports.

Boring. Boring. Boring.

Boring, yes, but this is how modern antivirus detection and cyber-attack defence work. AIs are out there now, crawling the networks and quite literally looking for trouble. It’s also a reasonable analogy for the way engineers use AI for automated power grid management – balancing the loads, constantly looking for efficiencies and keeping checks on transformers they suspect are about to go “phut.”
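
Stripped to its bones, “looking for trouble” often amounts to flagging anything that strays too far from a learned baseline. The snippet below is a deliberately crude sketch with invented transformer readings; real grid-management and intrusion-detection systems are vastly more elaborate.

```python
# Crude anomaly detection: learn what "normal" looks like, then flag
# readings that sit too many standard deviations away from it.
# The hourly transformer loads below are invented for illustration.

import statistics

hourly_load_mw = [42, 44, 41, 43, 45, 44, 43, 42, 71, 44]   # 71 looks wrong

mean = statistics.mean(hourly_load_mw)
spread = statistics.stdev(hourly_load_mw)

for hour, load in enumerate(hourly_load_mw):
    z = (load - mean) / spread
    if abs(z) > 2:        # more than two standard deviations from normal
        print(f"Hour {hour}: load {load} MW looks suspicious (z = {z:.1f})")
```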

Large retail and factory management systems also rely on AI, not so much for tracking inventories as for anticipating and channelling demand based on interpretation of known inventory traffic. Anyone who went online to purchase a book about healthy cooking after Christmas will have subsequently experienced AI in action as it bombarded them with “related” suggestions for reading material about eating disorders, health scares, illness and death.
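
The mechanics behind those “related” suggestions can be as simple as counting which items keep turning up in the same baskets. The sketch below uses invented order data and ignores everything real retail systems add on top (browsing history, seasonality, margins), but it shows the basic co-occurrence trick.

```python
# The "customers who bought X also bought Y" trick in miniature: count
# which items keep turning up in the same orders, then recommend the
# most frequent co-purchases. The order data is invented, and real
# retail systems layer far more signals on top of this.

from collections import Counter
from itertools import combinations

orders = [
    {"healthy cooking", "juicer"},
    {"healthy cooking", "running shoes"},
    {"healthy cooking", "juicer", "yoga mat"},
    {"thriller novel", "reading light"},
]

co_bought = Counter()
for order in orders:
    for a, b in combinations(sorted(order), 2):
        co_bought[(a, b)] += 1
        co_bought[(b, a)] += 1

def recommend(item, n=2):
    """Return the n items most often bought alongside `item`."""
    related = Counter({other: count for (bought, other), count
                       in co_bought.items() if bought == item})
    return [other for other, _ in related.most_common(n)]

print(recommend("healthy cooking"))   # e.g. ['juicer', 'running shoes']
```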

Healthcare is an area that lends itself to a bit of a helping brain. When it’s not participating in TV quiz shows, IBM’s Watson has been demonstrating how it can help doctors as a data analytics engine for diagnosing symptoms and determining appropriate treatments, making missed-symptom errors less likely.

The problem is that patient-facing AI is still a long way off, at least outside Japan. Everywhere else, the lack of any co-ordinated approach – also known as “shovelling shitloads of public cash into the furnace of fantasy private health sector IT investment” – means that it’s difficult to access the kind of comprehensive and up-to-date health records that AI needs to do its work effectively.

Japan, as ever, turns AI in health care into something bonkers. Recombining AI with robotics, the Riken-SRK Collaboration Centre for Human-Interactive Robot Research in Nagoya has developed a heavy mechanical beast for helping wheelchair-bound patients into beds, baths and so on. Modelled to look like a white cartoon bear (of course), the 140kg meathead can see what it’s doing and react accordingly rather than accidentally crush care home patients to a bloody pulp. Besides, you can hire humans to do that.

The Robobear isn’t particularly smart, but it does bring us back to the subject of humanoid and anthropomorphic robots with AI. These are about as common in people’s homes as a robovac, and for all the hype around Sony’s AIBO at the turn of the century, it never really caught on. It was when smaller, cheaper ripoffs that imitated AIBO – which itself was imitating a dog – appeared in toy shops for less than £50 that the public saw it for what it really was: a hairless, mobile Furby.

Some might argue that even a Furby is an example of AI since it is designed to respond to and learn from its owner. However, it doesn’t really “learn” so much as follow preset progressive rules that determine its character development when it is played with. Take out the batteries for a while to reset it, hand it to another child and Furby will plod through its preset “learning” performance all over again in the same way. Both AIBO and Furby were just next-generation Tamagotchis.
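
In other words, a Furby’s “character development” is closer to a lookup table than to learning. The stage names and thresholds below are invented, but the shape of the logic – progress keyed purely on how much the toy has been handled, wiped clean by a reset – is the point.

```python
# Why a Furby "learning" isn't really learning: its repertoire unlocks
# from a fixed table keyed on how much it has been handled, so a reset
# unit replays exactly the same script. Stages and thresholds invented.

STAGES = [
    (0,   "babbles in Furbish only"),
    (50,  "mixes in a few English words"),
    (150, "responds to its name"),
    (300, "full repertoire unlocked"),
]

def behaviour(interaction_count):
    # Pick the last stage whose threshold has been reached -- no memory
    # of *what* was said to it, only of *how much* it has been handled.
    current = STAGES[0][1]
    for threshold, description in STAGES:
        if interaction_count >= threshold:
            current = description
    return current

print(behaviour(200))   # "responds to its name"
print(behaviour(0))     # after a battery-pull reset: back to square one
```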

More people have experienced rule-based AI in the form of video games. The early games didn’t have this: they just launched into predetermined actions that players soon learnt by heart. Once you’d worked out which direction the Pac-Man ghosts always went, how the Asteroids floated and at what moment the Galaxians dropped down, winning high scores and replays was simply a matter of inevitability.

But applying rules that made various characters act in different ways depending upon circumstance made things challenging at last. Baddies ignore you until you get close, and sharpshooters only start firing when you are in sight and in range. Famously, Doom’s shuffling monsters would even attack each other if you drew them in. Even back in 2008, the likes of Far Cry 2 were using AI to drive behaviours in response not just to sights and sounds but to how the computer-generated characters interpreted what they saw and heard in relation to their own immediate and wider environment.
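
Those behaviours boil down to a short list of if-then rules evaluated every game tick. The sketch below is illustrative rather than lifted from any particular engine – the thresholds and the infighting rule are simplified stand-ins – but it captures the idea.

```python
# A rule-based baddie in a nutshell: ignore the player until they get
# close, only shoot when they are visible and in range, and turn on
# whoever hurt you last -- the trick behind Doom-style infighting.
# Thresholds and names are illustrative, not from any particular engine.

def choose_action(npc, player_distance, player_visible):
    # Rule 1: grudges first -- attack whoever attacked you (infighting).
    if npc["last_attacker"] is not None:
        return f"attack {npc['last_attacker']}"
    # Rule 2: hold fire until the player is both visible and in range.
    if player_visible and player_distance <= npc["weapon_range"]:
        return "shoot player"
    # Rule 3: otherwise ignore the player until they wander too close.
    if player_distance <= npc["alert_radius"]:
        return "move towards player"
    return "idle"

grunt = {"weapon_range": 10, "alert_radius": 25, "last_attacker": None}
print(choose_action(grunt, player_distance=40, player_visible=False))  # idle
print(choose_action(grunt, player_distance=8, player_visible=True))    # shoot player

grunt["last_attacker"] = "imp"   # stray fireball from another monster
print(choose_action(grunt, player_distance=8, player_visible=True))    # attack imp
```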

Not least, the gaming connection with AI has been sealed by graphics card manufacturer Nvidia, which is trying to become the preferred supplier of processors for self-driving cars. It’s all that experience rendering Grand Theft Auto, you see…

When IBM’s Watson beat human contestants at TV game show Jeopardy in 2011, it looked as if we were about to enter an era in which computers could reason for themselves in wholly unexpected situations, albeit in a programmed environment. Thankfully, the human race can breathe one last gasp of relief, as the good old wetware of card sharps showed it could still tie with the world’s most powerful poker program, Claudico, at heads-up no-limit Texas hold’em in Pittsburgh last year.

“Beating humans isn't really our goal,” smarmed professor Tuomas Sandholm of Carnegie Mellon University when the poker tournament results were announced, before adding more creepily: “It's just a milestone along the way.”

Games development and movie CGI have shared a lot of crossover technology. Arguably the most commonly witnessed application of unpredictably fuzzy (as opposed to rule-based) AI in games and movies has been to produce random behaviour in crowd sequences. A randomiser alone won’t do the trick: viewers will still see patterns of repeated movement here and there.

A celebrated, if possibly apocryphal, anecdote about the CGI work on the Lord of the Rings movies recounts that the programmers’ initial attempt at writing AI-based behaviour for the Orcs during battle scenes was too realistic: if the Orc AI perceived it was about to lose a fight, it would drop weapons and scarper. It is not a good look for the film when thousands of CGI Orcs keep buggering off halfway through the rendering. So the story goes, the AI had to be restrained to ensure the Orcs were sufficiently suicidal to remain in the theatre of battle long enough to be variously stabbed, maimed and decapitated by their CGI opponents according to Peter Jackson’s wishes.
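
Whether or not the tale is true, the fix it describes is easy to picture: each simulated soldier weighs up its odds and flees below some threshold, so the director’s cut simply clamps that threshold. The numbers in the sketch below are pure invention.

```python
# The Orc anecdote as a two-line rule: each simulated soldier weighs up
# its odds and flees below some threshold, so the director's cut simply
# clamps that threshold to zero. All numbers are pure invention.

import random

def orc_decision(health, nearby_allies, nearby_enemies, flee_threshold=0.4):
    """Return 'fight' or 'flee' from a crude estimate of survival odds."""
    odds = health * (1 + nearby_allies) / (1 + nearby_enemies)
    return "flee" if odds < flee_threshold else "fight"

random.seed(42)
orcs = [(random.random(), random.randint(0, 5), random.randint(0, 8))
        for _ in range(10)]

realistic = [orc_decision(*o) for o in orcs]
heroic = [orc_decision(*o, flee_threshold=0.0) for o in orcs]   # never flee

print("realistic AI:", realistic.count("flee"), "of 10 scarper")
print("as directed: ", heroic.count("flee"), "of 10 scarper")
```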

Kinect this

Another AI-enhanced approach to computer gaming is seen in the latest generation of motion-sensing devices for consoles, such as Kinect 2 for the Xbox One. It learns the way you want to use it and recognises you through facial recognition, voice recognition and – creepily – full-body 3D motion capture.

Yet it seems that even gaming has begun to play second fiddle to the most insidious form of AI known to the western world: intelligent chatbots, otherwise incongruously known as digital assistants. If they’re not wasting your time on support websites by failing to respond usefully to questions phrased “It don’t work,” they’re being equally unhelpful while suckering you in with silky voices over your mobile phone.

This, of course, is unfair criticism. The processing power required for the voice recognition and vocabulary alone would be too much for a mobile phone without overriding everything else on it, so digital assistants for the moment need an online data connection to work. When digital assistants go wrong, it’s almost always the connection or poor audio quality at fault.
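
The division of labour is easy to sketch: the handset records audio, ships it off to a server that does the recognition and the thinking, and plays back whatever comes home. The endpoint below is made up and the snippet is only a rough illustration of the client side, but it does show why a dead connection leaves an assistant with little to say for itself.

```python
# The division of labour in sketch form: the handset records audio,
# ships it to a server that does the recognition and the "thinking",
# and plays back whatever comes home. The endpoint is hypothetical and
# the interesting part is what happens when the connection is missing.

import json
import urllib.request

ASSISTANT_URL = "https://assistant.example.com/v1/recognise"   # made-up endpoint

def ask_assistant(audio_bytes):
    request = urllib.request.Request(ASSISTANT_URL, data=audio_bytes, method="POST")
    try:
        with urllib.request.urlopen(request, timeout=5) as response:
            return json.load(response)["reply"]    # the server did the hard part
    except (OSError, ValueError, KeyError):
        # No signal, a flaky connection or a garbled response: the
        # on-device fallback can do little more than apologise.
        return "Sorry, I can't reach the network right now."

print(ask_assistant(b"\x00" * 1024))   # fake audio; with no server you get the apology
```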

That said, voice recognition still has some way to go.

With Apple’s Siri now rubbing shoulders with Google Now and Microsoft Cortana, all players are expected to step up their game in how they handle user requests. For example, to date, Siri is at its best when operating built-in iPhone functions, such as playing a song you want to hear or texting a bunch of contacts. Google Now is better at understanding search requests such as: “Where’s a good place to eat lobster at 9pm?”

Newest on the block, Cortana has some catching up to do but makes an impressive effort to adapt itself to the way you speak, at least when you say your name. Adapting, as well as responding, is what smarter AIs should be doing in order to make themselves even smarter.

The worry, of course, is that all our interaction with digital assistants might be in the process of being culled for unrelated nefarious purposes. This was even the opening conceit of Ex Machina, suggesting that our computers and mobile devices are using their microphones and cameras to spy on us all the time, learning our facial expressions, listening to our vocal expletives and documenting our body language. This way, the machines and their commercial masters can learn how to better manipulate us – into parting with our cash first, then eventually parting with our jobs, then eventually our existence.

Recently, a number of tech industry and academic names have spoken of their concerns about allowing advances in AI to continue without at least some moral guidelines. Notably, Stephen Hawking and Elon Musk have worried about super AIs taking over, although Hawking seemed more disturbed by the notion of being rendered obsolete by physically agile devices, while Musk shared the Hollywood view of ultra-mental combat bots in the future blasting us all to atoms.

AI boffin Andrew Ng, for one, says he doesn’t care, arguing that if violent conflict arises between humans and machines, he won’t be around to see it anyway.

Much more likely in the short term is that machines will just continue to swallow more jobs. Soon enough, the solicitor, journalist and pharmacist will follow the typist into oblivion. This will lead to a rise in demand for programmers and electronics engineers… up until the point when the machines can tweak their own code and maintain each other without human help. Then all the programmers and engineers can join the dole queue, and we’ll be living the Mega-City One scenario. Thatcher would be proud.

At some point, perhaps, no-one will have a job any more, which would mean no-one would have any money to buy or run the machines anyway, and the world economy would be forced to reboot.

In the meantime, the scariest metal mickey that the robot army can muster so far appears to be those self-service scanning tills in supermarkets. If the worst threat they can scream out is “Unexpected item in bagging area”, robot armageddon – even when mused upon by someone whose rarely voiced opinion is actually worth listening to, such as Steve Wozniak – looks very distant indeed.

It’s enough to send you futsie. ®
