Are Asimov's laws enough to stop AI stomping humanity?

Data, and who has it, is the real concern

Blade Runner, the film inspired by Philip K Dick's book Do Androids Dream of Electric Sheep?, is 35 years old this year.

Set in a dystopian Los Angeles, the story centres on the tracking down and killing of a renegade group of artificial humans – replicants – escaped from space and trying to extend their lifespans beyond the built-in four years.

The story is set in 2019. Surprise: we aren't exactly there in terms of the future Hollywood envisioned in 1982, so Hollywood has seized the opportunity to hurl the idea even further into the future, to 2049, with a new telling.

Opportunism, sure, but while we are a million miles away from flying police cars and sushi bars, the stirrings of AI are evident. And so are the concerns. The Elon Musk school of thought believes that AI is scary and the robots will kill us all. Musk was tweeting only last weekend about the need for AI to be regulated.

It's the reason Musk helped found OpenAI, a non-profit AI research company with a mission to chart a safe path for AI. There is another school of thought, however, led by Facebook chief Mark Zuckerberg. No, they say, AI is not scary and the robots will help us (or at least help Facebook and its hyper-targeted marketing and site automation). It has led to a well-publicised spat, if you can call it that, but who is right?

"Fight!" we say, to borrow a Harry Hill approach to diplomacy. Perhaps there's truth in both schools, but are we really dealing with truth here? Isn't this all just a bit sci-fi and geeky?

"It's rather tedious," says Professor Alan Winfield, an expert in AI and robotics ethics at the Bristol Robotics Laboratory, part of the University of the West of England. "Are either of them [Musk and Zuckerberg] AI developers or researchers? I don't really share Musk's concerns about an existential threat from super intelligence as we have no idea how to build super intelligence yet. We can't obsess about this."

While Winfield suggests that "some serious researchers" do have legitimate apprehensions, a more pressing concern is the fact that the AI landscape is dominated by a small number of very large firms.

"My primary concern is the potential for increasing wealth inequality and the impact this could have on society," says Winfield, adding that the private sector is benefiting yet again from research funded primarily from taxpayers' money. "Automation should help create wealth that is shared by all."

So, can we in any way build this into a set of rules for AI, robots and roboticists? Do these rules already exist?

In the back of everybody's mind – of course – are Isaac Asimov's Three Laws of Robotics. It's Asimov's writing 75 years ago that established the code of conduct for AI and robots.

To remind you:

  • A robot may not injure a human being or, through inaction, allow a human being to come to harm
  • A robot must obey the orders given it by human beings except where such orders would conflict with the First Law
  • A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws

These rules clearly saw robots and AI as servants of man, and seven decades later they still hang over the industry. It's interesting how works of fiction hold such weight over how we consider robots today, and yet there are many actual guidelines already in existence with others in progress.

The EPSRC Principles of Robotics, British Standards' BS 8611, the IEEE global ethics initiative and IEEE P7001 – Transparency in Autonomous Systems are some of the guidelines currently being used by the industry. Winfield admits they are heavily academic, with the possible exception of the IEEE's work. There certainly needs to be some accountability, and an idea presented by Winfield and Professor Marina Jirotka from the University of Oxford's Department of Computer Science – an "Ethical Black Box", a robot equivalent of an aircraft's flight data recorder – is an interesting one, especially when you think of AI and robots being used in autonomous vehicles.
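The concept is easy to picture in code. Here is a minimal, hypothetical sketch – not Winfield and Jirotka's actual design – of the kind of append-only decision log such a recorder might keep for an autonomous vehicle; the class name and fields are illustrative assumptions.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
import json

@dataclass
class EthicalBlackBox:
    """Hypothetical flight-recorder-style log of a robot's inputs and decisions."""
    records: list = field(default_factory=list)

    def log(self, sensors: dict, decision: str, rationale: str) -> None:
        # Append a timestamped record of one decision and the data behind it.
        self.records.append({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "sensors": sensors,
            "decision": decision,
            "rationale": rationale,
        })

    def export(self) -> str:
        # Dump the whole log for post-incident analysis.
        return json.dumps(self.records, indent=2)

# Hypothetical usage inside a driving loop
box = EthicalBlackBox()
box.log(
    sensors={"lidar_range_m": 4.2, "speed_kph": 38, "pedestrian_detected": True},
    decision="emergency_brake",
    rationale="pedestrian within stopping distance",
)
print(box.export())
```

The point is less the code than the discipline: if every decision is recorded alongside the sensor data that prompted it, investigators can reconstruct why a machine acted as it did.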

So are the guidelines enough? As the big firms, such as Google and SoftBank, jostle to advance research and development, can these frameworks keep them in check or are they living by their own rules?

"We need to make sure companies work within the frameworks," says Winfield, adding that firms need "ethics boards or advisors" to keep an eye on what businesses are doing, particularly when it comes to AI and robotics.

DeepMind, an AI research business bought by Google in 2014 for £400m, has recently talked at length about the interplay between neuroscience and AI, calling for the two fields to work together more closely to "bolster our quest to develop AI" and "better understand what's going on inside our own heads".

In September last year DeepMind – along with Amazon, Apple, Facebook, IBM and Microsoft – founded Partnership on AI, a sort of ethics and PR committee tasked with promoting safe development. Its "Thematic Pillars" are a little fluffy. They say the right things on safety, society and social good but perhaps the organisation's best work is yet to come. Bringing together the industry seems like a good idea – in May it announced a bunch of new partners and institutions joining to engage in dialogue – but there's a long way to go if this is to become something more than just an exclusive club of vested interests.

The fear of course is that a number of similar bodies pop up and we start seeing differences of opinion and approach – God forbid we get a VHS/Betamax standoff – and that this could lead to breaches in safety and security. It's not such a leap to see how, despite all the good intentions, work being developed by existing or future AI firms could be corrupted.

Interestingly, DeepMind and OpenAI have collaborated on research into a reinforcement learning algorithm guided by human preferences. Reinforcement learning has already been used to teach machines to play games like Pong and to handle driving simulations. "Knowledge" is acquired by chasing rewards based on human judgement: assuming that judgement is sound, the machine learns to improve its own decision making, choosing the "better path" that will earn human approval.
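To make the idea concrete, here is a minimal, hypothetical sketch of preference-based reward learning – not the DeepMind/OpenAI code – in which a linear reward model is nudged so that trajectory segments the "human" prefers score higher. The toy data and names are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def segment_features(segment):
    """Sum per-step feature vectors over a trajectory segment."""
    return np.sum(segment, axis=0)

def preference_probability(w, seg_a, seg_b):
    """P(human prefers A over B) under the current reward weights w."""
    ra = w @ segment_features(seg_a)
    rb = w @ segment_features(seg_b)
    return 1.0 / (1.0 + np.exp(rb - ra))

def update(w, seg_a, seg_b, human_prefers_a, lr=0.1):
    """One gradient step on the cross-entropy loss for a single comparison."""
    p = preference_probability(w, seg_a, seg_b)
    target = 1.0 if human_prefers_a else 0.0
    grad = (p - target) * (segment_features(seg_a) - segment_features(seg_b))
    return w - lr * grad

# Toy data: segments are (steps x features) arrays; the simulated "human"
# secretly prefers segments with a larger first feature.
w = np.zeros(3)
for _ in range(500):
    seg_a = rng.normal(size=(10, 3))
    seg_b = rng.normal(size=(10, 3))
    prefers_a = segment_features(seg_a)[0] > segment_features(seg_b)[0]
    w = update(w, seg_a, seg_b, prefers_a)

print("learned reward weights:", w)  # the first weight should dominate
```

The learned reward then stands in for a hand-written reward function when training the agent, which is what makes human judgement the thing being optimised for.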

It's not foolproof of course, and it's still early days, but it is the start of developing a process by which safety can be inherent in any AI capability – if indeed all AI companies, now or in the future, buy into it. To that end, the idea of a global standard seems crucial but, you would expect, extremely difficult to draw up and to get widely adopted.

OpenAI agreed.

"Broadly, safety is a young field in this part of AI," a spokesperson told The Reg. "It needs to mature more before we talk about stuff like standards, adoption of best practices, and so on," adding that global standards are "a long way off".

So what do we do about the inevitable surge in innovation and startups that will come in the next few years? How do we bind them to a set of principles if we don't develop parameters now? Maybe this in itself is an opportunity, although according to Ray Chohan, senior vice president of corporate strategy at patent experts PatSnap, there has been little action in terms of ethics-related invention on the patent front.

"The number of new inventions that specifically address the need for ethics-aware AI are thin on the ground," he says. "One patent filed in May 2017 proposes a 'Neuro-fuzzy system', which can represent moral, ethical, legal, cultural, regional or management policies relevant to the context of a machine, with the ability to fine-tune the system based on individual/corporate/societal or cultural preference. The technology refers to Boolean algebra and fuzzy-logic. While it appears there are some people who are looking into this area, much more investment will be needed if real progress is to be made."

Chohan adds that Natera, a company involved in genetic testing, has filed the most patents relating to ethics or morality in AI and machine learning, mainly because it handles extremely personal patient data.

For the moment, at least, this will be our entry point – our introduction to AI. While AI may not yet be true AI, able to compete with us emotionally or learn to actually deride us for being human, the companies working in this field may well be able to mess with our personal data and privacy.

When you consider the main protagonists – Google, Facebook, Apple et al – alarm bells should start ringing. These companies are obsessed with our personal data and would stand to gain most from any AI-driven strategy to harness it and use it more intelligently. According to Chohan, the number of artificial intelligence patents concerning ethics or morality spiked in 2014 – when the European Union's General Data Protection Regulation was being widely discussed, particularly the "right to be forgotten".

This is perhaps of more immediate concern than whether or not AI-enabled robots will turn on us. ®
