
Regulate, says Musk – OK, but who writes the New Robot Rules?

Cause, accountability, responsibility

By Marc Ambasna-Jones, 13 Sep 2017

When the Knightscope K5 surveillance bot fell into the pond at an office complex in Washington, DC, last month, it wasn’t the first time the company’s Future of Security machines had come a cropper.

In April, a K5 got on the wrong side of a drunken punch but still managed to call it in, reinforcing its maker’s belief that the mobile security unit resembling Star Wars’ R2-D2 has got, err, legs. However, while a robot rolling the wrong way into a pool of water may not exactly be life-threatening, increased automation, robots and AI-enabled machinery will touch lives, from autonomous vehicles through to shelf-stackers in supermarkets and even home care assistants.

So, what happens when robots and automation go wrong and who is responsible? If a machine kills a person, how far back does culpability go and what can be done about it?

“Current product liability and safety laws are already quite clear on putting the onus on the manufacturers of the product or automated systems, as well as on the distributors and businesses that supply services for product safety,” says Matthew Cockerill of London-based product design firm Seymourpowell.

He’s right of course. Product liability and safety laws already exist – the UK government is unequivocal on the matter – but we are talking here about technology that can learn to adapt, that is taking automation outside of the usual realms of business. Surely this can throw up a different set of circumstances and a different set of liabilities?

“I’d expect, certainly in the short term, the major difficulties to be around determining the liability from a specific accident or determining if an automated system has really failed or performed well,” adds Cockerill. “If an autonomous system acts to avoid a group of school children but then kills a single adult, did the system fail or perform well?”

Good question, although if a machine takes any life it is surely a fail. In this scenario, who would be to blame? Would developers, for example, be liable?

Urs Arbter, a partner at consultancy firm Roland Berger, suggests that in some cases this may happen. “AI is reshaping the insurance industry,” he says, and although he believes risk will decline with increased automation, especially with autonomous vehicles, “there could be some issues against developers.” Insurance companies, he says, are watching it all closely, and although regional requirements will vary with local laws, there is room for further regulation.

Elon Musk would agree. A recent tweet by the Tesla founder claimed that AI is now riskier than North Korea. He followed it up with another tweet saying: “Nobody likes being regulated, but everything (cars, planes, food, drugs, etc) that’s a danger to the public is regulated. AI should be too.”

Easier said than done, but according to Chi Onwurah, UK Labour MP for Newcastle Central and Shadow Minister for Industrial Strategy, Science and Innovation, it’s not only Musk who has suggested that regulators and legislators need to consider AI. She points to Murray Shanahan (professor of cognitive robotics at Imperial College London), Chetan Dube (founder of IPsoft), Cathy O’Neil (author and mathematician) and many others, herself included, as believing that we need to reference AI in deciding how our regulatory and legislative framework needs to evolve.

“This is not ‘regulating against a potential threat,’ but protecting consumers, citizens, workers now and in the future, which is the job of government,” Onwurah told us. “Good regulation is always forward looking otherwise it is quickly obsolete, and the current regulation around data and surveillance is a prime example of that.”

She suggests there is a precedent too, referring to when communications regulator Ofcom regulated for the convergence of telecoms, audiovisual and radio before it happened.

“There was a long period of debate and discussion with a green paper and a white paper before the 2003 Communications Act was passed, with the aim of looking forward ten years and anticipating some of the threats as well as the opportunities,” says Onwurah.

“This government unfortunately has neither the will nor the intellectual capacity to look forward ten weeks, and as a consequence any AI regulation is likely to be driven by the European Union or knee-jerk reactions to bad tabloid headlines.”

Knee-jerk is something we are used to – we’ve seen a lot of it recently in reaction to growing cyber security threats – but still, should we be going unilateral on this? Regulation seems a little pointless in the wider AI scheme of things if it isn’t multilateral, and we are a long way off that being discussed, let alone becoming a potential reality.

Starting small

It will probably start small, such as the UK gov’s recent guidelines (not regulations) on dealing with potential cyber-attacks on smart, connected cars, or the expected Automated and Electric Vehicles Bill, due to be introduced later this year, which aims to create a new framework for self-driving vehicle insurance.

The fear, of course, is that politicians either don’t go deep enough, leaving plenty of easily exploited loopholes, or clamp down to the extent that it interferes with the development of AI. When you think about it, regulating against a potential threat, rather than reacting to an existing one, is unusual and perhaps unprecedented.

“Regulations that impede progress are rarely a good thing, if ‘we’ believe that progress to have an overall benefit to society,” warns Karl Freund, senior analyst for HPC and deep learning at Moor Insights and Strategy. OK, so what happens if something goes wrong? Would governments be held to account for not regulating?

“Perhaps an analogy might help,” explains Freund. “If your brake lights fail, and a car crashes into you, who is at fault? The other driver, right? He should have been more careful and not totally relied on the technology of the tail light. If the autopilot of an ADAS-equipped car fails, we may want to sue someone, but I am pretty certain these systems will warn the driver that they are not foolproof, and that the driver engages the autopilot with that understanding.

“And of course, most analysts, if not all, would agree that these systems will save thousands or even tens of thousands of lives every year once widely deployed, with a very small and acceptable error rate. Just like a vaccine can save lives, but a small percentage of patients may experience adverse side effects. But that risk is worth the benefit to the total population.”

Ah, the greater good. Freund makes an understandable point that there will probably be waivers, something Arbter suggests could lead to more personalised insurance policies with premiums to match. He adds that this should not mean higher prices, but you get the feeling that someone, somewhere will pay for it all – probably those who can least afford it.

So if a machine goes wrong, how will we really know the culprit?

According to Alan Winfield, professor of Robot Ethics at the Bristol Robotics Laboratory, part of the University of the West of England, this is where his ethical black box idea comes in. Robots and autonomous systems, he says, should be equipped with the equivalent of a Flight Data Recorder to continuously record sensor and relevant internal status data. The idea is that it can establish cause, accountability and responsibility in the event of an accident caused by a robot or AI-enabled machine.

“It should always be possible to find out why an AI made an autonomous decision,” says Winfield, referring to it as “the principle of transparency”, one of a set of ethical principles being developed by the IEEE.
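
To make the idea concrete, here is a minimal sketch, in Python, of what such a recorder might look like: an append-only log whose entries are hash-chained so after-the-fact tampering can be detected, with a plain-language rationale stored alongside each decision in the spirit of that transparency principle. The class and field names are illustrative only, not taken from Winfield’s proposal or the IEEE’s draft principles.

```python
# Illustrative sketch of an "ethical black box": an append-only,
# hash-chained log of sensor snapshots and the decisions taken on them.
# All names here are hypothetical, not from any published specification.

import hashlib
import json
import time


class EthicalBlackBox:
    """Append-only recorder; each entry is chained to the previous one,
    so altering the record after the fact is detectable."""

    def __init__(self):
        self._entries = []
        self._prev_hash = "0" * 64  # genesis value for the hash chain

    def record(self, sensors: dict, decision: str, rationale: str) -> None:
        entry = {
            "timestamp": time.time(),
            "sensors": sensors,      # raw or summarised sensor readings
            "decision": decision,    # the action the system chose
            "rationale": rationale,  # why it chose it (transparency principle)
            "prev_hash": self._prev_hash,
        }
        digest = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        entry["hash"] = digest
        self._entries.append(entry)
        self._prev_hash = digest

    def verify(self) -> bool:
        """Recompute the chain to confirm no entry has been altered."""
        prev = "0" * 64
        for e in self._entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if body["prev_hash"] != prev:
                return False
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if recomputed != e["hash"]:
                return False
            prev = e["hash"]
        return True


# Example: log one decision cycle, then check the record's integrity.
box = EthicalBlackBox()
box.record(
    sensors={"lidar_min_range_m": 1.8, "speed_kph": 24},
    decision="emergency_brake",
    rationale="obstacle inside stopping distance",
)
assert box.verify()
```

An accident investigator could then replay the chain: verify() gives some confidence that the log handed over is the log the machine actually wrote, and the rationale field records why each action was taken.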

No hiding, then, and that’s rather the point. Why should AI be treated any differently from humans? If the AI contravenes established law, then the owner of the machine and the developer – if it’s proven that guidelines and regulations were not adhered to – should be held to account. If things do go wrong, someone has to pay. That’s the system.

Aziz Rahman of business crime solicitors Rahman Ravelli agrees. While he believes the rise of technology and artificial intelligence does make large changes to the ways we work possible, he thinks that when it comes to AI and fraud, the risks have to be assessed and minimised by companies in exactly the same way as any more conventional threat.

“If we are talking about future situations where the technology is intelligent enough to commit fraud, this possibility has to be recognised and prevented. This means introducing measures that prevent one particular person – or robot in the future situations we are talking about – having the ability to work free from scrutiny. If there is no such scrutiny, the potential for fraud will always be there.”

You could take that further. If there is no scrutiny of AI, there will be chaos and developers of AI will no doubt be the target of any government intervention. There again, who will governments consult over regulation, to get an understanding of AI’s potential and limitations?

Developers, of course. ®

