
Hawking, Musk, Woz (and others): Robots will kill us all

'Kalashnikovs of tomorrow' - Yikes

By Lewis Page, 27 Jul 2015

Opinion Notables of the technology world including physicist Stephen Hawking, biz baron Elon Musk and techno-hippy Steve Wozniak have teamed up to warn us all about the menace of killer robots.

In an open letter and petition, the distinguished trio and their co-signatories warn:

Autonomous weapons select and engage targets without human intervention. They might include, for example, armed quadcopters that can search for and eliminate people meeting certain pre-defined criteria, but do not include cruise missiles or remotely piloted drones for which humans make all targeting decisions ... autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms ...

If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow. Unlike nuclear weapons, they require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce. It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing, etc. Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group.

A Tomahawk cruise missile in flight, as launched from Royal Navy submarines. Credit: Crown Copyright/Royal Navy

Oh noes! An autonomous weapon! Which went into service in the 1980s!

The signatories already include eminent physicist Hawking, renowned early Appler Woz, and tech visionary Musk - who has various trendy green businesses such as Tesla and SolarCity, but is better known as chief of applecart-busting rocket firm SpaceX. There are also many on the list from the worlds of academic AI research, and various names from outside the field.

Probably the most famous non-techy name is that of actress Talulah Riley, noted for her turns in Inception (as attractive imaginary minx "Blonde Woman") and the recent St Trinian's reboot (as saucy schoolgirl minx "Annabelle Fritton"). A clue regarding her appearance as the sole representative of the luvvie world on the list may lie in the fact that she has been married to Musk twice, and they are said to be on good terms despite having got divorced for the second time last year.

Comment

As we've pointed out on these pages before, the trendy ongoing campaign to stop the autonomous weapons before they get built is too late: they have been in service for decades. The signatories fondly believe that cruise missiles - which are simply robot aircraft, usually turbojet planes rather than quadcopters, but aircraft for all that - are under human control when selecting a target, but they aren't.

A cruise missile, such as a standard Tomahawk or Storm Shadow/SCALP, is autonomous from the moment it is launched. It flies to a location where its target is thought to be, but it does not simply crash on that location: it takes a digital picture of the scene and decides whether something that looks like a legitimate target is in the picture.

If the missile's software decides there is such a something, the target is struck - and one Tomahawk, equipped with many canisters of munitions which can be deployed separately, can attack multiple targets at different locations.
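The "look at the scene, decide whether a target is in the picture" step can be sketched as template matching via normalized cross-correlation — a toy illustration only, with made-up function names and data, not the actual guidance software:

```python
import numpy as np

def normalized_cross_correlation(scene, template):
    """Slide the template over the scene and return a correlation map.

    Each entry is the normalized correlation between the template and the
    scene patch at that offset, in [-1, 1]; 1.0 means a perfect match.
    """
    th, tw = template.shape
    sh, sw = scene.shape
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    out = np.full((sh - th + 1, sw - tw + 1), -1.0)
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            patch = scene[y:y + th, x:x + tw]
            p = patch - patch.mean()
            p_norm = np.sqrt((p ** 2).sum())
            if p_norm > 0 and t_norm > 0:
                out[y, x] = (t * p).sum() / (t_norm * p_norm)
    return out

def find_target(scene, template, threshold=0.9):
    """Return the best-match offset if its correlation clears the threshold,
    else None (no strike decision)."""
    corr = normalized_cross_correlation(scene, template)
    idx = np.unravel_index(np.argmax(corr), corr.shape)
    if corr[idx] >= threshold:
        return (int(idx[0]), int(idx[1]))
    return None

# Toy demo: a 10x10 "scene" containing a 3x3 bright cross at offset (4, 5).
scene = np.zeros((10, 10))
cross = np.array([[0., 1., 0.],
                  [1., 1., 1.],
                  [0., 1., 0.]])
scene[4:7, 5:8] = cross
print(find_target(scene, cross))  # prints (4, 5)
```

Real terminal guidance is vastly more robust than this (handling lighting, perspective, clutter and countermeasures), but the core idea — correlate what the camera sees against a stored reference and commit only above a confidence threshold — is the same.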

The Tomahawk is also being upgraded to strike moving targets, which - as with stationary targets today - it will identify as being legitimate targets on its own. It has always had to be able to choose its targets autonomously, as it will typically be up to 1,000 miles from its launcher and out of contact with the humans who fired it.

There's nothing magic about humans, especially if you're looking in radar or IR. And making Kalashnikovs is harder than you think. And hey - ever heard of the Wassenaar Arrangement?

The ability to autonomously identify and strike moving targets is not new, either. It has been built into various anti-shipping, land-attack and anti-tank missiles for several decades, necessarily so as they are often designed to be launched at targets beyond the horizon. Some of these missiles are already routinely used for targeted assassinations, in fact, especially the US Hellfire (nominally an "anti-tank" job).

Corporate art of an armed Warrior-Alpha

The drone isn't autonomous. But the missiles certainly can be.

Various other anti-shipping, anti-tank, defence suppression and cruise missiles all select their targets autonomously by looking at various kinds of sensor input; often active or passive radar, often visual or IR imagery. Sometimes they are under human control, or are sic'd onto a particular imagery blob by a human operator to begin with, but it's hard to see why a human is going to be any better than a computer at telling which radar/IR blob is an ambulance and which is an enemy tank or a "technical" (armed pickup truck).

In a lot of cases, even when using visual wavelengths, the human will make as many mistakes as a machine, or will not have any margin of superiority that truly justifies putting him aboard the aircraft (or even the bandwidth needed to put him there virtually, often enough).

No: the autonomous weapon is here already. Major military powers have already pushed ahead with such things, for a long time.

So why aren't they the Kalashnikovs of today? Why aren't they ubiquitous and cheap for all significant military powers to produce?

Well, there are several things to note here. Firstly, though the Kalashnikov is very simple and cheap to make, that is because it is an excellent design - not particularly as a weapon, though it is very adequate in that regard, but for ease of manufacture. Developing such a design is actually nowhere near as simple as people think; witness the many much more expensive and inferior weapons designed since. The Soviets supplied that design free of charge to pretty much anyone who wanted it during the Cold War, meaning that the hard part was done for them.

Secondly, even with the hard part done, it is not all that simple to set up a factory that makes working Kalashnikovs cheaply. Major industrial powers can do it easily, but others can't. Venezuela, for instance, still doesn't have its Kalashnikov factory despite having decided to set one up ten years ago. Just having the AKM blueprints isn't enough; you also need a whole lot of Russian expertise.

Kalashnikovs are everywhere because major industrial powers (mainly the former Soviet bloc and China) made millions of them and shipped them everywhere, not because they are cheap and simple for just any country to make - though they are indeed cheap and simple to make, as assault rifles go.

Autonomous weapons are a good bit harder than Kalashnikovs to build: useful ones are mostly functional aircraft or guided rockets of one kind or another, and not poxy little battery-powered quadcopters with twenty-minute endurance and almost no payload, either. Major industrial nations on the level of Israel and above can already make them, and often do.

But they aren't making huge numbers of them, and they aren't selling them cheaply and/or giving them away in huge numbers to people in the Third World. Why not?

Because there is already an international pact which heavily restricts trade in such things - indeed, in all advanced technology with a use in weapons. It is called the Wassenaar Arrangement, and it has been around for twenty years, making sure that "terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing, etc" don't get hold of any kind of advanced weaponry, "autonomous" or not.

So in fact the autonomous weapons are already here. And there is already a perfectly effective international ban on most governments and people having them.

And in fact "assassinations, destabilization of nations, subdual of populations and selective killings of a particular ethnic group" happen anyway and you don't even need Kalashnikovs - machetes work just fine.

So perhaps the problem isn't really the robots, killer or not.

Perhaps these are actually human problems. ®

The Register - Independent news and views for the tech community. Part of Situation Publishing