We're all saved. From the killer AI. We can live. Thanks to the IEEE

Rest assured, your personal software agent has been tested for loyalty

Amid renewed calls to regulate AI before it wipes humanity from the planet, the Institute of Electrical and Electronics Engineers (IEEE) has rolled out a standards project to guide how AI agents handle data, part of a broader effort to ensure AI will act ethically.

Elon Musk, CEO of Tesla and a few other companies, over the weekend repeated his belief that AI represents "a fundamental existential risk for human civilization" and should be regulated, only to see his premise undermined the next day by a hapless security robot tumbling into a fountain.

Then on Tuesday, at a hearing on his reconfirmation as vice chairman of the Joint Chiefs of Staff, Gen. Paul Selva told the Senate Armed Services Committee that Department of Defense rules requiring human oversight of automated systems capable of killing should be renewed.

"I don’t think it’s reasonable for us to put robots in charge of whether or not we take a human life," Selva said, even as he promised a raucous debate among military leaders about the policy. Adversaries, he suggested, may attempt to make their future AI weapons more salable by omitting controls.

Personalized AI agents may be less troubling than algorithmically driven killing machines, but they're part of the same continuum: software that makes decisions of consequence.

"With the advent and rise of AI there is a risk that machine-to-machine decisions will be made with black-box inputs determined without input transparency to humans," the IEEE explains in its announcement.

It appears we have reached a tipping point. With semi-autonomous cars hitting the roads, automation affecting jobs, and AI assistants finding a place in many households, the consequences of algorithms have become matters of broad concern.

The Dark Side

The technically inclined have fretted over the dark side of automation and technology for as long as technology has been a thing. Isaac Asimov's Three Laws of Robotics offer a relatively recent example of such fears; Mary Shelley's Frankenstein explored similar territory more than a century earlier. Now, legislators are being asked to turn worry into law.

Industry-driven guidance may not amount to much. Witness the ineffectiveness of Do Not Track.

However, John C. Havens, executive director of the IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems, contends that the group's focus on transparency and consensus will encourage the adoption of the IEEE's recommendations.

"The industry at large is now almost universally recommending ethical or responsible design for AI," he said in an email to The Register, pointing to the Partnership on AI as an example. "Organizations not adopting principles along these lines risk public, customer and stakeholder scrutiny by not provably aligning their AI design and manufacturing with specific principles... Internal ethics boards don't hold water any longer in the public's eyes [as far as AI is concerned]. As a community we owe it to users and the general public to demonstrate best practices to adopt the types of principles we've created."

Havens said the IEEE P7000 standards aim to allow organizations to demonstrate that their products meet a high ethical standard.

"Today much of tech is produced focusing on the increase of exponential growth, only utilizing financial metrics as indicators of success," said Havens. "But environmental and societal issues including mental and emotional health issues are of paramount importance ... to ensure these amazing technologies proliferate in ways that are evenly distributed to all." ®
