Teach undergrads ethics to ensure future AI is safe – compsci boffins

Read sci-fi, kids! Save the world from killer robots!

Universities should step up efforts to educate students about AI ethics, according to a panel of experts speaking at the AAAI conference in San Francisco on Monday.

Machine learning is constantly advancing as new algorithms are developed, and as hardware to accelerate computations improves. As the capabilities of AI systems increase, so do fears that the technology will be abused to trample on people's privacy and other rights.

Sure, there are magazines and blogs full of academics wringing their hands about seemingly impossible conscious computers wrestling with moral dilemmas. Before we get to that point in AI development, though, there are still modern-day practical problems to consider. Say, when a program decides which medication you should take, shouldn't you be able to pick apart how it came to that conclusion? What if the prescription is based on a paid-for bias in the model in favor of a particular pharmaceutical giant?

When a machine harms a person, who is at fault? How do you, as an engineer, design your system so that a machine doesn't hurt or cause damage?

Several groups, such as the Partnership on AI and The Ethics and Governance of Artificial Intelligence Fund, have sprung up to try to keep tech in check. More directly, though, undergrads should be made aware of the moral and ethical issues surrounding technology, and good practices should be drilled into the next generation of engineers, the conference was told.

Robots are particularly worrying. It’s already difficult to explain decisions made by algorithms, but when they are applied to physical machines capable of directly affecting the environment, it’s no wonder that alarm bells are ringing.

More and more robots and AI systems are functioning as members of society, said Ben Kuipers, a professor of computer science and engineering at the University of Michigan.

“We worry about robot behavior," he told the audience. "With no sense of what’s appropriate, and what’s not, they may do great harm.” Prof Kuipers cited the example of Robot from the sci-fi comedy flick Robot & Frank, who willingly lies and breaks the law in pursuit of its goals.

Even if the robot’s missions are “human-given top-level goals,” it will create subgoals and execute them in unexpected ways to fulfill its main task. To design robots to be trustworthy, a solid grounding in engineering is not enough – philosophy is needed.

Prof Kuipers pointed to the theories of utilitarianism, deontology, and virtue ethics as sources of useful clues for building ethical machines.

Illah Nourbakhsh, a professor of robotics at Carnegie Mellon University, agreed. On his online robotics and ethics teaching guide, he wrote: “First, students need access to formal ethical frameworks that they can use to study and evaluate ethical consequence in robotics well enough to make their own well-informed decisions. Second, students need to understand the downstream impact of media-making well enough to help the field as a whole communicate with the public authentically and effectively about robotics and its ramifications on society.”

But rigid ethical frameworks aren’t always the best way to model moral problems in AI, Judy Goldsmith, a professor of computer science at the University of Kentucky, told the audience.

“Case studies are rarely memorable, emotionally gripping or subtle. There is no character development and often there’s a right answer,” she said. Prof Goldsmith prefers science fiction because it provides a “rich vein for ethical dilemmas” and because an “emotional connection to stories makes discussions memorable when real-world dilemmas arise.” ®
