Robot overlords? Pshaw! I ain't afraid of no AI – researchers

Get a hold of yourself, Elon Musk

Artificial intelligence is all the rage in technology, and as it progresses at a dizzying pace, the industry is in danger of overhyping it, say researchers.

Speaking at a symposium on AI in bioscience, Jérôme Pesenti, an IBM Watson veteran and now head of technology at benevolent.ai, warned the audience to keep a healthy scepticism about developments in AI.

There are great achievements in AI, but in reality the technology is probably “two steps behind PR”, Pesenti said.

Overselling AI can lead to bogus demos and set unrealistic expectations, he added. “People start with the expectation that AI can do everything. But nothing can do everything - even Watson had its limits.”

IBM too often makes wild claims, said Alexander Linden, a machine learning analyst at Gartner. “Marketing is getting ahead of itself. When people say IBM Watson will understand the world, that’s when I start rolling my eyes,” Linden told The Register.

Technology companies often jump onto the next trend without understanding it, just to stay relevant, he said.

Overhyping has both positive and negative effects on the industry. Scaremongering can trigger anxiety and bring disappointment when AI doesn’t live up to its claims. On the other hand, the hype can attract talented researchers to the field, Linden said.

Not so pretty: Tweaked models

Researchers have to be careful about overfitting in machine learning. Tweaking models until the system makes the right predictions on known data should not be passed off as AI.

“It’s like knowing the answers before a test. A real test for AI is not knowing what the questions are going to be. Until you do that, you will never know how the AI performs,” Pesenti told The Register.

Zoubin Ghahramani, a professor of information engineering and a machine learning researcher at the University of Cambridge, said overfitting becomes a problem if researchers are not careful with the data used to train learning models.

Machines learn from data, finding patterns in order to make well-informed predictions. But overfitting is like “people over-generalising from experiences they’ve had. It leads to jumping to conclusions too early.”

“A machine learns from the many parameters it has set for its learning algorithm. The parameters are like knobs, and these are tuned when the machine trains on input data. Machines usually have thousands or millions of parameters. And overfitting is when the machine has too many knobs to tune and is given too few data points,” Ghahramani told The Register.

As a result, the patterns found during training don’t hold in new data, and the model handles unseen examples poorly.
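A minimal sketch of what Ghahramani describes, in Python with a made-up toy dataset (the numbers here are illustrative assumptions, not from any study): give a model more knobs than it has data points to justify, and its near-perfect training score evaporates on fresh data.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_line(n):
    # The true process is a simple line plus a little noise.
    x = rng.uniform(-1, 1, n)
    return x, 2 * x + rng.normal(0, 0.1, n)

x_train, y_train = noisy_line(8)    # very few data points...
x_test, y_test = noisy_line(100)    # ...and plenty of unseen data to test on

for degree in (1, 7):               # 2 knobs vs 8 knobs for 8 points
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: train MSE {train_mse:.4f}, test MSE {test_mse:.4f}")
```

The degree-7 polynomial threads every training point, so its training error is essentially zero, but the wiggles it learned are noise, and its error on the unseen test data is far worse than the straight line’s.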


The risk comes from carelessness: researchers assume the training dataset is large enough, or representative of the situation they are trying to model, when it isn’t.

It becomes an even greater issue when researchers cherry-pick their results, explained Leslie Smith, a professor of computing science at the University of Stirling.

“Where the issue gets harder is when the researcher reports only the best results they got - selecting the training subset and the testing subset so that their results look good, and not reporting the fact that they had to try 100 different divisions of the datasets before they got these results,” Smith said.
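The effect Smith describes is easy to reproduce. In this hypothetical sketch (no real model or dataset, just random labels with no signal in them), simply trying 100 random train/test splits and quoting the best one makes a useless predictor look respectable.

```python
import numpy as np

rng = np.random.default_rng(1)

# Labels are coin flips: no classifier can honestly beat ~50 per cent.
y = rng.integers(0, 2, 60)

def accuracy_on_random_split(y):
    idx = rng.permutation(len(y))
    train, test = idx[:40], idx[40:]
    # Trivial "model": predict the training split's majority class.
    majority = int(y[train].mean() >= 0.5)
    return float(np.mean(y[test] == majority))

scores = [accuracy_on_random_split(y) for _ in range(100)]
print(f"average over 100 splits: {np.mean(scores):.2f}")  # hovers around 0.50
print(f"best single split:       {np.max(scores):.2f}")   # noticeably higher
```

Reporting the average, or a single fixed held-out test set, keeps the result honest; reporting only the luckiest of the 100 divisions is the cherry-picking Smith warns about.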

Tech companies could be better at explaining the uncertainty and the limits of the model in their research, Ghahramani said.
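One simple way to report that uncertainty (a generic sketch, not Ghahramani’s own method or any company’s actual pipeline) is a bootstrap ensemble: train several copies of the model on resampled data and publish how much they disagree, which grows sharply once the model is asked about inputs unlike its training data.

```python
import numpy as np

rng = np.random.default_rng(2)

# Training data only covers x in [-1, 1].
x = rng.uniform(-1, 1, 30)
y = np.sin(3 * x) + rng.normal(0, 0.1, 30)

# Bootstrap ensemble: each member fits a resampled copy of the data.
members = []
for _ in range(50):
    idx = rng.integers(0, len(x), len(x))
    members.append(np.polyfit(x[idx], y[idx], 3))

for query in (0.0, 2.0):  # inside vs far outside the training range
    preds = [np.polyval(c, query) for c in members]
    print(f"x={query}: prediction {np.mean(preds):+.2f} ± {np.std(preds):.2f}")
```

Inside the training range the ensemble members agree; at x=2 they fan out, and that spread is exactly the kind of limits-of-the-model statement the researchers are asking vendors to publish.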

No robot overlords

The main area where AI's abilities are misrepresented is how close it is to human intelligence.

Joanna Bryson, an artificial intelligence researcher at the University of Bath, is often frustrated by warnings about “robo-killers”.

“AI is an incredibly powerful tool and is already changing our lives. It performs better than a human at many things, but it’s nothing like a human. And it’s not going to take over the world,” Bryson told The Reg.

“As humans it’s natural to apply human aspects to things we create. But it’s unlikely that we will build human-like AI. Even from a philosophical aspect, it has too many moral hazards,” Bryson added. ®
