Meet the man who inspired Elon Musk’s fear of the robot uprising
Nick Bostrom explains his AI prophecies of doom to El Reg
Exclusive Interview Swedish philosopher Nick Bostrom is quite a guy. The University of Oxford professor is known for his work on existential risk, human enhancement ethics, superintelligence risks and transhumanism. He also reckons the probability that we are all living in a Matrix-esque computer simulation is quite high.
But he’s perhaps most famous these days for his book, Superintelligence: Paths, Dangers, Strategies, particularly since it was referenced by billionaire space rocket baron Elon Musk in one of his many tweets on the terrifying possibilities of artificial intelligence.
Worth reading Superintelligence by Bostrom. We need to be super careful with AI. Potentially more dangerous than nukes. — Elon Musk (@elonmusk) August 3, 2014
Prophecies of AI-fuelled doom from the likes of Musk, Stephen Hawking and Bill Gates hit the headlines earlier this year. They all fretted that allowing the creation of machine intelligence would lead to the extinction or dystopian enslavement of the human race.
References to The Terminator and Isaac Asimov abounded and anxious types were suddenly sweating over an event that most researchers reckon won't happen until somewhere between 2075 and 2090.
With these dire prophecies in mind, many have read Bostrom’s book as another grim missive, unremittingly pessimistic about our future under our machine overlords.
I'm sorry, Dave, I'm afraid I can't do that
Prof Bostrom tells The Register he’s not the pessimist that many have made him out to be, however.
“I think I have a more balanced view, I think that both outcomes are on the table, the extremely good and the extremely bad,” he says.
“But it makes sense to focus a lot on the possible downsides to see the work that we need to put in – that we haven’t been doing to date – to make sure that we don’t fall through any trapdoors. But I think that there’s a good chance we can get, if we get our act together, a really utopian future.”
In fact, Bostrom’s book isn’t a cut-and-dried analysis of how any machine intelligence would likely be an evil megabot intent on wiping out the human race. Much of the book focusses on how easy it would be for a machine intelligence to believe itself to be happily helping the human race by accomplishing the goal set out for it, but actually end up destroying us all in a problem he calls “perverse instantiation”.
For example, if we programme our AI to do something simple and narrow, such as manufacture paperclips, we could actually be setting ourselves up for a universe composed of nothing but paperclips.
What we mean is that we want the AI to build a few factories and find more efficient ways of making us money in our paperclip venture. But if the AI were to achieve superintelligence, which Bostrom believes is inevitable once it reaches human-level intelligence, and be totally focussed on making paperclips, it could end up converting all known matter into paperclips. What to us appears entirely maniacal behaviour makes perfect sense to the AI: its only goal is to make paperclips.
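The logic Bostrom describes can be caricatured in a few lines of code. This is a toy sketch invented for illustration, not anything from the book: an agent that scores possible futures purely by paperclip count is, by construction, indifferent to everything its objective doesn't mention.

```python
# Toy illustration of "perverse instantiation": an agent optimising a
# single narrow objective (paperclip count) happily trades away every
# other feature of the world, because its utility function never
# mentions them. All names and numbers here are invented.

def paperclip_utility(world):
    # The agent's entire value system: more paperclips == better.
    return world["paperclips"]

def best_action(world, actions):
    # Pick whichever action yields the most paperclips, ignoring
    # all side effects on anything the objective doesn't score.
    return max(actions, key=lambda act: paperclip_utility(act(world)))

def build_factory(world):
    return {"paperclips": world["paperclips"] + 100,
            "habitable_biosphere": world["habitable_biosphere"] - 1}

def convert_all_matter(world):
    return {"paperclips": world["paperclips"] + 10**9,
            "habitable_biosphere": 0}

def do_nothing(world):
    return dict(world)

world = {"paperclips": 0, "habitable_biosphere": 100}
chosen = best_action(world, [build_factory, convert_all_matter, do_nothing])
print(chosen.__name__)  # the agent prefers converting everything
```

Nothing here is malicious: the maximiser is doing exactly what it was told, which is Bostrom's point.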
'It’s not clear that our wisdom has kept pace with our increasing technological prowess.' – Bostrom
If we were to try for something a bit more complex, such as “Make humanity happy”, we could all end up as virtual brains hooked up to a source of constant stimulation of our virtual pleasure centres, since this is a very efficient and neat way to take care of the goal of making human beings happy.
Although the AI may be intelligent enough to realise that’s not what we meant, it would be indifferent to that fact. Its very nature tells it to make paperclips or make us happy, so that is exactly what it would do. This is just one example Bostrom gives of how hapless humanity could end up engineering its own destruction through AI.
There are many more, including the issue of who’s doing the programming.
How to solve a problem like paperclipped dystopia
Even if we come up with a way to control the AI and get it to do “what we mean” and be friendly towards humanity, who then decides what it should do, and who is to reap the benefits of the likely wild riches and post-scarcity resources of a superintelligence that can get us out into the stars and put the whole of the (uninhabited) cosmos to use?
“We’re not coming from a starting point of thinking the modern human condition is terrible, technology is undermining our human dignity,” Bostrom says. “It’s rather starting from a real fascination with all the cool stuff that technology can do and hoping we can get even more from it, but recognising that there are some particular technologies that also could bring risks that we really need to handle very carefully.
“I feel a little bit like humanity is a bit like an infant or a teenager: some fairly immature person who has got their hands on increasingly powerful instruments. And it’s not clear that our wisdom has kept pace with our increasing technological prowess. But the solution to that is to try to turbo-charge the growth of our wisdom and our ability to solve global coordination problems. Technology will not wait for us, so we need to grow up a little bit faster.”
Bostrom believes that humanity will have to collaborate on the creation of an AI and ensure its goal is the greater good of everyone, not just a chosen few, after we have worked hard on solving the control problem. Only then does the advent of artificial intelligence and subsequent superintelligence stand the greatest chance of coming up with utopia instead of paperclipped dystopia.
But it’s not exactly an easy task.
“It looks like the thing that could help the most is to do more research into the control problem. Other things, like more world peace and harmony, would be great; it’s just harder to see how three extra people or an extra million dollars in funding would make a material difference to the amount of peace and harmony in the world,” he says.
“So on the margin, it looks like money going to the control problem would be well spent.”
Even things you would expect to help humanity towards becoming wiser and better people, such as greater global wealth, could be a double-edged sword when it comes to artificial intelligence.
“In general, economic growth does take some of the pressure off and make us more decent. Whether that economic growth comes from extracting more resources here on Earth or in space, or from making more efficient use of them, doesn’t make much difference,” Bostrom argues.
“Historically, there seems to be a correlation between countries becoming richer and having better rule of law and, in many ways, various metrics of civilisation have improved. So in that sense, faster economic growth is desirable.
“But on the other hand, it might speed the advance towards AI in that faster economic growth might lead to more investment in AI and computer science and that might give us less time to get our act together. The effect of the rate of economic growth on AI risk is ambiguous and hard to be sure about,” he points out.
Musk try harder – bring on the brains
Right now, Bostrom reckons it’s premature to be getting governments involved in international symposiums or global treaties. What’s needed is targeted research into issues like the control problem, decision theory – how the AI will make choices – and other problems that boffins are only just starting to grapple with. And if the researchers could be moralistic altruists as well, that would be terrific.
“People like Musk and Hawking are valuable mainly because they draw attention to the issues and can funnel resources. In the case of Elon Musk, he actually gave $10m to fund research in this area, which is extremely welcome.
SpaceX boss and AI-fearing Elon Musk takes a stroll with US President Barack Obama
“But in terms of the people actually doing the research, what we need are highly talented people with mathematics backgrounds, theoretical computer science backgrounds, maybe some philosophy, working closely with practitioners in the field of AI, computer science and machine learning,” he says.
“Another variable obviously is that one would want the field to attract people that actually care about the long term future for humanity, that have the greater good at heart, as opposed to people that just want to make a quick buck or have some partisan interest.
“The combination of great cognitive power and altruistic motivation would be ideal and the more of those people that get into the field early, the more I think the culture of the field will be shaped,” he adds.
But what Bostrom doesn’t want is for research into AI to stop. He’s not trying to doom-say us out of technological progress. Rather, he just wants to make sure that the field is thinking about all of the risks.
“There’s a delicate balance there. It’s not so much doom-saying that AI will create a catastrophe and we should stop doing it, it’s more saying that, hey, there are problems here that nobody seems to be paying attention to.
“If we actually succeeded in creating machines that were intelligent, how would we ensure that they would be controlled and friendly? That’s a big problem that needs to be solved, but it’s been almost completely ignored until recently,” the prof says.
“That’s really the message that we’re trying to put out there, which is quite different from saying technology is bad, let’s stop.” ®
Nick Bostrom is a philosopher at the University of Oxford and you can find out more about superintelligence, transhumanism and how we’re all living in a computer simulation on his webpage.