We Have to Develop Scalable Methods for AI Control So It Remains Aligned With Human Values - Prof.

As artificial intelligence becomes more capable and more integrated into our lives, teaching machines about human ethics is an important task, according to Nick Bostrom, an Oxford University professor and the head of the Future of Humanity Institute, which is also home to the Center for the Governance of Artificial Intelligence.

Professor Bostrom spoke about AI and its safety on the sidelines of Russia's main event in the field of technological entrepreneurship, the annual Open Innovations forum, held by the Skolkovo Innovation Center.

Sputnik: In what area can we expect to see “unicorns” in the future?

Nick Bostrom: They can crop up in any part of the economy, really, as long as the sector is large enough that new innovations and new ideas could become successful enough to be worth a billion.

Sputnik: What areas in particular?

Nick Bostrom: I think the highly salient areas are the big, ambitious tech ventures on the Internet. But a lot of other parts of the economy are also quite large and have less of a public profile, which might mean there's less competition there. If all the talented people want to build the next Google or something, maybe that leaves big sectors of the economy ripe for a talented person to come in and start something.

Sputnik: What industries and businesses will lead the economy in 10, 20, or 30 years? Which technologies are worth investing in?

Nick Bostrom: There are obvious trends: more and more of our lives are online, and the digital is constituting a larger and larger fraction of our world. More and more things that we traditionally don't think of as digital industries, like clothing retail, are becoming integrated with that, and I think this trend will continue to unfold; but then it becomes a very broad colouration of the whole economy rather than a distinctive thing. So, if you just look at the different places where people are spending money, like housing, transportation, tourism, food, entertainment, and so on, I think in each of those areas there will be room for innovation and entrepreneurship, using tools from artificial intelligence and machine learning to create new ways of delivering value to the consumer.


Sputnik: If humanity does not develop methods to protect itself against artificial intelligence, what could the negative consequences be? And how soon should they be expected?

Nick Bostrom: I think the risks depend a little bit on the timeline that we're talking about; there are near-term issues and then longer-term issues. One near-term issue has to do with how these techniques affect social and political dynamics. If you have these big digital platforms, whether it's Facebook or Google, or big surveillance systems being developed, how will that affect what information people see, and how will it affect politics? It might be for the better, but we can't really predict it, so there's also a possibility that it could be for the worse. There is also the rise of autonomous weaponry. Drones with facial recognition capabilities, I think, are going to emerge as an issue. Initially they may be developed by militaries, but you can imagine them spreading to terrorists, criminals and so forth, making it very easy to target people remotely.

I think longer-term there is also the question of how we can ensure that AIs, as they become more and more intelligent, remain aligned with human values and human intentions, so we have to develop scalable methods for AI control. This is a technical research challenge, but ultimately we would want to be able to build super-intelligent machines that are still on our side, that still faithfully carry out things on our behalf or execute our intentions. So there are some open technical research problems there that will need to be solved.

Sputnik: What are you currently working on in terms of security for artificial intelligence?

Nick Bostrom: My research group has an interest in several of these areas. We have one team that is working on the governance of AI and ethical issues, and another that is doing this technical AI alignment work; we are also interested in some non-AI areas, such as biosecurity. Myself, right now, I'm actually thinking about the moral and political status of digital minds. Most of us think that animals have a degree of moral status: it's wrong to kick a dog, because it hurts the dog. So similarly, if you start to get artificial intelligence, digital minds with similar kinds of capabilities and mental attributes as animals, maybe they too deserve some degree of moral status, and the more sophisticated they become, maybe those levels of protection need to increase. This is still very much a neglected topic, but at some point we will need to take it more seriously: not just how AI can affect us humans, or what we might do to each other with AI, but also how we affect AI itself.


Sputnik: Talking about security for artificial intelligence, what specific methods of protection against artificial intelligence are being developed? Which are the most promising?

Nick Bostrom: I think this is a longer-term problem, but there is now research being done on it in a number of different places. My institute is doing joint research with DeepMind and OpenAI, an American group that's doing a lot of work in this, as are a number of universities. There are a number of ideas. Some focus on tools for increasing transparency, so you can better understand and see what is going on inside an AI and understand its representations. Others focus on leveraging the AI's ability to learn, so that it also learns to build better models of human preferences. So instead of being in a situation where we would have to specify explicitly all the things we care about and all the relevant trade-offs, we could instead have an AI that models our preferences by looking at our behaviour and then figures out what our values are. A bunch of other interesting recent technical work in relevant areas is happening as well. The field hardly existed five years ago when I wrote my book, and it has since become an active research field, but there is still room for a lot more growth. If there are extremely talented people with a knack for mathematics or computer science, we could still use more research talent in this area.
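The preference-modelling idea Bostrom describes, inferring what people value from their observed choices rather than specifying it by hand, is studied in the research literature as preference-based reward learning. As a rough, hypothetical sketch (not code from FHI, DeepMind or OpenAI), the example below fits a simple linear reward model to simulated pairwise human choices under a Bradley-Terry choice model; the feature encoding and all names are illustrative assumptions.

```python
# A minimal sketch of preference-based reward learning, not any group's
# actual method: fit a model of human preferences from observed choices.
# A linear "reward" model is learned from pairwise comparisons via the
# Bradley-Terry likelihood and plain gradient ascent. All names and the
# 3-feature encoding of options are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

# Hidden "true" preference weights, used only to simulate the human.
true_w = np.array([2.0, -1.0, 0.5])

def simulate_comparison(x_a, x_b):
    """Return True if the simulated human prefers option A over option B."""
    # Bradley-Terry: P(A preferred) = sigmoid(reward(A) - reward(B))
    p_a = 1.0 / (1.0 + np.exp(-(x_a - x_b) @ true_w))
    return rng.random() < p_a

# Collect a dataset of preference comparisons over random option pairs.
pairs = [(rng.normal(size=3), rng.normal(size=3)) for _ in range(2000)]
labels = np.array([simulate_comparison(a, b) for a, b in pairs], dtype=float)
diffs = np.array([a - b for a, b in pairs])

# Fit weights w by maximizing the log-likelihood of the observed choices.
w = np.zeros(3)
lr = 0.1
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-diffs @ w))          # predicted P(A preferred)
    grad = diffs.T @ (labels - p) / len(labels)   # gradient of mean log-likelihood
    w += lr * grad

print("recovered weights:", np.round(w, 2))
print("true weights:     ", true_w)
```

The design point the toy example illustrates is the one Bostrom makes: the system is never handed an explicit list of values and trade-offs, only observations of which option the human preferred, and the value model is recovered from that behaviour.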

Sputnik: If we talk about the ethical aspect of developing artificial intelligence, in your opinion, how realistic and how necessary is it to create a unified global regulatory framework for the security of artificial intelligence?

Nick Bostrom: It's a difficult question. In the short term I don't think there's a realistic prospect of a global regulatory framework; the best we can hope for in the short term is perhaps an overlapping consensus on the basic ethical principles that we should take into account, and there has been some work on that. Then, over time, something more concrete may grow out of that. But I think you have to start at the most general level, thinking about what ideals and values we can agree on, and then that can be transformed into more tangible agreements, into policy and regulation, down the road. I also think it would be premature to do that in most areas of AI today, especially in basic research, because we don't yet understand the problems well enough to want heavy-handed regulation; down the road, I think there will be more of a role for that.
