Thursday, September 20, 2018
The emergence of artificial intelligence in society has elicited visceral reactions from people the world over, many of whom, thanks to portrayals in popular culture, can’t quite decide whether they believe we are building the future – or destroying it. Are we actually dealing with “killer robots?” Why has the public perception become so polarizing? Can we trust algorithms to make appropriate and trustworthy decisions, or do we risk too much by turning power over to the robots? Professor Ron Arkin, an expert in robotics and roboethics, joins the podcast to discuss.
Ayanna Howard: The emergence of artificial intelligence in society has elicited visceral reactions from people the world over, many of whom, thanks to portrayals in popular culture, can’t quite decide whether they believe we are building the future or destroying it. While we feel relatively certain the killer robots of the Terminator universe aren’t coming to get us – at least, anytime soon – there are distinct ethical uncertainties we must address now before empowering these autonomous systems to carry out the life-and-death scenarios of military operations.
That’s an emphasis of the research of School of Interactive Computing Professor Ron Arkin, who will help us unpack some pretty heavy questions. Are we actually dealing with “killer robots?” What is the public perception about them, and why? Can we trust machine learning algorithms to make appropriate decisions in real time? And how much power should we turn over to these robots?
I’m Ayanna Howard, Chair of the School of Interactive Computing, and this is the Interaction Hour.
We appreciate Professor Ron Arkin joining us again for a deeper discussion of ethics in robotics. It’s a sneak preview of the panel he and I will join in front of national media and Congress on September 25 in Washington, D.C.
Thanks again for joining us.
Ron Arkin: My pleasure, Ayanna.
Ayanna Howard: Alright, so let’s get it out of the way: killer robots, are they really coming to get us?
Ron Arkin: Not to get us. But is the technology actually being developed that can be used in the battlefield to win wars or to reduce non-combatant casualties? The answer is “yes” to that particular aspect of “are they coming.” But let’s also not use the term “killer robots” if we can avoid it. “Killer robots” actually came out of a paper by a colleague of mine from Australia, Robert Sparrow, when he started talking about killer robots. It was latched onto by the press and many non-governmental organizations, or NGOs, as I’ll refer to them for the rest of this discussion. The United Nations, in its discussions on this particular topic in Geneva, has referred to them as lethal autonomous weapons systems, or LAWS for short.
To me, that’s a more appropriate term to use because it doesn’t bring up, as you said, the visceral response or the horrors of the science fiction movies that we’ve seen in the past. And it enables us, as people discussing ethics, to consider logically and, in some ways, dispassionately what is right and what is wrong with this particular type of technology.
Ayanna Howard: So, “killer robots,” you’re saying, sounds fearful – “Ah! What’s going on?” But at the end of the day, it’s about our fear. So what is this fear we have of robots – even of lethal autonomous weapons systems, which is basically a gentler way of saying “robots with guns – killer robots”? Why does this make us terrified?
Ron Arkin: Well, one could go back to the actual origin of the word “robot,” which came from Karel Čapek. He wrote a play called Rossum’s Universal Robots in 1920 or thereabouts. “Robot” came from a Czech word for forced labor – a slave, or worker. That play, although the robots weren’t armed with weapons, didn’t turn out well for humanity. Spoiler alert if you ever want to see the play – which I don’t recommend, actually – but if you ever want to understand the history of robots: everybody dies at the end. They kill everybody. And this is what our western culture, from Frankenstein and other things, has learned to expect from these artificial creatures that we’re creating. I love science fiction as much as the next person, perhaps even more, but we also have to recognize that it is fiction based on science, and we have to be careful about how we move forward.
We have to understand the risks associated with these things, but there are also potential benefits from the use of different forms of technology, including robots. What the world is deciding right now, through the United Nations and other venues of the ongoing discussion, is what we should do. What is the right thing to do? As you might imagine, just as with abortion or capital punishment or any major issue, there is a diversity of opinions. Sooner or later, someone is going to have to decide whether we ban this technology, whether we regulate this technology, or whether we are simply laissez-faire and let happen what happens. Hopefully not the latter.
Ayanna Howard: Okay, so, let’s talk about this. We’re trying to figure out what we should do with – we’ll call it lethal autonomy. So we’ll use “lethal autonomy” instead of “killer robots” for the duration. What are the ethics of this lethal autonomy? I mean, people have been fighting wars with each other for eons and eons. So what are the ethics of what we’re doing when we’re thinking about robots in this space?
Ron Arkin: Well, the ethical discussions surrounding warfare have been going on for thousands and thousands of years, with codification since the 1800s in the Geneva Conventions and the Hague Conventions, as to the oxymoronic question of what is the legal way to kill each other in the battlefield? What is the appropriate way? A pacifist would argue – and I encourage pacifists to keep arguing this – that killing each other is wrong, period. And I’m sympathetic to that particular point of view. But since time immemorial, going back to the very beginnings of history, we can see that human beings have engaged in warfare in a variety of different ways – engaging in genocide, homicide, infanticide, all sorts of horrific things throughout history.
Over the last couple of hundred years, we have started to put international law in place – as I mentioned, it’s referred to as IHL, or international humanitarian law – as a means to regulate the right ways to conduct warfare, with the aim of minimizing harm to non-combatants and non-combatant property. That’s where my particular concern comes in. I am deeply concerned with how we can find ways to reduce the slaughter of innocents in the battle space, and how we can do that effectively using technology. By using machines that could potentially outperform human beings with respect to enforcing international humanitarian law, I argue, that’s one way in which we should continue to explore the use of this technology in current warfare scenarios.
Ayanna Howard: So, basically, you’re saying robots could enable a humane war. Maybe an oxymoron, but –
Ron Arkin: Well, one could say “humane war,” but that is an oxymoron – I agree with you on that. But it could potentially produce better outcomes than we are currently getting with existing human warfighters, who are subject to all kinds of stresses and strains – even more so now, because technology has increased the tempo of the battlefield more than it ever has. How can we reduce, or eliminate, the anger, fear, and frustration associated with human warfighters in the battle space? How can we ensure that they don’t suffer from certain cognitive problems, like scenario fulfillment, which led to the downing of an Iranian airliner many years ago due to problems of operator perspective in these kinds of systems?
There are all kinds of things that go wrong – even just carelessness and accidents, let alone the atrocities that human warfighters commit. Could we do better? I’m not saying that we necessarily could, but I am saying that it is an important and valid research question that we have to explore, because the stakes are so high. The slaughter of innocent non-combatants goes on daily in the battle spaces of the world, and people are not paying adequate attention to it. Technology can, must, and should make a difference in the ways in which we engage each other in warfare to minimize civilian casualties.
If the discussion leads to the point where people argue that we should ultimately ban lethal autonomous weapons in the battlefield, I’ve always said that I’m not averse to that. But that doesn’t mean we should ignore the problems that exist within the battle space. More people need to look at ways to reduce civilian casualties. I would just charge those who say we should simply ban this technology: we still need to find ways to reduce civilian casualties, so tell me what you’re going to do.
Ayanna Howard: So, basically, the social aspects of war and the emotional reactions that soldiers might have – we can mitigate those a little bit. If I think about banning, that’s the extreme, versus really thinking about the technology and designing it so that it takes into consideration the pros and cons of any decision. How do we ensure that our machines can do that?
Typically, if you think about it, we would probably use our soldiers and our experts to train these machines based on, I guess, conventions of war and things like that. How do we make sure they’re not biased in any way?
Ron Arkin: Bias is certainly one aspect. But let me back up just a little bit to give you a broader perspective. It should be recognized that these systems will never, in my mind, fully replace human warfighters in the battle space. The key is using the strengths of both – autonomous systems and human decision making – as appropriate. Keeping humans in the decision process at an appropriate level is important, where they can make rational and appropriate decisions. Where humans have difficulty is where these systems should be used.
There’s something referred to as bounded morality, which means that we’re not trying to encode the entire Geneva Conventions and all of human moral reasoning into these platforms. That’s not possible in the near to mid future, I would contend. But we can put within these systems the morality required for very localized situational awareness in certain circumstances – such as a building-clearing operation, a counter-sniper operation, or operations in a demilitarized zone – which don’t require everything a human being would bring to bear on those higher-level decisions.
But we can train them in the same sorts of ways that we train humans. The worst thing you could do is give a human soldier a gun and say, “Go out in the battlefield and figure out what’s right and what’s wrong.” We give them instruction. We formally teach them what the Geneva Conventions are and what they’re allowed to do. We provide them with rules of engagement that say these are acceptable ways to use force and these are unacceptable ways to use lethal force. And robots can take those same rules and laws in these narrow, bounded sets of circumstances – if we get it right, and that’s still a big if – and potentially create better outcomes. That’s the point I’m trying to make.
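[Editor’s note: To make the idea of bounded morality concrete, here is a minimal, hypothetical sketch in Python of a rule-based constraint check in the spirit of what Professor Arkin describes: a proposed action is vetoed unless every encoded constraint is satisfied. All names, rules, and thresholds are illustrative assumptions, not an actual fielded system or a real encoding of the laws of war.]

```python
# A toy sketch of "bounded morality": a default-deny, rule-based check that
# vetoes a proposed lethal action unless every encoded constraint holds.
# All fields, rules, and thresholds are hypothetical illustrations only.

from dataclasses import dataclass

@dataclass
class ProposedAction:
    target_is_verified_combatant: bool    # positive identification required
    protected_site_in_blast_radius: bool  # e.g., hospital, school, place of worship
    expected_collateral_harm: float       # estimated harm to non-combatants (0-1)
    military_necessity: float             # estimated value of the objective (0-1)

def permitted(action: ProposedAction, proportionality_threshold: float = 0.2) -> bool:
    """Return True only if the action passes every encoded constraint.
    A single violated constraint vetoes the action (default-deny)."""
    if not action.target_is_verified_combatant:
        return False  # discrimination: never engage unverified targets
    if action.protected_site_in_blast_radius:
        return False  # protected objects may not be harmed
    # proportionality: collateral harm must be small relative to necessity
    if action.expected_collateral_harm > proportionality_threshold * action.military_necessity:
        return False
    return True  # all constraints satisfied within this narrow scenario

# Example: an unverified target is refused outright; a verified target with
# negligible expected collateral harm passes the bounded rule set.
print(permitted(ProposedAction(False, False, 0.0, 0.9)))   # False
print(permitted(ProposedAction(True, False, 0.05, 0.9)))   # True
```

[The point of such a sketch is the architecture, not the specific rules: within a narrow, well-specified scenario, every constraint can in principle be tested and verified, which is what the discussion below about testing and verification refers to.]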
Ayanna Howard: So, we can teach robots basically how we try to teach humans to be good.
Ron Arkin: We can use the same sets of rules. We won’t teach them quite the same way, because we won’t put them in a classroom, so to speak, teach them, and make them take an exam. But hopefully we can have proficient programmers who can find ways to embed these rules within their programming, and to test and verify that the systems are indeed, under certain circumstances, behaving appropriately – even more so than human beings. And it’s my contention that if we can’t outperform human warfighters with respect to ethical compliance, these systems should not be fielded.
Ayanna Howard: So, who is part of this conversation? Is it you, and you’re standing up and you represent all roboticists? Is it you against the world? Or, are there roboticists and computer scientists as well as philosophers as well as lawyers – I mean, is this a unique community of everyone contributing to the conversation?
Ron Arkin: Everyone is contributing to the conversation, which is good. If you had asked me 15 years ago, when the conversation first started, there were only a few lone voices acting as harbingers of the threat that was coming. Some of my colleagues and I had debates early on with respect to these particular issues, which helped to raise consciousness of the problems and the potential benefits that these systems can have. But the conversation has clearly expanded into philosophy – philosophers and ethicists, and it’s crucial to have them in the discussion. As are theologians, as are military people – the users of this technology – and average people, as well.
The ways in which we go forward require discussion. You do not want to leave these decisions up to roboticists alone; you want to make sure the discussion is broad. Policy-makers have to be brought in. And the good news is that those discussions are being had. Whether they will result in something useful remains to be seen, but at least we are talking about it now.
Ayanna Howard: So, here’s a question for you. A lot of these are about war and about lethal weapons. I’m going to turn it more to the domestic space. Because, of course, that’s going to be the next step. Should we arm police drones, for example? What are your thoughts about that?
Ron Arkin: I am currently, until I hear a better reason otherwise, against using this technology in domestic settings. It’s just a different kind of operation when you’re using it against your own citizens. There’s posse comitatus and other restrictions on using the military in domestic circumstances. We already don’t use flamethrowers domestically, and we don’t use tanks and bazookas. Some technology is appropriate to import; some is not. And it goes the other way around, too. For example, tear gas is perfectly acceptable in domestic settings, but it is a crime if you use it in war settings – you’re not allowed to use gas warfare.
There are just different sets of rules. Even the military has something called the rules for the use of force, instead of rules of engagement, under domestic types of situations. But you do have to be concerned about what’s called civilian blowback, where this technology may find its way into domestic hands – maybe not in our country, but in other countries as well – and be used against civilian populations. That’s one of the major points that opponents of the technology bring up.
Ayanna Howard: Because there may be a fear that it will come back home.
Ron Arkin: Yeah.
Ayanna Howard: I think that probably frightens a lot of us. I can see some good, because then you don’t have the emotional aspects of peacekeepers in the domestic space. So, what can I do? Maybe I’m pro, maybe I’m con, maybe I think that this is the best thing, or maybe I’m just like, ‘Eh, I’m not quite sure.’ So, what can I do?
Ron Arkin: There are so many opportunities now to get engaged in the discussion, in so many different ways. First, get educated. Try to read about the issues associated with this. Don’t just read newspaper headlines, I guess, is the best advice – I didn’t say don’t read the newspaper article, but headlines are there to serve as clickbait, basically, to get you to look at the article in the first place.
But then, have meetings – whether it’s with your book club, a church group, or a political group, or by taking classes or speaking to your congressman, depending on where you are in the geopolitical power structure. Get educated, and then engage. Make recommendations. It’s up to everybody to take part in this particular discussion. I have always said I tend not to be prescriptive – I don’t want to enforce my own personal beliefs about what is right and wrong on others – but I do want people to make up their own minds. We need to find ways to make that happen, and there are many venues. The White House has had discussions. There are meetings; I’m traveling all around the world talking about this, and many other people are doing the same. Get engaged. If you are opposed to the technology, there’s a campaign against killer robots – a coalition of NGOs, non-governmental organizations, in which Human Rights Watch is one of the leaders. You can get involved with that and hear their point of view. And there’s a lot of literature on the United Nations Geneva webpage to read up on the different perspectives and even follow the discussions.
But don’t let this happen without your voice. That’s the important thing.
Ayanna Howard: So, get involved.
Ron Arkin: Exactly.
Ayanna Howard: Well, thank you for this conversation. It’s a tenuous time when you consider the importance of AI and ethics discussions. Right now, technology is advancing at a pace that is difficult for researchers, philosophers, and even policy experts to keep up with. The advice, as Professor Ron Arkin says, is to get involved. Become engaged in the conversation. Whatever your opinions are, have a voice in it.
I would like to thank Professor Ron Arkin again for joining us, and don’t forget to keep up with Ron and myself as we join other roboticists and robot ethics experts on September 25 in Washington, D.C.
Make sure you subscribe to our show and, as always, follow the School of Interactive Computing on Twitter and Facebook @ICatGT.