Wednesday, September 19, 2018
On March 18, 2018, a self-driving vehicle in Tempe, Arizona, was involved in a fatal crash that resulted in the death of a pedestrian crossing the street at night. As a result, tests on self-driving cars by the company were suspended in four major cities and the inevitable questions arose: Should human “drivers” be responsible for their autonomous hosts? How do we train self-driving cars to perform risk analysis in real time? Ultimately, are travelers safer with autonomous vehicles on the road?
Transcript:
Ayanna Howard: On March 18 of this year, a self-driving vehicle in Arizona was involved in a fatal crash that resulted in the death of a pedestrian crossing the street. Preliminary reports showed that the vehicle's automatic emergency braking system had been disabled in order to prevent erratic driving. While the sensors spotted the woman crossing the road, neither the robot system nor the human backup driver was alerted in time to intervene. As a result, the company suspended its self-driving car tests in four major cities, and the inevitable questions from the public began to arise.
Should human drivers be responsible for their autonomous hosts? How do we train self-driving cars to perform risk analysis in real time? Ultimately, are travelers safer with autonomous vehicles on the roads?
(Instrumental)
I'm Ayanna Howard, chair of the School of Interactive Computing, and this is the Interaction Hour.
(Instrumental stops)
Here to help us answer those questions is Professor Ron Arkin of the School of Interactive Computing. An expert in robotics and robot ethics, among other areas, Arkin has over 230 technical publications and has published three books: Behavior-Based Robotics, Robot Colonies, and Governing Lethal Behavior in Autonomous Robots. He has served on the Board of Governors of the IEEE Society on Social Implications of Technology, and on September 25 he will participate in a panel on ethics in artificial intelligence in front of media and Congress in Washington, D.C.
Thanks for joining us.
Ron Arkin: Thank you, Ayanna.
Ayanna Howard: So, let's start with the obvious question: Why do we even need self-driving cars? I mean, haven't we survived on our roadways for years without them?
Ron Arkin: Some of us have survived on our roadways for years without them, but unfortunately many people have died. I drive into work and pass under the sign that counts the number of traffic fatalities Georgia registers each year, and you can see that number going up over the course of the year, from January all the way through December, reaching close to 1,000. I think the last number I saw was 470 or something like that for this year alone.
A lot can be done to improve safety on the roadway, and it's believed by many that autonomous self-driving cars can help provide that support.
Ayanna Howard: Okay, so, if we look at this incident that happened, there aren't as many self-driving cars on the road as human-driven cars. So, won't the fatality rates be about the same?
Ron Arkin: The intent is no. Partly, that's due to the absence of factors that human beings bring to the road, such as road rage and distracted driving -- all kinds of things that people do, drunk driving, for example -- that autonomous vehicles are completely immune to. That doesn't mean they're perfect, and there will still be fatalities on the roadway. But the hope is that they will be far, far fewer than they currently are.
Ayanna Howard: Okay, so there are fewer fatalities, which is a good thing. So, why did this happen? What was wrong with the system that led to a fatality? There have been a couple of others, with other vehicles, as well.
Ron Arkin: The specific incident you were talking about earlier, yeah.
Ayanna Howard: What could have been done -- after the fact, of course, but what could have been done?
Ron Arkin: I have my own opinions, which I'll share with you. The science is one thing, and indeed artificial intelligence is evolving as we speak, getting better and better at many different things. But, as you may know, there is a rush to push things out into the marketplace. Being the first one there commands major market share and great respect, even if the system is not perfect. My biggest concern is that we will start using these systems before they're ready for prime time, and if we do that, there is more risk associated with their use.
And, as I said, they will end up killing people on the roadway. There's no doubt about that. The real question is: do they kill far fewer people on the roadway than existing human drivers do? As an ethical consequentialist -- in other words, I'm concerned with outcomes -- I think this is potentially a very good thing.
Ayanna Howard: One of the things I understood is that there are ethics involved in self-driving cars. But do people really understand what that means? What are the ethics of a self-driving car?
Ron Arkin: There are many decisions that have to be made that have ethical consequences in the design, implementation, and use of a self-driving car. There are classic problems drawn from, like, Ethics 101 classes that you might see at Georgia Tech and other campuses, such as what's called the Trolley Problem, where a car has to decide whether it will let the driver die or plow into pedestrians on the sidewalk. How many pedestrians is the right number? What if there are children in the car? There are all kinds of variants on this particular problem that remain unsolved at this point in time. But, nonetheless, people have to make decisions as to what the car is going to do.
Another question, which is even more concerning for me, although not directly dealing with the fatality issues, is the use of social norms versus legal requirements. We pass laws on speed limits. You should come to a full stop at a stop sign. All these sorts of things are supposed to be obeyed by everybody at all times, or you will be ticketed and punished. Most folks -- and I hate to admit it, myself included -- don't necessarily obey all those laws to the letter all the time. As such, should we hold autonomous, self-driving cars to the same standards we hold human beings to?
There's a law in Georgia referred to as the slowpoke law. If you're on a freeway and traveling at or below the speed limit in the fast lane, you can be ticketed, because you are potentially creating a road hazard by forcing people to drive around you and increasing road rage. People are ticketed in this state by virtue of that particular law. Does it make sense to enforce social norms? We could right now create cars that will issue a ticket for you automatically if you go faster than the speed limit. My iPhone tells me the speed limit for each freeway I'm on, and it can deduce the speed I'm going. So, why doesn't it just issue a ticket each time I go two miles above the speed limit? That's technically feasible at this point in time. Do you want that in your cars?
What's the right answer? That's the real question, and that's what ethicists have to address. And then, of course, when things go wrong, whether it's a ticket or whether it's a fatality, the question is: who is responsible for that particular action? Is it the designer of the car? Is it the driver of the car? The people who programmed it? Is it the ethicist who decided what the right choice was at that particular point in time? Or, which is what I'm afraid of, are we going to leave it all up to the lawyers to decide, and lawsuits will ultimately determine what's right and what's wrong? To me, that's not the best way to proceed, but unfortunately, as this technology rushes out onto the roadways, that may be what actually happens.
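The automatic-ticketing idea Arkin raises above is mechanically simple: compare the vehicle's measured speed against the posted limit for the current road segment and flag a violation once a tolerance is exceeded. The minimal sketch below illustrates that comparison; the helper functions for reading GPS speed and looking up the posted limit are hypothetical placeholders, not any real device API.

```python
# Hypothetical sketch of the automatic speed-ticketing idea discussed above.
# get_gps_speed_mph and lookup_posted_limit_mph are assumed placeholder
# functions supplied by the caller, not a real API.

TOLERANCE_MPH = 2.0  # the "two miles above the speed limit" threshold mentioned above

def check_speed(get_gps_speed_mph, lookup_posted_limit_mph, position):
    """Return a violation record if measured speed exceeds the posted limit plus tolerance."""
    speed = get_gps_speed_mph()                # speed deduced from GPS, as a phone already does
    limit = lookup_posted_limit_mph(position)  # posted limit for the current road segment
    if limit is not None and speed > limit + TOLERANCE_MPH:
        return {"position": position, "speed_mph": speed, "limit_mph": limit}
    return None  # no violation recorded
```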
Ayanna Howard: So, you're saying that there are exceptions to rules, which is kind of our human nature. And yet, with these self-driving cars, we haven't decided whether they should follow the letter of the law or follow our social rules. So, the question then is -- there are self-driving cars out there. I know in my travels I've seen them on the road. These issues haven't been solved yet, so what are we doing?
Ron Arkin: That's a very good question. What are we doing? One example of a self-driving car causing an accident by following the rules happened with a Google car: it came to a complete and full stop at a stop sign, and the car behind it, driven by a human, rear-ended it. So, who is responsible under those circumstances? The car that obeyed the law? The default answer is that the car that stopped is not responsible -- it's the driver who did not come to a full stop. But that accident could have been avoided, and that's the most important thing.
Ayanna Howard: And it could have resulted in a fatality, even though the rules may have been followed by the self-driving car.
Ron Arkin: And there are deeper questions too. Imagine a four-way stop at an intersection. We use human cues quite often. Sometimes the other driver waves you on to tell you who should go next --
Ayanna Howard: Or flicked off.
Ron Arkin: -- or flicked off, or what have you. You look at the person's eyes, if you can see them, or the way they are looking, and that tells you, okay, it's my turn. That isn't going to happen with self-driving cars. One solution that actually was implemented -- although I don't know how effective it is -- is that if the self-driving car wants to yield in a situation like that, it throws itself into reverse to indicate: okay, it's going backwards, so I guess that's my cue to go forward.
Ayanna Howard: (Laughing) Almost like a dancing car.
Ron Arkin: (Laughing) Pretty much, that's pretty much --
Ayanna Howard: I don't know. That might freak me out a little bit.
Ron Arkin: Well, the point is that we get accustomed to all this stuff as well. Think of Hartsfield Airport, one of the earliest airports to have trains with nobody onboard to drive them. Nobody pays any attention to that now. Everybody wants to be up in the front car, looking through the front window where there's nobody driving, to get the best view.
Ayanna Howard: That's true.
Ron Arkin: And we accept that.
Ayanna Howard: Do you think, then, in say maybe 10 or 15 years, we're going to be looking back at this conversation and we'll be like, "what were we talking about? Look, everything's perfect."
Ron Arkin: Perfect? No. Better? Maybe. And hopefully that will, indeed, be the case. This happens all the time. I mean, think of the days when there were elevator operators. When I was young, a good number of years back, there were people who would actually control the elevator and take you to your floor, and that was the default. Now we have elevators that are smarter, often, than the people riding them --
Ayanna Howard: (Laughing) Now, now...
Ron Arkin: (Laughing) That's true. They get us where we need to go. Elevators are pretty smart in terms of controlling themselves -- narrow domain knowledge is what I'm referring to here. Smarter in that particular sense. And it would be disconcerting now to actually see a human being controlling the elevator under those circumstances.
Ayanna Howard: That's true.
Ron Arkin: It's the same across a variety of circumstances. Historically, artificial intelligence stops being called artificial intelligence when it fades into the background and no one notices it anymore. You see that in your washing machines, in your current cars, in your iPhones, and all these other things. We don't recognize the advances and the impact AI has had because it's not new anymore. We've accepted it. And that's common with almost all technologies.
Ayanna Howard: So, there's a good to this.
Ron Arkin: Oh, yeah. There's a potential good to it. Getting there is the hard part. Some people say that a child born today will never drive a car, and maybe that's a good thing. Because the hard part is when we have mixed drivers, when we have humans who might want to test the system -- oh, you said you saw a self-driving car on the road. Well, maybe you wanted to swerve into it to see how it would respond? Somebody might have that particular idea.
Ayanna Howard: Yeah, never me.
Ron Arkin: Never you. What was I thinking? (Laughing) But someone else. Someone might be curious about how these systems actually work, and the car may not be capable of handling those circumstances. Human beings can be curious, but they can also be malicious, as you may know. And, as such, while these self-driving cars are sharing the road with humans, that could be a potential transition problem.
Ayanna Howard: Okay, so, where right now we have self-driving cars, but there is a human operator that's supposed to be paying attention --
Ron Arkin: Yeah.
Ayanna Howard: -- and then we have a future where, probably, it's all just self-driving cars, and in between we have this transition point. What do we need to do? I mean, because this is going to be a problem. So, I'm going to wake up in, say, two years, and I'm still going to have to deal with this mixed field of human-driven and robot-driven cars. What do we need to do? What power do I have?
Ron Arkin: Well, those are two separate questions. What we need to do is be cautious as we move forward. As I mentioned, the technology needs to be effectively tested. It worries me when certain companies say, "You're a beta tester out there. Here's our software, you sign on, you're beta testing this, and so you're responsible for the consequences." And you're just a purchaser of that particular car. That's dangerous to me. There are techniques for, and progress still to be made in, verification and validation to ensure that these systems perform appropriately and adequately within their system boundaries as defined -- and we need to define what those boundaries are. These cars may work perfectly well on an average day, but in snow or rain or -- what's the postal service motto -- under all those bad circumstances, the car may potentially fail catastrophically. We need to be careful as we move forward and find ways to regulate. And this is where your power may lie: talk to your congressman or whomever to ensure that appropriate regulations are set up to prevent the technology from being pushed out too fast.
Otherwise, it will be the classic case of putting up a stop sign after the accident. We'll change things after people die, and we need to do it before.
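Arkin's point about "system boundaries as defined" is often framed as an operational design domain check: before engaging autonomy, the system verifies that current conditions fall inside the envelope it was validated for, and hands control back otherwise. The sketch below is illustrative only; the boundary values and condition names are assumptions for the example, not drawn from any real vehicle's specification.

```python
# Illustrative sketch of an operational-design-domain check; all thresholds and
# condition names are assumed values, not a real vehicle specification.
from dataclasses import dataclass

@dataclass
class Conditions:
    weather: str          # e.g. "clear", "rain", "snow"
    visibility_m: float   # estimated visibility in meters
    on_mapped_road: bool  # whether the current road is in the validated map

APPROVED_WEATHER = {"clear", "overcast", "light_rain"}  # assumed validated conditions
MIN_VISIBILITY_M = 100.0                                # assumed minimum visibility

def within_design_domain(c: Conditions) -> bool:
    """True only if every defined boundary condition is satisfied."""
    return (
        c.weather in APPROVED_WEATHER
        and c.visibility_m >= MIN_VISIBILITY_M
        and c.on_mapped_road
    )

def decide_autonomy(c: Conditions) -> str:
    # Outside the defined boundaries, hand control back rather than fail unpredictably.
    return "autonomy_engaged" if within_design_domain(c) else "request_human_takeover"
```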
Ayanna Howard: So, what I hear is that AI -- artificial intelligence and robotics, self-driving cars -- is good, but we have to be careful, we have to be cautious, and we do have power, by ensuring that there are some kinds of rules and regulations, primarily on companies.
Ron Arkin: Hear, hear.
Ayanna Howard: Sounds good. And that closes our conversation with Professor Ron Arkin from the School of Interactive Computing, who chatted with us today. A lovely, lovely conversation about the potential benefits and perils of autonomous driving systems.
We appreciate Ron for joining us. We had so much fun just driving -- oops, I meant diving -- into AI ethics that we will have him back for another conversation, where we will wade into deeper waters on the killer robot problem. You don't want to miss that one.
(Instrumental)
If you like our show, be sure to subscribe and follow the school on Twitter and Facebook at @ICatGT.