Future Now
The IFTF Blog
Kill Decision
Maria Konnikova and Kevin Kelly discuss how robot violence could make the world a better place
South Korea has robot snipers at the North Korean border that can recognize and kill a person from a mile away, stopping them dead in their tracks. The manufacturer builds the robot with the ability to fire on what it sees, but at the request of the South Korean military, it modified the system so that a human must enter a password to give the robot the go-ahead to shoot. From a technology perspective, a killer robot no longer needs a human in the loop. So what is the role of humans on the robot battleground? We asked two of our favorite thinkers to discuss the issue. Kevin Kelly is the co-founder of Wired magazine and the author of several books, including the New York Times and Wall Street Journal bestseller The Inevitable. Maria Konnikova is the author of two New York Times bestsellers, The Confidence Game and Mastermind: How to Think Like Sherlock Holmes. She is a contributing writer for The New Yorker, where she writes a regular column focused on psychology and culture.
Mark Frauenfelder: Do you think it would ever be a good idea to take humans completely out of the decision-making loop when it comes to autonomous combat robots?
Kevin Kelly: I actually do. It will help us get rid of some of our assumptions about conflict, and about killing people as a way of resolving it. The process of delegating these jobs to robo-soldiers may, in the short term, be horrific, but in the long term it will make it clear how stupid and basically immoral it is to kill, and it will become harder and harder for us to justify anybody doing it. This is one of the ways in which artificial intelligence (AI) and robots will make us better humans.
Maria Konnikova: I do think that’s a very good point. My initial reaction was that no, taking humans out of the equation is not a great idea, because we’ve seen, over and over, that our ability to create sophisticated technology is outpaced by hackers’ ability to break into that technology and take it over. You have this happening all over the world: the second we invent something, someone figures out how to break into it. So if you remove the humans who are actually supposed to be in control, you still have the possibility of sabotage and a lot of other things going wrong, and you no longer have anyone who can override the system. Maybe I’m being alarmist, but I’m never comfortable completely delegating to technology, just because I’ve spent years looking at how people can take advantage of technology, and do take advantage of it.
If someone can program robo-soldiers to be ethical, someone can hack them and program them to be “ethical in a different way,” a way that to us would be very unethical. What happens if that’s the model that ends up winning? I realize this sounds dystopian, but there are plenty of examples in history, and part of that mindset comes from the fact that I grew up in the Soviet Union when it was still the Soviet Union. So I always assume that things can get bad a lot faster than we realize.
Kelly: Yeah, so the only alternative to this is to say, “No, no, no. We humans want to reserve the right to kill each other.” We’re not very good at it, by the way, because we miss. We’re not very precise. We’re very emotional. Machines aren’t going to be accused of war crimes, as long as they’re not programmed into ... they may be hacked into being that way, but in their legitimate form, they’ll be very rational. They can be very precise. In fact, they may be able to hurt someone without killing them. So, in the long term—again, in their legitimate, un-hacked version—they can probably be better at it than we are. But we’re saying, “No, no, no. We kind of want to do that ourselves.”
Konnikova: I love your point that, eventually, it will be difficult to justify any kind of killing, but I’m genuinely curious about the interim, because we already have these debates about drone warfare: even though humans are piloting the drones, the argument goes that because the killing isn’t face-to-face, you’ve abdicated your responsibility through that factor of removal. Are you worried at all that in the medium term, rather than the long term, we’re actually going to see more violence, because it will be more removed?
Kelly: Right, like drone warfare, where the guys are in air-conditioned containers, in Arizona, and they’re killing people in Africa, and it’s not face-to-face.
Konnikova: Right, and if you have these robot soldiers, especially if they don’t even have human control, then that makes that problem a lot worse.
Kelly: Superpowers like the U.S., and maybe China and Russia, will tend toward these things because the casualty rates are low. But as with a lot of things, there’s a double standard, because, you know, if these armies were coming here killing people, we’d be very unhappy about it.
So, yes, in the short term, whether we’ll see more violence, I don’t know. I buy Steven Pinker’s argument, that in general, the technology and the globalization of the world has decreased violence overall, and that trend will continue, so I don’t think that having robo-soldiers will suddenly change that overall trend.
If you have robot soldiers coming to kill you, people are not going to be happy. It causes a debate. It says, “Is this what you want? Why do we have machines killing?” And then you have to say, “Well, why do we have humans killing? Why is it better for a human to kill than for a robot to kill? How does that make it better?”
So, while there will be plenty of turmoil, and conflict, and short-term massacres, in the long term, this will actually continue to decrease violence in the world.
Konnikova: Even if it falls into the wrong hands? Like Putin or Trump having the technology, and getting mad, and deciding, “Let’s do this,” and now you don’t have human oversight.
Kelly: It could happen once or twice.
Konnikova: But is that not enough?
Kelly: Enough for what?
Konnikova: For mass destruction.
Kelly: Yeah, it’s like we had two atom bombs, and that seemed to be enough. Actually, I’m of the view that both of those were unnecessary, that we would have won the war anyway, but we didn’t keep doing it, because everyone realized it had gotten out of hand. So we could certainly have the first use of this be a massacre, but also a lesson, if it was done wrongly. But I don’t think it necessarily has to be that way. We could train robo-soldiers to be ethical, and moral, and better than us, and that’s the difference. If anything, you’re much more likely to have this kind of disaster by outlawing them.
Konnikova: I hope, I really hope, that forcing more people to consider these issues will get us to a good place. I’m not 100 percent optimistic, just because I’m not an optimistic person, but that’s probably clear.
Kelly: Well, you’re talking to one of the most optimistic people on the planet.
FUTURE NOW—Reconfiguring Reality
This third volume of Future Now, IFTF's print magazine powered by our Future 50 Partnership, is a maker's guide to the Internet of Actions. Use this issue with its companion map and card game to anticipate possibilities, create opportunities, ward off challenges, and begin acting to reconfigure reality today.
About IFTF's Future 50 Partnership
Every successful strategy begins with an insight about the future, and every organization needs the capacity to anticipate it. The Future 50 is a side-by-side relationship with Institute for the Future: a partnership focused on strategic foresight on a ten-year time horizon. With 50 years of futures research in society, technology, health, the economy, and the environment, we have the perspectives, signals, and tools to make sense of the emerging future.
For More Information
For more information on IFTF's Future 50 Partnership and Tech Futures Lab, contact:
Sean Ness | [email protected] | 650.233.9517