Ethics, Law, and Society Forum – October 23, 2018 – Ryan Jenkins

[ Music ]>>Okay. So welcome again
to this week's installment of the Center for Ethics, Law,
and Technology speaker series, [inaudible] Philosophy 205. And this week, we're
going to hear — We’re excited to bring up
Ryan Jenkins from Cal Poly. And he has been teaching
there for a number of years: philosophy of technology,
philosophy of science, and a philosophy
of warfare class. So we have a really interesting
talk today on autonomous weapons and ethics, which is a
very important subject that we’re going to
have to deal with. If you can’t get enough
of this, today at 5:30, I will be giving a lecture in the [inaudible]
series on a similar topic. So it's all robot weapons today. So it will be exciting. Ryan works in the, was it –>>Ethics and Emerging Sciences.>>Ethics and Emerging Sciences. I kept wanting to say ethics
and emerging technology. The Ethics and Emerging
Sciences group at Cal Poly where they're lucky enough
to have a whole bunch of professors working
on these topics, and they have some stellar and exciting publications
out of this. That’s very similar to the
group that we have here, but ours is more focused
on ethics and law. So very exciting. I hope you can help me in
welcoming Ryan Jenkins.>>Thanks. [ Applause ] Thanks. Thank you, Dr.
Sullins, for having me up here, for welcoming me to your
beautiful campus. He is right. The subject of my talk today
is going to be the social and political implications
of autonomous weapons. These are weapons that
are capable of targeting and engaging human targets
without human oversight. So affectionately called
killer robots sometimes. When you hear people
talking about killer robots, this is what they’re
worried about. And the angle that I wanted
to come at this from was from the angle of
engineering ethics. So if you’ve been
paying attention to the technology
press recently, you’ve probably seen
a tremendous amount of controversy aimed at
companies like Google and Amazon, and even Microsoft
got pulled into this somehow, where their employees are
starting to ask what kind of projects are we building and are they morally
justified projects. So Google got into
a lot of trouble for a project called
Project Maven. That was a contract they
took with the Pentagon to help develop an
artificial intelligence system to identify targets in
drone surveillance footage. And a lot of people looked
at this and they said, well, classifying objects in drone
footage is the first step towards being able to tell
between things that are humans and things that are not. And that’s the first
step towards being able to distinguish between things
that are friendly humans and things that are
enemy humans. And you can see why
people were worried. People were worried
that Project Maven, the project that Google
undertook with the Pentagon, put us on the road to
autonomous weapons. So you’ve seen a lot of articles in the popular press examining
this emerging controversy. And then you’ve seen responses from people, the
Washington Post. Silicon Valley should stop
ostracizing the military. And just about a week ago,
Jeff Bezos, the CEO of Amazon, came out
firing against Google and saying that Google and their employees
are mistaken for getting upset about working with the Pentagon. Bezos said America
is a great country and it needs to be defended. What I want to do in
this talk is I want to raise some questions that engineers might
find morally relevant when they’re considering
whether to take on projects for the military,
like Project Maven and like autonomous weapons. Some of you in this
room might be in this position
someday very soon. You might be in the
position where you’re asked by your employer or
your boss to take on a project working
for the Pentagon. And the question
that I want to put to you is: What will you say? And I want to help
you think about this by giving you some
questions to consider, a framework to consider, and
by illustrating some things that you might not
have thought about. My motivating quote
comes from Langdon Winner, who's probably my favorite
philosopher of technology. He says, there’s a tendency in
our engineering professions not to see the kind of work
they do as involved with serious moral questions. And he says, convinced that
technology is merely neutral and that somehow the important
questions will be raised by someone else. Many engineers seem unwilling
to talk about ethical concerns. Now he goes on to say that
this is not a feature solely of our engineering
professions, but that in a lot of different sectors in
society, we’ve started to see ourselves less and less
as individual moral agents and citizens of a free society. And instead, we start to
see ourselves as these parts of large corporate bodies that gradually eliminate the
individual’s need to think. This is a very stirring and
a very challenging quote. And what Winner is saying here
and what I want to pick up on is that we need to recapture
the sense of ourselves as moral agents and
citizens of a free society, rather than people who
will do what they’re told and then be content to think
the important questions will be raised by somebody else. Now I have just one claim
that I’m not going to defend. The claim that I’m
not going to defend is that a good engineer considers
the possible consequences of their actions. I’m not going to defend
this because I think that every engineer
already accepts this. I think it would be pretty obviously false to say that you can be a good
engineer without thinking at all about the consequences
of your actions. And I also think that it
would be irresponsible. Now the difficulty, of
course, is determining which consequences engineers are
responsible for anticipating. So what I want to do is sketch
some reasons why military contracts, contracts with
the Pentagon, might be particularly problematic. And when we're thinking about
the consequences that we have to consider, I want to take
this quote from Lewis Mumford as being illustrative. Mumford says — My own students
back at Cal Poly get sick of this quote because I
find a reason to bring it up nearly every class. What Mumford says is, the
machine can’t be divorced from its larger social
pattern, for it’s this pattern that gives it meaning
and purpose. So what’s the larger social
pattern that we’re considering, and what kinds of reasons
does it give engineers when they’re thinking about
the ethics of what they do and whether ultimately
to take on something like autonomous weapons? Now in that quote from
Langdon Winner, the first thing that he laments is that
there’s a widespread view that technology is
merely neutral. This view is very popular
in Western civilization, and it’s very popular outside
of a philosophy classroom. But in a philosophy of
technology classroom, this is probably the first view
that will come under attack because it’s been viciously
assailed for hundreds of years, or at least a solid
150 years or so, by philosophers of technology. And they’ve pointed
out many ways in which technologies are
not just neutral tools, but they can affect the
nature of an activity and they can affect the way
that we pursue our goals. So here’s what I mean. Rather than merely
getting us from A to B, instead using a tool
can change the nature of the B that we accomplish. Let me give you a rather
mundane example of this. Let’s start with a can opener. Can opener seems
like a neutral tool. Doesn’t have any moral
qualities, right. We can’t judge it as
morally good or morally bad. It just sits there on the table. It doesn’t do anything
by itself. What does it help us do? It helps us get food, okay. Fair enough. Very easy to tell a story about how the can
opener is just a tool. But when we dig into it
and think a little bit more about the way that a
can opener encourages us to accomplish a task,
what we see is that it can actually push us
in various directions, that it can endorse certain
ways of accomplishing this task and rule out other ways. So let me show how that is. A can opener isn’t just a
tool that helps us get food, but when we use a can opener, it
changes the kind of food we get. It endorses certain
kinds of food, and it eliminates other
kinds of food from our diet. What are some examples of this? Think of the kind of food you
get when you use a can opener. It endorses longevity,
preservation, right. You’re opening a can. It endorses foods that have been
preserved, things that have come from far away, things
that can come from the southern
hemisphere, right. So we’ve obliterated
the connection between our diet
and the seasons. And it endorses mass production. These are the kinds of values
that we find in a society where can openers are common. Notice this is not the only way that we might have a
relationship toward food. Think of the opposite
of each of these goals. So instead of longevity
and preservation, you can imagine experiences
that are ephemeral or temporary. Rather than things that
come from far away, think about the farm
to table movement. Think about the local
food movement. And rather than things that
have been mass-produced, think about unique or
artisanal experiences with food. A can opener gives us all
the things on the left, and it takes away all
the things on the right. So what do can openers tell
us about autonomous weapons? What do they tell us about
working with the Pentagon? Even for some goals that
seem totally unimpeachable, morally good, the way that we
construct our tools can change the way that we accomplish
those goals. Take a goal that’s
totally admirable that no one would criticize, the
goal of identifying terrorists. When it comes to
military ethics, there’s probably no more
important distinction. There’s no more important
task than distinguishing between these two
kinds of people. The people on the
left, combatants, soldiers, legitimate targets. In military ethics, we say the
people who are liable to harm, the people who have
done something to make themselves
worthy targets of harm. On the right, we have
civilians, noncombatants, illegitimate targets,
call them whatever you want. And the most important thing for
a military to do is to be able to discriminate successfully
between the people on the left and the people on the right. Now when it comes to a
traditional state-based army, like the United States Army,
it's pretty easy to know who the legitimate targets are. The Red Cross tries
to help us here. The Red Cross says you’re only
permitted to target people who are carrying out or
organizing combat operations, people who are commanding or
carrying out combat operations. Now when the military looks like
this, this is an actual organizational chart
of the United States Navy from World War II. You see some names here
that you recognize. Douglas MacArthur's
in there somewhere. It's pretty easy to know who's
commanding, who’s carrying out combat operations. Pretty straightforward. Only this is not the
enemy that we face. This is the enemy that we have faced for the last 18 years
or so at least. This is a map of a
terrorist network. This is an actual
map of a portion of Al Qaeda's terrorist network. The spoke in the
upper left is Mohammed Atta, the “ringleader” of 9/11. These are the terrorists that
organized and carried out 9/11. Now the problem is, when you’re
fighting an enemy that looks like this, when you’re
fighting a nonstate army, it’s very difficult to
know who are the people who are commanding and
carrying out combat operations. If we had a nice, neat
organizational chart, we’d just aim for the
people at the top. But we can’t do that here. What we get is something
messy, something horizontal. We get a network of connections
that’s very complicated. Now the United States has
a way of figuring this out. They use a technique called
social network analysis. And what that lets the
Army do is take a network like a terrorist network
and then find out which of the nodes, which
of the people in that network are most crucial
to the network’s operation. And the idea is this. If we were fighting
a traditional army, we would just decapitate
the army. We’d aim for the
people at the top. If we’re fighting a nonstate
army, though, it’s not easy to tell who’s “at the top.” What we get is this
messy network of interconnected spokes
and hubs and nodes. So instead, we aim for the most
crucial nodes in the network. We aim for the people who
seem to be most important, most integral to the functioning
of that terrorist network. The army uses a technique
called eigenvector centrality. For all the computer
scientists in the room, this might ring some
bells for you. Eigenvector centrality is a
technique whereby nodes are weighted by the number of
connections they have to nodes that themselves have
many connections. That sounds complicated. But it’s actually the same
method that Google used when they designed their
page rank algorithm. So way back in the
early days of Google, when you do a Google
search, it gives you, you know, a billion results. How does Google determine which of these results
are most important? It looks at pages that have a
lot of links from other pages that themselves have
a lot of links. Google's PageRank algorithm
was borrowed from academia. If I’m searching for a paper
that I want to read and it comes up from an author
I’ve never heard of, how do I know whether
this paper is reputable? Well, I can see who
cites the paper. And if the paper
is cited by people who are themselves famous,
then that’s a way of knowing that the paper is reputable. So the paper is cited by people who themselves have
many citations. For Google PageRank, the
page is linked to by pages that themselves have many links. This tells you it’s
a reputable page. And the United States
military says, well, let’s do the same
thing with terrorists. People who have many
links from people who themselves share a lot
of links must be important. And then the idea is that by
killing an important node, you would disrupt this
terrorist network. Now what’s wrong with this? Well, an operation that
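(A quick aside for the computer scientists in the room: the measure just described, scoring a node by its links to nodes that are themselves well linked, can be sketched with simple power iteration. The little five-node network below is entirely made up for illustration; it is not drawn from any real data.)

```python
# Minimal sketch of eigenvector centrality via power iteration.
# The five-node undirected network here is hypothetical.
import math

edges = [("A", "B"), ("A", "C"), ("B", "C"), ("C", "D"), ("D", "E")]
nodes = sorted({n for pair in edges for n in pair})
neighbors = {n: set() for n in nodes}
for u, v in edges:
    neighbors[u].add(v)
    neighbors[v].add(u)  # undirected: links run both ways

# Power iteration: each node's score is repeatedly replaced by the
# sum of its neighbors' scores, then the vector is normalized.
# This converges to the dominant eigenvector of the adjacency
# matrix, so a node scores high when its neighbors are themselves
# well connected, the same idea behind PageRank.
score = {n: 1.0 for n in nodes}
for _ in range(100):
    new = {n: sum(score[m] for m in neighbors[n]) for n in nodes}
    norm = math.sqrt(sum(v * v for v in new.values()))
    score = {n: v / norm for n, v in new.items()}

for n in sorted(nodes, key=score.get, reverse=True):
    print(f"{n}: {score[n]:.3f}")
```

In this toy graph the hub C comes out on top, since it is tied to A, B, and D, which are themselves connected, while the leaf E scores lowest. Notice that every modeling choice here, what counts as an edge, which centrality measure, how the graph is built, is exactly the kind of subjective judgment at issue.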
seems totally objective, gather the data, run the
analysis, see what it says, is actually shot through with
subjective human judgments, judgments like what are these
connections and how do I know if they’re morally important. When I see a link
between two people in that network,
what is that link? Is it a handshake? Is it a phone call? Is it a meeting? Did they exchange a package? Did they travel together
somewhere? All of those things communicate
different kinds of information about the link, the “link”
that these people share. And moreover, eigenvector
centrality is only one measure of the strength of
nodes in a network. So we take a process that
seems perfectly objective, and it turns out
that there’s a lot of subjective judgment
built into it. And the subjective judgments
about what counts as a link and how we know which
nodes are central, those human judgments are
going to end up changing who we think the
important members of this terrorist network are. The ultimate concern
is that we end up targeting the wrong people. So remember, the primary task
of a military is to distinguish between people who are
a legitimate target and people who are not. But think about some of the
people that might get wrapped up in a social network analysis. Some of these people
might engage in the work only periodically. They might be people in
mere support functions. So think about someone who’s
going to have a lot of links to other people in the network. They might have a cook, a
cook who’s preparing food. They might have a medic that treats a lot
of different people. You might have a courier
who’s carrying packages from one end of town
to the other. You might have a journalist
who’s conducting interviews with as many members of
the network as they can. But none of these
people have done anything that makes them liable
to be killed. So the risk is that we end
up targeting people based on factors, based on data that don’t really provide
a moral justification for killing them. We start with a task
that is totally admirable: identifying terrorists. We apply a method that
seems neutral and objective: social network analysis. What we end up with is
a system that’s infected by subjective judgment
and risks killing people who are not legitimate
combatants. So what do we learn? Sometimes the way the
technology is designed is itself problematic. It’s not enough to show
that you’re engaged in an admirable project. But we have to also know the
way that you’re accomplishing that goal is not distorting
the goal in the process. The second thing
that I wanted to talk about is the social context
in which we deploy our tools. We’ve seen over and over again
that things that are designed for one purpose can take
on a life of their own. This goes all the way
back to Frankenstein. By the way, it’s the 200th
anniversary of the publication of Frankenstein, as I’m
sure you’re all aware. I’m sure you marked
your calendars.>>I did.>>And often, in hindsight, this
is totally foreseeable, right. We should have seen this coming. So there are two examples of
this that I want to bring up. This picture looks like
something out of Star Trek. It’s, in fact, a picture from
the early 1970s in Chile. After the socialist
revolution in Chile, Salvador Allende comes to power. And they looked at the socialist
experiments in the Soviet Union where you would have a
bureaucracy at the very top of the country, and it would decide how much of a particular good they
wanted to produce, socks or cars or seats or what have you. And then they would tell the
factories exactly how many to produce. They would give them a quota. This was called a
command economy, economy from the top down. In Chile, they wanted to
use a different model. They wanted to use
socialism from the bottom up. So they created a very advanced, very futuristic information
handling system that would allow factories
to talk to each other, to coordinate their behavior,
so they knew the supply of raw materials; they knew
how much people were producing in the country; they knew where it was being
shipped to and so on. And they called this
process cybernetic synthesis. For short, they call
it Cybersyn. This is a picture of the control
room, the central clearinghouse where this information
passed through. So it was an experiment
in socialism from below. And the idea was to collect
logistics information for the factories and the
other producers in the country and to help them
coordinate their behavior, to give power back
to the workers so they could control the way
that their work and the way that their lives were going. This was supposed to be
the diametric opposite of the way the Soviet Union
ran its economy; however, what we find is that Cybersyn’s
moment of truth came in October of 1972 when it was used
to suppress a strike. You have 40,000 workers
striking in a capital. And the people that were
in charge of Cybersyn, the people that had
access to all of the logistical information
in the country were able to use that information to
subvert the strike. They were able to
use that planning and those coordination
abilities to direct 200 trucks to subvert the 40,000
workers that were striking. That’s a tremendous negation, a
denial of the Democratic power of these 40,000 workers. Think about it this way. They’re outnumbered 200
to 1, 40,000 against 200. That means if you were one of
the strikers, if you were trying to protest for your rights,
you would have to bring more than 200 people to the strike for every one person
that opposed you. Forget democracy as
a tallying of votes. Forget 50% plus one. We’re talking now a
concentration of power where you can be
outnumbered 200 to 1. You could have half of
1% of the population on your side and
you get your way. What’s the lesson? The technology built to empower
workers was turned against them in the hands of the powerful. Once you create these
tools, once you turn them over to the powerful, what often
happens is that they use them to cement their control. I’ll give you another example
that is closer to home. After World War I,
the militaries of the world put a lot of
thought and a lot of capital into redoing their
military strategies. The United States
spent a lot of time and energy thinking
about air warfare. And they came up with a plan. The plan was the
B-17 Flying Fortress. The idea was this: take a
bomber, strap a bunch of armor to it so that it can penetrate
deeply into enemy airspace, it can avoid antiaircraft fire,
and then strap a bunch of guns on it, give it like
seven or nine or 13 guns so that it can even fight off
any fighters that challenge it in the air above
the enemy territory. Now as the war progresses, we
find that the B-17 is not nearly as impervious to antiaircraft
fire as we had hoped. The solution is to fly it
from a higher altitude. But what that means is
that its bombs are going to be less accurate in turn. What’s the solution? Drop more bombs. We end up with a situation
where the 50-50 radius (what's now called the circular error probable) of dropping a bomb from this plane was three miles. Now I meant to look up a map
and do this before I got here. But I suspect your campus is
less than six miles in diameter, which means if you
dropped a bomb aiming for this building,
there's at least a 50% chance the bomb
wouldn't even fall on campus. Now if that's the accuracy of
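(That back-of-the-envelope claim can be checked with a short sketch. Assume, as a standard simplification, that the miss distance follows a Rayleigh distribution, the usual model behind circular error probable. The three-mile figure is from the talk; the two-mile campus radius is an illustrative guess.)

```python
# Rough sketch: probability a bomb lands within a given radius of its
# aim point, assuming a Rayleigh-distributed miss distance calibrated
# to a circular error probable (CEP). The 3-mile CEP is the talk's
# figure; the campus radius is a made-up illustration.
import math

def hit_probability(radius_miles: float, cep_miles: float) -> float:
    """P(miss distance <= radius) under a Rayleigh model with given CEP."""
    # CEP is the median miss distance: CEP = sigma * sqrt(2 ln 2).
    sigma = cep_miles / math.sqrt(2 * math.log(2))
    return 1 - math.exp(-radius_miles**2 / (2 * sigma**2))

cep = 3.0  # half of all bombs fall within 3 miles of the aim point

print(hit_probability(3.0, cep))  # ~0.5, by the definition of CEP
print(hit_probability(2.0, cep))  # ~0.27: a campus with a radius under
                                  # 3 miles is more likely missed than hit
```
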
your weapons, you cannot hope to prosecute a war by
aiming at refineries or ball bearing factories
or railroad yards or the other kinds of targets
that have military value. What does the United States do? Well, they say we can't aim for
individual buildings anymore, but we’re pretty good at aiming
at a city’s general location. And that becomes the policy
by the end of the war. The explicit policy
of the United States and other allies was to
aim for city centers, to aim for civilian populations. What does this tell us? There was a huge difference between a tool's intended
purpose and what it does best and what it will
ultimately be used for. So we couldn’t use the B-17
for what we originally hoped. By the end of the war,
we end up using it to intentionally bomb civilians. That's another great case of how technologies are not
merely neutral but, instead, they have a kind of inertia. They can take on a
life of their own. They can dictate the policy. When you consider the
context of these machines, that's not terribly surprising. And then my olive branch
extended across the aisle. Look, regardless of your
political party, you don’t have to go very far back in history
before you find an example of the military doing something
that you disagree with. People on the right and the left
have a reason to be suspicious of the motives and the
activities of the military. If you’re in the
shoes of an engineer, that’s one of the things
that you should consider. Even when these results
are not foreseeable, they do require I think a
healthy dose of caution. So what’s my ultimate thesis? Engineers are partly
morally responsible for what they enable. Now I think that a lot of people
have an asymmetry in their mind. A lot of engineers are
happy to accept praise for the positive
effects that they enable. But at the same time, they
would shy away from a suggestion that they bear responsibility
or that they deserve blame or criticism for the negative
things that they enable. But I don’t think that
asymmetry is defensible. Now we’re talking
about these issues. And the examples that
I’ve used and the stories that I showed you at the
very beginning of this talk, they’re all from the
last couple of weeks. But this is not a new problem. Go all the way back to
Republic, Plato’s Republic, 380 BC in the first
book of Republic. Socrates says he’s trying
to investigate the nature of justice, the nature
of moral goodness. And someone says justice is
giving back what you’re owed, paying back your debts
and keeping your promises. And Socrates says, all
right, here’s an example. Suppose your neighbor gives
you a sword to take care of, like his favorite sword while
he goes away on vacation. And you say sure, I’ll
take care of this. I’ll give it back to
you when you get back. Your neighbor returns
from their trip, and they’ve clearly gone insane. And they come back to you. They bang on your door in this
murderous rage and they demand to have their sword back. And they tell you that they’re
about to go kill so-and-so. Socrates says, is it
the right thing to do to give this person
back their sword? The answer seems obviously not. So what do we learn? Now you have some reason
to give the sword back. It belongs to them. You’re keeping something
that belongs to them. But Socrates still says there’s
a very serious concern here, the concern for enabling
injustice. Notice what Socrates
doesn’t say. Socrates doesn’t say, well,
look, a sword is a neutral tool and it’s up to my neighbor to
decide what they do with it. Socrates says no. And to put his lesson in language that’s
more contemporary, I think this is what
he would say. When we give tools
that enable violence to people whose motives we
have good reason to suspect, we wrongfully enable
unjust behavior. That’s why it’s wrong to give
a sword to your neighbor. That’s why a lot of people
are criticizing Silicon Valley companies for working
with the Pentagon. You’re giving the tools
of violence to people who we can be very
confident are going to do bad things with them.>>What about giving billions of
dollars of weapons to Southeast.>>They would give
an analogous argument against that too,
I imagine, yeah. I want to consider a
couple objections quickly. This is a fairly
sophisticated objection. Suppose you said, well, look, if helping the Pentagon is
an action, then choosing not to help is an action too. And so, if I refuse to help the
Pentagon, then I’m responsible for the effects of not helping. Suppose some other country
that’s less scrupulous develops the same weapons and
we lose some kind of national strategic
advantage to them. Sure, that’s a fair point. That’s an important question. The response, though, for
me is, eh, probably not. There’s probably an
important moral distinction between what you do and
what you allow to happen. This is controversial
among philosophers, but I think that this makes
a lot of sense for people who are not philosophers. Consider what this would mean in
your general life, for example. Think of all the things that
you’re enabling, all the things that you’re allowing
to happen right now. People are dying
in the third world of easily curable diseases. Little, old ladies
are trying to walk across the street in Sonoma. You’re not right
there helping them. These are things that you’re
allowing to happen right now. But do we really think that
you're as blameworthy for that as you would be if you
made these things happen? Almost certainly not. And I think that the engineering
profession can allow this doing/allowing distinction
into their education, into their ethical curriculum, and into their conception
of themselves. So first of all, it really is demanding, right, to think that you're responsible
for what you allow to happen, what that means is, at any given
moment, you should be thinking about the most important thing
you could possibly be doing right now, and you have to do it or else you’re going
to be criticized. That seems really implausible. Secondly, it annihilates
this idea that engineers have a special
responsibility in society. It annihilates the idea that
they have certain roles. They have a place
to play in society because it makes their
moral obligations equivalent to your own. And I think that a lot
of engineers are going to find that implausible. So I think that they
can avoid that. Another objection, my second
objection, I only have two, says that engineers have
this special responsibility to help the Pentagon. And I think this is what
Jeff Bezos was saying. America’s a great country. It needs to be defended,
dot, dot, dot. The conclusion is that engineers
are the people to defend it. That might be true for some
things, but there are a lot of things in warfare,
autonomous weapons perhaps among them, that a lot of people think are
probably evil in themselves. The word we use is mala in se. You can leave it to philosophers
to come up with a Latin term for something that’s
perfectly coherent in English. Evil in itself. Philosophers prefer
to use mala in se. The idea is that some weapons and methods shouldn’t
be used no matter what. Michael Walzer, a
famous military ethicist, hugely influential military
ethicist, said if the only way that you can win a war is
by committing an atrocity, then you’re not morally
permitted to win the war. Take something like
genocide, for example. Most military ethicists,
probably all of them, accept that genocide
is mala in se. It’s impermissible
no matter what. Even if the only way you can win
a war is by committing genocide, you’re not permitted to do it. This is why it was probably
wrong for IBM, in the '30s and '40s, to work with the
Nazi regime very closely to help them develop punch cards
that they used to take a census and tabulate the population
of Jews in Germany. Now maybe some of them didn’t
know what they were working on, but a lot of them did, not
just the people at the top, but also people that worked with
the Nazi regime to make sure that the punch cards were
formatted and designed in the way that was
most efficient for what the Nazis
were using them for. Every concentration camp,
every major concentration camp in Germany had its own
IBM tabulation machine. Now do we think that
the engineers working on this project could say, look, a punch card’s just
a piece of cardboard? It’s just a neutral tool? You can use it to take a census
of all the people in Sonoma, or you can use it
to take a census of all the Jews in Germany. It doesn’t make a difference. It’s up to the people
that buy them to decide what they
do with them. I suspect we would not find
that a very compelling defense. Why is that? Because they have developed
a tool that enables violence, and they’ve turned it over to
someone whose motives they have reason to suspect, and they
have good reason to think that they’re using it for something that’s
impermissible no matter what. They’re using it to
assist in genocide. A lot of people think autonomous
weapons are evil in themselves. And it’s not even clear that they could be used
permissibly in any situation. I can talk about that more
in the Q&A if you’d like. So if these weapons are wrong
to use, then they’ve got to be wrong to create because
creating them only makes it more likely that they’re
going to be used. So what’s my conclusion? Well, to take us back to
the beginning of the talk, acting in good faith requires
considering the social context that your tools are
deployed into. Things are not always neutral. They can alter the goals
that we, in fact, accomplish. Even tools for good can
be misused in the hands of the powerful or people
with a history of injustice. And finally, some things
are evil in themselves and shouldn’t be
created no matter what. So if you’re ever
in this position, will you ask: What's this
for? Who's going to use it? Who are they using it against?
What mission or goal is it going to accomplish? And is the design
of this tool fit for the job, now that we know the design of technology is
not merely neutral? The last thing that
I want to say is that engineers are
people who, by definition, want to make the
world a better place. So I’m not here to slap
hands and to tell you that engineers are morally
bad people or to walk, to storm the Google
headquarters and say, you know, you guys are kind of like Nazis if you haven’t thought
about that. I don’t think that would be
a productive conversation. But I do think that philosophers and other humanities scholars
have something to offer to this conversation,
namely that there are ways of making the world better that
engineers might have overlooked, and there are ways of
making the world worse that they might neglect. Philosophers are people who
have been thinking about how to make the world
better and worse for going on 2500 years now. We have some things
we’d like to contribute. But I do think that, ultimately,
the way forward has got to be a collaborative endeavor
between philosophers and people in the STEM professions,
especially when it comes to hugely controversial
issues like the one that we’ve just been
talking about here. Thanks. [ Applause ] We got about ten
minutes for questions; 12:50, right, is the –>>Yeah, yeah. We have plenty of time
for some questions.>>Everyone’s totally convinced. It’s the only reasonable
interpretation, yeah.>>As far as the can opener
example at the beginning, you said it takes away the right and leaves us only
with the left. Could it be argued also that
all it does is give us the left while leaving the option
to go for the right? You know, we still have
farm to table restaurants. We still have these
things that the tools that we have don’t take away. It just adds the possibility
to use [inaudible].>>Good. So I do want to say
something about this because, as a matter of fact,
I was thinking about this example
a little bit more. And there’s something
that I neglected to say, something that I should have
said, which is that I think that you’re exactly right. I think that your view
is really plausible. But here’s the worry that
I’m trying to get at. The worry is that
people might adopt that tool thinking it’s neutral
without realizing the kinds of ways that it changes
their goal. So if you adopt a can opener,
if you limited yourself, say (this would be an extreme case), only to canned food without
realizing the way that it changes the
food you take in, then you’d be making a mistake. You’d be making an
epistemic mistake, a mistake about your beliefs
and your justification for the beliefs that you have. So the real danger
when people think that technology is neutral is
that they’ll overlook the way that it changes these outcomes. Does that make sense?>>Yeah.>>Yeah.>>We also have to remember
the economic reality: if you are in a certain
economic status, canned food is really
your only option. You don’t have that
farm to table option because you can’t pay $50
a person for dinner, right. But you can pay 25 cents
for a can of beans, right. So that’s really what’s going — You’re going to force a
certain group of people into the can opener world. And sure, rich people
have an option. Or middle-class people
have an option [inaudible]. But that’s not the reality
for most of the people.>>Yeah. That’s a
better response. Yeah?>>Would you say that most
technologies are not neutral, or are there some that
could be considered neutral and don’t change the goal?>>This is a really
important question. And I’ve wanted to temper
my view, to moderate my view as I make this point
because without sitting back and getting a comprehensive
list of every technology and then thinking about it
carefully, I wouldn’t commit to the view that every
technology perverts or changes the goals
that you accomplish. I wouldn’t deign to say that. I think that the
technologies that are really — Well, I think a lot
of the technologies that are most influential
and widespread in society are not neutral. And I think that’s
why philosophers of technology are often pounding
their fist about this point. Look at something
like the automobile. Look at something like
computers or the internet. You could say similar
things about all of those. They drastically change
the texture of our society, the things that we’re able to do
or likely to do and the things that we’re unable to do
or the kinds of activities that become less common for us. So I think that there’s a reason
why philosophers have spent a lot of time talking
about those more popular, widespread technologies. The can opener example, I
like because it’s so mundane. And it’s the kind of thing
that people would overlook with the thought that it’s
totally innocuous or neutral. But yeah, I’d have to think
more about whether there's a technology
totally, purely neutral. But I’m sure they’re out there. It might even be the
majority of technologies. But the ones that are not
neutral are significant enough, widespread enough and have
important enough consequences that they deserve
a lot of attention. You have your hand up, right? You’re good.>>These autonomous
weapons are obviously going to cost a lot of money. It’s foreseeable
then that superpowers like the United States could
afford an autonomous army or weapons system that
then removes a human aspect from warfare. So I can foresee the United
States having a completely autonomous army fighting a
completely humanized army. And at that point,
our risk changes because we’re not actually
risking human lives, and yet we’re engaging
in warfare. I’m wondering about some of the
ethical implications of that. And does that just get tossed up
to the more advanced army wins, or it’s changing the face of
warfare and what that means?>>It’s definitely changing
the face of warfare. But you’ve seized on one
of the camps in this debate that has a very powerful, very
intuitively appealing argument, which is autonomous
weapons will reduce risks to American soldiers
or save American lives. And for that reason,
it’s obvious that it’s morally required
that we adopt them. Now this is why you find people
pushing the mala in se point because they say if the
weapon is intrinsically evil, then it does not matter whether
you could save lives by using it because it’s never
permissible to use, period. A lot of people —
This debate kind of parallels the debate
over nuclear weapons. So it’s easy to imagine a case
where using a nuclear weapon to end a war actually reduces
the total number of casualties from that point going
forward, right. But the responses, nuclear
weapons are mala in se. They’re evil in themselves. It’s never permissible to burn
to death tens of thousands of civilians in their
sleep regardless of what else is on the line. So you find this stark
divide between ethicists on exactly this question. And I think that the
military ethics profession and the international
humanitarian law community is going to have to answer some —
They’re going to have to grapple with some very significant
questions that go to the bedrock of
their discipline. The Red Cross, for example, or international humanitarian
lawyers, should they think that the law functions to
reduce the number of civilians who are killed in war, or
should we think that even if autonomous weapons
would protect civilians and would also reduce
the number of soldiers killed in war,
that there’s something so intrinsically
problematic about them that they’re still
wrong to use? Now this debate is
excruciatingly difficult, and it’s difficult because
it parallels a deeper debate in moral philosophy that’s
at least 250 years old. And we’re still working
out that one. And the implications of that
debate are going to have to inform the way that we
think about autonomous weapons. But these are the two
groups that we have. Should we think warfare is about
reducing the number of people who are killed overall,
or should we think that there are some
things that are so heinous, so disrespectful — In
the words of one author, they would turn warfare
into pest control, right, to your point about
changing the nature of war. They turn the enemies
into vermin, and they turn warfare
into pest control. There’s something so
deeply troubling about that, that they’re impermissible
even if they reduce the number of lives that are lost. That’s a hard question, but
that’s how the landscape looks. Time for one more maybe. Yeah?>>So you ended with kind of
like saying that humanities and like the engineering sides
should collaborate together to make these autonomous
weapons. But what does that mean kind of? Like is that sort
of giving engineers like a philosophy
class requirement before they get the degree or what?>>I think it means
at least that. I think it means at least
requiring some sort of, maybe not a philosophy
class, but an ethics class. I should say an ethics class
in a philosophy department or taught by someone with
credentials in philosophy because philosophers
think very differently about these questions
than someone outside of the humanities would. But I was even asking for something more
profound than that. And I think that there’s a
model elsewhere in ethics that gives us a really
inspiring model to look at. And that’s the model
of bioethics. So in I think the ’60s and
’70s, doctors became alarmed at the kinds of studies
that they were running, the kinds of studies that were
being published and the things that doctors were doing with or
without their patient’s consent. And you find in the early ’70s
philosophers poking their heads into the medical
profession and saying, look, we’ve been thinking
about questions of ethics for thousands of years and
you’re doing some things that are very seriously wrong. Questions about euthanasia, for
example, questions about consent or do not resuscitate orders
or something like that. And in the ’70s, you literally
find ethicists shadowing doctors in hospitals, following
them on their daily rounds, understanding the kinds
of problems that they face and then helping them
think about those problems or helping them work
through them. And out of that, we had
a tremendous blossoming of knowledge that created
the bioethics discipline very recently. It’s a huge success story. And I would love to see
something like that take place in engineering ethics. I’d love to see literally
a collaboration between philosophers
and engineers where you have them working
together on a daily basis so that the two of them can
understand their worlds better. There’s a famous
lecture delivered in the early ’50s
called the two cultures. And the idea was the stem
people lived in one society, the philosophers and the
humanitarians live in the other, and they don’t talk
to each enough. We have the same problem
today, 66 years later after those lectures were given. And I’d love to see us start
bridging that gap proactively.>>Sounds like that is
a perfect place to stop. So thank you very much.>>Yeah. Thank you [applause]. Thanks. [ Music ]
