The New Rules of Robot/Human Society | Off Book | PBS Digital Studios


[MUSIC PLAYING]

Ethics applies in any context in which you introduce robots, and it pervades really everything we do. Robots are entering into a lot more areas of our lives than ever before and taking on different social roles. I love robots. I want to build robots. But I think we can't be naive about the possible harms and repercussions that we don't even expect, because that will affect real people's lives.

[MUSIC PLAYING]

I think we have a very powerfully ambivalent relationship to robotics. And what I mean by that is, on the one hand, we're drawn to them very strongly. They're very compelling and captivating as objects. But they also can be very scary in many cases.

[GUNSHOTS]
There's a host of concerns about robotic systems. Robots really represent the ability to have roaming, roving cameras that are capturing all kinds of information. All of the outdoor space, public space, places that were not observed, are going to be observed. And that data is all going to be captured and recorded. And we really don't have any kind of laws and regulations about how that can be utilized.
The other thing that makes these very different is the fact that they move around. These are physically engaged with the world, and that means they can do a lot of harm in the physical world. Amazon is talking about these quadrotors or octorotors that are going to be delivering packages to your front door. My first concern is, well, that's eight lawn mower blades coming down for a landing in front of your house, especially if it's carrying a box of books.

Then you have a whole host of legal issues around who's responsible when these things do some harm. There's the person who maybe owns the robot. There's the person who programs it, or maybe tells it what to do. The idea of who really shares in these kinds of responsibilities becomes very critical, especially when we think about technological innovation. Because if a company that manufactures a robot holds all of the liability, that's a really big impediment to bringing these technologies to market. But if we shift the law so that they don't have any liability, or we greatly limit the kind of liability that they can have, we could wind up with a lot of really dangerous robots. So we really have to find a balance between these.

[MUSIC PLAYING]
Asimov's three laws of robotics are pretty straightforward. A robot shouldn't do any harm. A robot should obey a human being. And a robot should, in effect, engage in self-preservation. But if you actually read Asimov's stories, in nearly every story something goes awry, largely because of these laws themselves. So what happens if two humans give a robot conflicting orders? What should the robot do then? He illustrated how a simple rule-based morality does not work.
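To make that failure concrete, here is a minimal, purely illustrative sketch of a fixed, priority-ordered rule system loosely modeled on the three laws. The scenario, the encoding, and every name in it are assumptions for illustration, not anything described in the video. When two humans issue mutually exclusive orders, each rule ranks both options equally, so the cascade cannot decide:

    # A hypothetical, priority-ordered rule system loosely modeled on
    # Asimov's three laws. All names and the scenario are illustrative.
    def choose_action(candidate_actions):
        """Filter candidate actions through the rules in priority order."""
        # Law 1: discard any action that would harm a human.
        safe = [a for a in candidate_actions if not a["harms_human"]]
        # Law 2: among safe actions, keep those obeying a human order.
        obeying = [a for a in safe if a["ordered_by"] is not None] or safe
        # Law 3: prefer actions that preserve the robot itself.
        return [a for a in obeying if a["self_preserving"]] or obeying

    # Two humans issue mutually exclusive orders about the same door.
    actions = [
        {"name": "open the door", "harms_human": False,
         "ordered_by": "human_A", "self_preserving": True},
        {"name": "keep the door shut", "harms_human": False,
         "ordered_by": "human_B", "self_preserving": True},
    ]

    print([a["name"] for a in choose_action(actions)])
    # -> ['open the door', 'keep the door shut']
    # Both actions survive every rule, so the rule base alone cannot
    # resolve the conflict -- exactly the deadlock Asimov dramatized.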
[CRASHING NOISE]

Robots are going to encounter some hair-raising scenarios. Say a self-driving car is on a narrow bridge and a school bus is headed toward it. What does it do? Does it crash into the bus? Or does it drive off the bridge, taking you and the other passengers in the car to immediate death?

In ancient times, we used to believe that being moral was to transcend all your emotional responses and come up with the perfect analysis. But actually, an awful lot more comes into play: our ability to read the emotions of others, our consciousness, our understanding of habits and rituals and the meaning of different gestures. It's not clear that we know how to get that kind of understanding or appreciation into those systems. So will their analytical tools be satisfactory? They may win the game of Jeopardy. And the danger of that is that it will make us liable to attribute levels or kinds of intelligence to them that they do not have, and may lead to situations where we become increasingly reliant on them managing tools that they won't really know how to manage when that idiosyncratic and truly dangerous situation arises.
The revolution in military robotics has really been the widespread adoption and use of unmanned aerial systems by the US military. As these things make mistakes, we don't really know who's necessarily controlling them. If they've been programmed, again, who's responsible? So it becomes much easier to distance yourself from the responsibility. And I think in the case of autonomous systems, it's a really big question. Because if these things accidentally kill people, or do something which, if it was done by a human, we would consider a war crime, and now it's being done by a machine, is that just a technological error? Or is it a war crime? In legal terms, that's really about the intention to commit a war crime. Otherwise, it's just sort of a mistake, which is very different than in product liability. If you make a mistake with product liability, there's a lawsuit, and the company still has to pay even though they didn't intend the harm. But in war, that's not the case.
I think robots are not a threat in and of themselves. I think we have to worry about how they're used, and we have to design them and build them very responsibly. But I think we can't be naive about the possible harms and repercussions, the ways that they may be used that we don't approve of, or the ways that we don't even expect. If we relieve ourselves of responsibility for our actions because we can point to the robot and say, well, it's not my responsibility, that's a kind of abdication of responsibility. It's a threat to who we are as humans and how we develop as a society.

What interests me is how humans are starting to interact with robots in a way that views robots not so much as objects but as lifelike things, and the ethical questions that come along with that.

Will you be my friend?

Sure thing, Martin.
Anthropomorphism is our tendency to project human-like qualities onto animals, or life-like qualities onto objects. And the interesting thing about robots, particularly social robots, is that just the fact of having something moving around in our physical space that we can't quite anticipate lends itself to this projection. We start to name these objects, or give them a gender, or ascribe intent or states of mind to them. We'll even feel bad for them if they get stuck under the couch. And so we start to perceive these objects differently than we do other objects, like toasters.

This can actually get pretty extreme. I did a workshop where my friend [INAUDIBLE] and I gave people little robotic dinosaurs and had them play with them. And then we asked them to torture and kill them. And they had a lot of trouble doing it. They basically refused to even hit the things. So anthropomorphism can go a long way in how we're willing to treat things.
Even with the primitive social robots we have now, there are YouTube videos of robots being tortured or treated violently. And the comments underneath these videos are very polarized; some people already get very upset by this type of behavior. We're not at the stage yet in robotic development where robots could actually experience anything near to how we imagine our own pain experience. So the question is not, do the robots actually feel it when you hurt them? The question is more, what do we feel when we hurt them? One could imagine a child, especially, who doesn't really understand the difference between a cat and a robotic object becoming either traumatized by or desensitized to this type of behavior. If our research shows that violent behavior and hurting robots translates into treating other humans in a violent way, or into turning off parts of us that are empathetic, then yes, we probably should prevent people from hurting robots.
If we draw a red line right now, that doesn't mean that we will never see Terminator-like creatures. But it means we're trying to design the appropriate ethical considerations into today's systems.

What's going to matter more and more is what we project onto robots, how we interact with them, and what roles they play in society. And that's going to drive what moral status we give them, essentially. And I think that's why it's important for us collectively to also think about who we are as humans and how we develop as a society.

[MUSIC PLAYING]

40 thoughts on "The New Rules of Robot/Human Society | Off Book | PBS Digital Studios"

  1. Very well said by all in the video. The subject is constantly changing and developing, but overall it does come down to how we as humans take responsibility for something that does not even know the definition of the word.

  2. Robotics is a really exciting field and I really do want autonomous cars, but the issue is exactly what's highlighted in this video. In I, Robot, a robot saves a man instead of a little girl because he had a higher survival rate. Medical staff have to make the same call when triaging multiple injured people at a scene. Perhaps you could program in something that would have let the man opt out so that the girl could have lived. Also, the "Time" remix at the end, nice.

  3. Reminds me of Terry Pratchett's Feet of Clay.

    'I Suggest You Take Me And Smash Me And Grind The Bits Into Fragments And Pound The Fragments Into Powder And Mill Them Again To The Finest Dust There Can Be, And I Believe You Will Not Find A Single Atom Of Life-'
    'True! Let's do it!'
    'However, In Order To Test This Fully, One Of You Must Volunteer To Undergo The Same Process.'
    There was silence.
    'That's not fair,' said a priest, after a while. 'All anyone has to do is bake up your dust again and you'll be alive…'
    There was more silence.

  4. I think the question we should ask is: how do we regard the destruction of property? Most sane, productive folks see property damage as unproductive and immature. There are already court cases of damage to robots where the robots are defined as mere property and the offenders charged accordingly. Many groups see damaging property as an extension of their freedom to express themselves, and outlawing these acts simply creates more criminals. With drones and tactical robots, the burden of proof rests not on one person but on many; whole sectors of society are complicit in the taking of human lives. It is human behavior that must progress. Forcing people to be egalitarian and altruistic is a zero-sum game.

  5. I think it says a lot about the subject at hand that I had a hard time watching people smash what are essentially toys with a sledgehammer. 

  6. This kind of mushy fantasy is often produced by romantic geeks and teens who have yet to graduate into the real world.

    They think they are at the forefront, or that somebody in their culture cares about these "rules of robotics".

    Meanwhile, robots were weaponized some 60 years ago. What is a self-guided missile if not a robot? Sorry to burst your bubble, but the military-industrial complex has you.

  7. In the '90s, computers were for nerds; then, starting around 2004, they became cool thanks to social networks. The same is going to happen with robots between 2020 and 2035, so the market will explode.

  8. It's interesting how all these ethical implications seem so simple to us, yet to robots they mean nothing but a few lines of code. How will it all translate in the future?
    Great video!

  9. God dammit, PBS. I grew up watching you, you "brand-name" mofo. But to this day, even after the analogue-to-digital, non-live transformation, you make the BEST content available. Period. PBS Space Time, the game show, Idea Channel, "Off" something... Jesus, whoever works in the quality control sector of the company, it's amazing! You never fail to deliver real content that matters, without compromise. And you don't fall by the wayside of everyday boring documentary-style delivery of your info. It's always engaging and comprehensive.

    Damn, PBS. Back at it again with the socially self-aware yet still cool-as-fuck teaching.
