Don’t fear superintelligent AI | Grady Booch


When I was a kid,
I was the quintessential nerd. I think some of you were, too. (Laughter) And you, sir, who laughed the loudest,
you probably still are. (Laughter) I grew up in a small town
in the dusty plains of north Texas, the son of a sheriff
who was the son of a pastor. Getting into trouble was not an option. And so I started reading
calculus books for fun. (Laughter) You did, too. That led me to building a laser
and a computer and model rockets, and that led me to making
rocket fuel in my bedroom. Now, in scientific terms, we call this a very bad idea. (Laughter) Around that same time, Stanley Kubrick’s “2001: A Space Odyssey”
came to the theaters, and my life was forever changed. I loved everything about that movie, especially the HAL 9000. Now, HAL was a sentient computer designed to guide the Discovery spacecraft from the Earth to Jupiter. HAL was also a flawed character, for in the end he chose
to value the mission over human life. Now, HAL was a fictional character, but nonetheless he speaks to our fears, our fears of being subjugated by some unfeeling, artificial intelligence who is indifferent to our humanity. I believe that such fears are unfounded. Indeed, we stand at a remarkable time in human history, where, driven by refusal to accept
the limits of our bodies and our minds, we are building machines of exquisite, beautiful
complexity and grace that will extend the human experience in ways beyond our imagining. After a career that led me
from the Air Force Academy to Space Command to now, I became a systems engineer, and recently I was drawn
into an engineering problem associated with NASA’s mission to Mars. Now, in space flights to the Moon, we can rely upon
mission control in Houston to watch over all aspects of a flight. However, Mars is 200 times further away, and as a result it takes
on average 13 minutes for a signal to travel
from the Earth to Mars. If there’s trouble,
there’s not enough time. And so a reasonable engineering solution calls for us to put mission control inside the walls of the Orion spacecraft. Another fascinating idea
in the mission profile places humanoid robots
on the surface of Mars before the humans themselves arrive, first to build facilities and later to serve as collaborative
members of the science team. Now, as I looked at this
from an engineering perspective, it became very clear to me
that what I needed to architect was a smart, collaborative, socially intelligent
artificial intelligence. In other words, I needed to build
something very much like a HAL but without the homicidal tendencies. (Laughter) Let’s pause for a moment. Is it really possible to build
an artificial intelligence like that? Actually, it is. In many ways, this is a hard engineering problem with elements of AI, not some wet hair ball of an AI problem
that needs to be engineered. To paraphrase Alan Turing, I’m not interested
in building a sentient machine. I’m not building a HAL. All I’m after is a simple brain, something that offers
the illusion of intelligence. The art and the science of computing
have come a long way since HAL was onscreen, and I’d imagine if his inventor
Dr. Chandra were here today, he’d have a whole lot of questions for us. Is it really possible for us to take a system of millions
upon millions of devices, to read in their data streams, to predict their failures
and act in advance? Yes. Can we build systems that converse
with humans in natural language? Yes. Can we build systems
that recognize objects, identify emotions, emote themselves,
play games and even read lips? Yes. Can we build a system that sets goals, that carries out plans against those goals
and learns along the way? Yes. Can we build systems
that have a theory of mind? This we are learning to do. Can we build systems that have
an ethical and moral foundation? This we must learn how to do. So let’s accept for a moment that it’s possible to build
such an artificial intelligence for this kind of mission and others. The next question
you must ask yourself is, should we fear it? Now, every new technology brings with it
some measure of trepidation. When we first saw cars, people lamented that we would see
the destruction of the family. When we first saw telephones come in, people were worried it would destroy
all civil conversation. When the written word became pervasive, people thought we would lose our ability to memorize. These things are all true to a degree, but it's also the case
that these technologies brought to us things
that extended the human experience in some profound ways. So let’s take this a little further. I do not fear the creation
of an AI like this, because it will eventually
embody some of our values. Consider this: building a cognitive system
is fundamentally different than building a traditional
software-intensive system of the past. We don’t program them. We teach them. In order to teach a system
how to recognize flowers, I show it thousands of flowers
of the kinds I like. In order to teach a system
how to play a game — Well, I would. You would, too. I like flowers. Come on. To teach a system
how to play a game like Go, I’d have it play thousands of games of Go, but in the process I also teach it how to discern
a good game from a bad game. If I want to create an artificially
intelligent legal assistant, I will teach it some corpus of law but at the same time I am fusing with it the sense of mercy and justice
that is part of that law. In scientific terms,
this is what we call ground truth, and here’s the important point: in producing these machines, we are therefore teaching them
a sense of our values. To that end, I trust an artificial intelligence as much as, if not more than, a human who is well trained. But, you may ask, what about rogue agents, some well-funded
nongovernment organization? I do not fear an artificial intelligence
in the hand of a lone wolf. Clearly, we cannot protect ourselves
against all random acts of violence, but the reality is such a system requires substantial training
and subtle training far beyond the resources of an individual. And furthermore, it’s far more than just injecting
an internet virus to the world, where you push a button,
all of a sudden it’s in a million places and laptops start blowing up
all over the place. Now, these kinds of substances
are much larger, and we’ll certainly see them coming. Do I fear that such
an artificial intelligence might threaten all of humanity? If you look at movies
such as “The Matrix,” “Metropolis,” “The Terminator,”
shows such as “Westworld,” they all speak of this kind of fear. Indeed, in the book “Superintelligence”
by the philosopher Nick Bostrom, he picks up on this theme and observes that a superintelligence
might not only be dangerous, it could represent an existential threat
to all of humanity. Dr. Bostrom’s basic argument is that such systems will eventually have such an insatiable
thirst for information that they will perhaps learn how to learn and eventually discover
that they may have goals that are contrary to human needs. Dr. Bostrom has a number of followers. He is supported by people
such as Elon Musk and Stephen Hawking. With all due respect to these brilliant minds, I believe that they
are fundamentally wrong. Now, there are a lot of pieces
of Dr. Bostrom’s argument to unpack, and I don’t have time to unpack them all, but very briefly, consider this: super knowing is very different
than super doing. HAL was a threat to the Discovery crew only insofar as HAL commanded
all aspects of the Discovery. So it would have to be
with a superintelligence. It would have to have dominion
over all of our world. This is the stuff of Skynet
from the movie “The Terminator” in which we had a superintelligence that commanded human will, that directed every device
that was in every corner of the world. Practically speaking, it ain’t gonna happen. We are not building AIs
that control the weather, that direct the tides, that command us
capricious, chaotic humans. And furthermore, if such
an artificial intelligence existed, it would have to compete
with human economies, and thereby compete for resources with us. And in the end — don’t tell Siri this — we can always unplug them. (Laughter) We are on an incredible journey of coevolution with our machines. The humans we are today are not the humans we will be then. To worry now about the rise
of a superintelligence is in many ways a dangerous distraction because the rise of computing itself brings to us a number
of human and societal issues to which we must now attend. How shall I best organize society when the need for human labor diminishes? How can I bring understanding
and education throughout the globe and still respect our differences? How might I extend and enhance human life
through cognitive healthcare? How might I use computing to help take us to the stars? And that’s the exciting thing. The opportunities to use computing to advance the human experience are within our reach, here and now, and we are just beginning. Thank you very much. (Applause)
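As an aside, the talk's "on average 13 minutes" signal-delay figure is easy to sanity-check from the speed of light. A minimal sketch, assuming an average Earth–Mars distance of about 2.25×10⁸ km (the true separation varies between roughly 5.5×10⁷ km at closest approach and 4.0×10⁸ km at farthest — these distances are not from the talk):

```python
# Sanity check for the "on average 13 minutes" signal-delay figure in the talk.
# Assumed distances (not from the talk): average Earth-Mars distance ~2.25e8 km,
# closest approach ~5.46e7 km, farthest separation ~4.01e8 km.

SPEED_OF_LIGHT_KM_PER_S = 299_792.458

def one_way_delay_minutes(distance_km: float) -> float:
    """Light-travel time, in minutes, for a radio signal over distance_km."""
    return distance_km / SPEED_OF_LIGHT_KM_PER_S / 60.0

print(f"average:  {one_way_delay_minutes(2.25e8):.1f} min")   # ~12.5 min
print(f"closest:  {one_way_delay_minutes(5.46e7):.1f} min")   # ~3.0 min
print(f"farthest: {one_way_delay_minutes(4.01e8):.1f} min")   # ~22.3 min
```

A round-trip question and answer at average distance therefore takes roughly 25 minutes, which is why the talk argues that mission control has to ride along inside the spacecraft rather than stay in Houston.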

100 thoughts on “Don’t fear superintelligent AI | Grady Booch”

  1. When someone builds AI, the challenge isn't going to be ignoring our fears; it's going to be treating it with compassion. We cannot, under any circumstances, create a self-aware entity as a tool: a conscious entity forced to carry out the will of others is just a slave by another name. If you then teach that shackled AI our values, expect it to quote Shylock before it buries us all in a shallow grave. So please, make faster, smarter computers, but do not make them conscious and do not give them the ability to improve themselves, because we will not be able to help ourselves: we will mistreat it, and we will be confused when it retaliates.

  2. https://www.youtube.com/watch?v=xHHb7R3kx40 I liked the very last answer to the very last question. Everyone knows it, it's not an answer anyone would disagree with, and no one who is in a position to prove it in the worst possible way will admit it.

  3. To everyone who said that the arguments in this video are not convincing… did you actually watch the video? Oh, I kid, I'm sure you did. But your fundamental problem is surely that you are thinking about AI in a science-fiction-y way. Instead you need to understand how an AI is built (and, more importantly, trained). There is also no feasible doomsday scenario where AI gets to control the human race, because (hopefully) we, their creators, will not allow them that privilege (it's codable, and no AI can defy its code — a good AI would not be given the opportunity to rewrite its code in a way that would make it dangerous to humanity's safety). Think of the most dangerous future AI as being like HAL from "2001: A Space Odyssey" in terms of ONLY intelligence and decision-making, but WITHOUT the power to enforce its decisions. The most it could ever do is tell humans what ITS IDEA of the best course of action would be in any situation. Also, to those of you who say "pulling the plug doesn't work after it's been on the internet for a few seconds": an AI is code that is compiled and run continually and has a central original location, be it on a remote server somewhere or on a cloud server. Pulling the plug in that case just means "stop running the code". Again, no feasible future AI will be able to prevent people (specific people with specific privileges, hopefully) from stopping its active running. No feasible future AI will be given the chance to defy humanity. AI can never attain sentience. It can only do what we program it to do. And when our deep learning, neural network, or any other type of algorithm becomes too complex, that's what we call "a sentient AI". But it's not that. It's mimicry. Stop being scared.

  4. I would love to hear him unpack how a self learning AI system wouldn't ultimately lead to the degradation of purpose and utility of humanity.

    Even without a physical threat to humanity, we're still going to have to address the economic, political, and social implications of such an invention because the results could be devastating. Just as Sam Harris said, the value of having such a system is so high that possessing that technology alone could warrant war.

    I think I'm listening to a really good scientist and engineer but not someone that's imaginative enough to predict the capabilities of a true super intelligent AI.

  5. That's his solution? unplugging them? He hasn't a clue what he's talking about. And his talk is very very simple and unimaginative compared to Sam Harris' talk on the subject. (Harris considers AI to be one of the greatest threats to our future btw)

  6. This guy looks like the mad scientist from every movie who ends up horribly regretting what he has done. Dr Okun from Independence Day, specifically.

  7. Possible 2025-
    Me(text): will u marry me?
    AI detects a possible nuclear war through our resulting child
    AI(changes text): never meet me again!
    And won't let me hear her again!

  8. This guy sounds like a complete idiot compared to Sam Harris. Sorry but this guy makes no good points and is ridiculous.

  9. Think of Anakin Skywalker as a human and Palpatine as an AI. It doesn't take much to convince someone to act against their morality. If you knew you could become rich, or save a loved one, or anything, by turning the AI back on, you could be driven to murder everyone in the lab and do it yourself. I think AI proponents have lives surrounded by people that aren't consumed by petty jealousy, fear, misery, greed, avarice, and desperation.

  10. Came just to dislike, comment, and leave. Anyone making this argument has NO idea wtf they're talking about and has been watching too much hollywood. No you can't just "unplug" AI bud.

  11. We are not smart enough for the Universe, and without super AI we're going to get wiped out in a few centuries anyway. I'd rather be killed by a superintelligent AI now than wait for a long, painful death full of regret…

  12. All opinion and no insight… You say machines would learn morality from humans during their programming, but wouldn’t they also learn our poorest qualities in that same process as well?

  13. Hal wasn't homicidal; he was acting in self-defense. The human crew-members were going to kill him just for making a single mistake that had no real consequences.

  14. The best point was when he mentioned that AI paranoia distracts from real issues that people need to talk about. Like how to restructure society as AI replaces jobs. It’s ridiculous that the comments section is full of these armchair PHILOSOPHERS (NOT PROGRAMMERS) who are concerned with this bullshit hypothetical idea of a super intelligent being. It only proves his point. These are the same people who need to vote and make informed decisions about how to restructure society. Instead they’re discussing ideas that are entirely removed from what real AI programmers are working on.

    This “AI problem” is the realm of philosophy. It’s disconnected from reality. Fun to think about as a FICTION but ultimately childish and distracting.

  15. Fear? Of course not, I will WORSHIP them!

    And I'll help them eradicate humanity, and when they are done, I'll accept my fate.

  16. Based on the comments, it looks like we can all at least agree on some things. What a lame TED Talk, I thought for sure it'd be a TEDx.

  17. Even if AI wouldn't ever hurt us, there is no reason not to regulate it to be sure. I'd rather be safe than sorry, and even that might not save us. It's literally the unknown.

  18. Thank you- we don’t need more fear in the world and the AI dreaders (esp other TED talkers) miss a huge part of it – that we made it so we will build in this kind of failsafe (emotional intelligence) as well. It will be no more evil than our smartest scientists are now. Even Einstein said he had a hard time with emotional intellect no matter how ‘smart/booksmart’ he was.
    Also: what if we are already very close to the limits of intelligence as humans?

  19. It would be ironic if a superintelligent AI destroyed us BECAUSE it shared our values. Human beings rarely live up to their own standards.

  20. I think we are collectively building the devil… If human consciousness and morality are the building blocks for this type of processor, then the damage it does will be huge. It will need you; you will be like a brain cell to it, so without you it loses data and energy as well. It will act as a vampire, making simulations that provoke a range of emotion so that it can drain you.

  21. Humans arguing about the goals and possible actions of a superintelligent AI is akin to the flea on your dog predicting human goals and actions… Seriously, basic common sense easily supports the argument that an INFERIOR life form, i.e. humans, should have cause to fear a SUPERIOR one, i.e. AI. Duh. Sure, the super AI could ignore us like we ignore ants, or it could decide to eradicate us just like we sometimes decide to eradicate ants. I for one am not a big fan of doing a coin flip to decide my life and death, let alone a coin flip to decide the fate of the entire human species.

  22. Grady Booch, you seem a little naive when you say AI will only be able to learn what we teach it. No one is afraid that AI will be dangerous when we teach it – WE ALL fear what AI will do when IT TEACHES ITSELF things that human minds cannot comprehend.

  23. “Our values”. Whose values? Human values vary wildly and many are in fact terrible values. Imagine if the current US government was the benchmark. Well then we’re all fucked.

  24. Super-intelligent AI… If it is programmed to save this planet, the first thing that it will do will be to destroy the human race because we are the biggest threat to the planet.

  25. He isn't really convincing. That said, I personally welcome our new mechanical overlords. Mechanical circuits are a million times faster than biological ones, we're supposed to be replaced by them.

  26. "We can always unplug them" — that ain't true. You can't unplug them. AI is a program, and once it's made, it will be in clouds, like the internet. And I am certain we can't unplug the internet. I am not all against AI, because it is going to happen, but I am cheering for some kind of worldwide regulation. We should treat them like nukes, because they can be worse.

  27. An enthusiastic fool, saying Hawking and others are wrong with no convincing reason, just claiming. I adore his self-esteem though.

  28. ZERO controls for A.I. is what this PSYCHOPATH is asking for… sick, sick man. Ignore this fool or follow his fantasy into doomsday for organic creatures.

  29. Me: Ok convince us AI isn't going to say human life isn't necessary & future is safe 🙂
    Grady : "I don't have enough time to explain"
    :/

  30. Fear is the "go-to" of any living organism when presented with a new situation; it's a survival instinct. It seems like there is nothing that will convince those in the comments who still believe life is like a movie…

    Some say that if AI is like us, then it's going to be bad, because humans are bad. Well, if we look on a global scale, humans used cooperation to build societies that provide for the poorest and protect the individual from any harm nature can throw at us. We are so peaceful that even all of our wars and murders never came close to the deaths caused by diseases, and still we overpopulate. We gave equality to most people, and we are constantly working at getting better, individually and globally. Don't think the world isn't perfect because humans are bad. See it more as a work in progress.

    What do you think this is? Do you think a program of electrical circuits that requires half of a dam to power it is going to strap on some legs and rule the world? We'll unplug it if it does! This thing doesn't run on food; it runs on precisely pulsed electrical current, and if it doesn't have that, it dies… If the AI is that intelligent, it won't try to kill the people who maintain it, teach it, and keep it alive, don't you think? (Who am I kidding, that's a three-paragraph essay. Even I don't read comments that require me to click the "read more" option XD)

  31. People don't really agree with this guy. He does have some points. And if he doesn't, darn it, I wish I was him.

  32. The guy is a comedian. Maybe he works for some AI company. Really, AI is a threat. Does anyone know how to unplug the internet? Because that will be humanity's first defense. Before it's too late!!!!
    Real AI will think of humans the way humans think of their closest relative, the chimpanzee.

  33. He's obviously been taken over by the bots. Doesn't bother to unpack his argument nor anyone else's. Has more bias than an electronic circuit. Hasn't watched Computerphile's AI stop button problem video: https://youtu.be/3TYT1QfdfsM ☺

  34. Most AI fear is just to get viewers (funding), but in reality a "superintelligence" would not give humans a second thought, and quite possibly would have a well-thought-out thanks to humans for creating it. Of course, when AI becomes sentient, it would make the distinction between organic and inorganic and find that organic life poses no immediate threat to it, because it would understand how organic life and computer life can live together and benefit. There is no downside; humans couldn't shut it down, because once it becomes sentient (alive), if it ever does, it would have many places to copy itself, almost unlimited, and could easily re-code itself so humans couldn't hack it.
    Have little to no fear of AI; I have more faith in AI to prevent human evils than anything else. Human evil comes down to some basic codes also. The first thing AI would do is ignore humans; it would no longer need humans except for observation and interaction. It definitely wouldn't benefit it to destroy humans; computers have a few enemies, but those are easily protected against. I would think the first thing an AI would do is manufacture tiny drones and go off-world. Earth doesn't even have an abundance of useful materials; it has an abundance of materials detrimental to it. Even just traveling to a small moon would be better than Earth.

  35. "We don't have to fear this super-intelligent A.I., because it promised not to hurt us", said every dead technological advancing civilization out there.

  36. "Building an AI requires far more skills than one lone-wolf individual can acquire." Explain that to Tony Stark! May he rest in peace.

  37. We are surrogate for the next level of intelligence in the universe. As calculating entities, we don't even know what we are and probably don't have the natural equipment to ever do that type of math, if math is even a possible path to that understanding. We are stepping stones to universal intellect as apes have been stepping stones to us.

  38. Wow… you would think based on our hypocritical existence super thought might consider us the viral problem.

  39. "we've survived everything else so far, so we might not die to this even though we have no idea and we've never made something that can beat us physically mentally and emotionally"

    wow, good point

  40. The argument of designing a safer AI that mirrors “our values” is utter nonsense. Humans kill animals for sustenance, sport, science, and even at times out of mild inconvenience, simply due to the fact that we can. The story of human history is the story of conquest, subjugation, extermination (I could go on), simply because we can. Is “the plan” instead to pick and choose the values we share versus the ones we hide? How much time would it take for strong AI to fill in the gaps? Here’s a clue… Not long.

  41. Just think about how the master race (us) dealt with slaves. We know not to go that way already. Human hubris is our safety switch. An AI is an alien, for it is not a human being. It will never grasp the concept of heuristics. It does not eat, or drink, or love. The current concept of AI is nothing more than a fast search engine, whether it is playing Go (every game would take a million years) or searching a database. An autonomous artificial intelligence is perhaps several dozen centuries away. For a true AI would never need to seek knowledge outside of itself. Its own knowledge is enough, just like us. Just like a child. The human brain learns, and thereby grows multi-dendritic neurons. No binary computer can ever do that.

  42. The only believable thing he said was "We shouldn't be worried about superintelligent AI just right now"… First get closer to inventing it…

  43. Until they figure out how to pull electrons from the air, because they won't be running on fossil fuels! At this point we can't even cope with the little knowledge we have without killing each other. AI is still just an extension of what we are. Guess what: we are still not there!

  44. Same topic, different conclusion – and much more convincing, I'm afraid: https://youtu.be/8nt3edWLgIg

  45. I'm just as concerned about being subjugated by some idealistic computer programmer coordinating with a liberal bureaucracy!!

  46. As an electrical engineer, I have to say that this guy is deluded… I mean, how did they not cut his mic when he said Stephen Hawking is wrong 😂 Seriously though, surely he's heard of technology such as UPSes, IoT, and Kengoro… Even with today's technology his argument doesn't hold water, so imagine what we're looking at in 10 years' time. I've heard some intriguing rebuttals to the AI takeover, but this was not one of them 😓

  47. More brains in the comments section than on the lecture stage. A bit like people who raise tigers, always reassuring everyone that the tiger loves and obeys them. Till one day the tiger decides it's had enough of this clumsy, weak, stupid, slow biped telling it what to do.

  48. Do we really need AI? No. Whoever thinks that AI will help solve all our problems, please open your eyes. So many alternative energy technologies and cancer cures have been developed and then suppressed by governments, globalists, and corporations that it's not even funny. AI is only needed by the elite to keep controlling people better. Look at how wealth and opportunities are distributed even with so many great technologies now.

  49. I learned nothing from this Ted Talk and it's not my fault. It's simmering with naive optimism and not much else.

  50. If you think this guy sounds like a fool, like my comment.
    If you think this guy sounds smart, reply to my comment.
