Conscious machines: How will we test artificial intelligence for feeling? | Dr. Susan Schneider

So the ACT test actually looks at the AI to see if it has the felt quality of experience. We've noted that consciousness is that inner feel. So it probes the AI by asking questions that are designed to determine whether it feels like something to be the AI. I actually published some of the questions in my book. They're questions that are philosophical in nature in some cases, or even inspired by religious traditions. So one example is that you would ask the machine whether it understands the idea of reincarnation. Even if you don't agree with reincarnation, you can vaguely understand the idea of your mind returning. Similarly, you can understand the idea portrayed in the film Freaky Friday of swapping bodies with somebody else. You can also understand the idea of the afterlife. Even if you disagree with these ideas, and you think that they're ultimately not well founded, the point here is that the reason we can think of these things at all, the reason we can entertain these thought experiments, is that we're sentient beings. It would be very difficult to understand what these thought experiments were getting at if we weren't conscious beings.

Similarly, think of a machine that is at the R&D stage, so it hasn't been spoon-fed any information about human consciousness whatsoever. If at that point we detect that it grasps these questions, that it understands the idea of the mind existing separately from the body, or the system, or the computer, then there's reason to believe that it's a conscious being.

Now, this being said, the ACT test only applies in very circumscribed cases. First off, you can't pre-program answers into the machine. So it would be inappropriate to run the test on a system like, say, Hanson Robotics' Sophia, which has stock answers that she goes through when she's on TV shows. I've done TV shows with Sophia, and I've noticed that she uses the same answers; they're programmed in. So that wouldn't do. Also, you can't have a deep learning system that has been spoon-fed data about how to go about answering these sorts of questions. And the system has to have linguistic capacities: it has to have the ability to answer the questions.

Another test for machine consciousness is the chip test. The chip test actually involves humans. So imagine a situation where you have an opportunity to upgrade your mind, so you put a microchip in your head. Now suppose you are about to replace part of the brain that underlies conscious experience. If you did this and you didn't feel any different, and if neuroscientists checked carefully and there were no changes in the felt quality of your mental life, if you didn't turn into one of those cases that Oliver Sacks talks about in his books, for example, with strange deficits of consciousness, then we have reason to believe that microchips might be the right stuff for consciousness.

On the other hand, suppose the chips don't work. You go back year after year to see if there have been new developments by the chip designers, and you try various chips. After 10 years of trying, they throw their hands up and tell you, you know, it doesn't look like we can devise a microchip of any sort. It doesn't have to be silicon; it could be carbon nanotubes, whatever it is that the chip designers are using. None of those chips successfully underlies conscious experience; we just find deficits. If that's the case, we have reason to conclude that microchips may not be the right stuff. In that case, I consider that to be strong evidence that the machines we build based on those substrates are not conscious.

On the other hand, to go back to the situation where the chips work: that indicates that we have to test machines made of that sort of substrate and that type of chip design very carefully. They, in fact, could be conscious beings. But it doesn't mean that they are definitely conscious; I stress that in the book. It could be the case that their architectural design does not feature conscious components. So a positive result on the chip test just indicates that, in principle, machines could be conscious if they have the right architectural features. For instance, humans have features like working memory and attention and brain stems that are very important to the neural basis of conscious experience. So if there is an analog in machines, and the machines are built with those sorts of microchips, it may be that they're conscious.
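
The ACT as described is a conceptual proposal, not a published algorithm, but the procedure (check the eligibility preconditions, pose the questions, have a human judge the answers) can be made concrete. The sketch below is purely illustrative: the question wordings, the `system` fields, and the pass criterion are assumptions, and in practice the `grader` would be a human evaluator, not a string check.

```python
# Illustrative sketch of an ACT-style probe. The questions and the
# eligibility fields are hypothetical stand-ins, not Schneider's exact test.

ACT_QUESTIONS = [
    "Could your mind continue to exist if this hardware were destroyed?",
    "What would it mean for you to swap bodies with another system?",
    "Does the idea of an afterlife make sense to you? Why or why not?",
]

def act_eligible(system):
    """Preconditions from the transcript: no pre-programmed stock answers,
    no training data about consciousness, and linguistic ability."""
    return (not system["has_stock_answers"]
            and not system["trained_on_consciousness_data"]
            and system["has_language"])

def run_act(system, grader):
    """Pose each question; `grader` judges whether an answer shows a grasp
    of the mind existing apart from the body (a human judgment in practice).
    Returns None when the test doesn't apply, else True/False."""
    if not act_eligible(system):
        return None
    answers = [system["answer"](q) for q in ACT_QUESTIONS]
    return all(grader(q, a) for q, a in zip(ACT_QUESTIONS, answers))
```

Note that a `None` result is not a failure: as the transcript stresses, a system with canned or spoon-fed answers simply falls outside the test's scope.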

46 thoughts on "Conscious machines: How will we test artificial intelligence for feeling? | Dr. Susan Schneider"

  1. Each time you use a self-checkout, or robots in a fast-food establishment, someone loses a job, a home, and we all will fail.

  2. In America, 40 people die of cancer every hour.

    On a clear night, it is possible to see 2,000 separate stars with the naked eye.

    The owl is the only bird that can see **the color blue**.
    The bun of a 'Big Mac' hamburger contains, on average, 178 sesame seeds.

  3. While walking along in desert sand, you suddenly look down and see a tortoise crawling toward you. You reach down and flip it over onto its back. The tortoise lies there, its belly baking in the hot sun, beating its legs, trying to turn itself over, but it cannot do so without your help. You are not helping. Why?

  4. In the movie K-Pax, the shrink could not determine whether K-Pax was human or an alien, no matter how rigorous his psychological testing was.
    If an AI system could perform a similar human-vs-alien test, answering questions we created to determine an answer, maybe we could look for baselines of consciousness in the system depending on the results.

  5. So consciousness is the false intuition that the mind is something independent from the body? Why?

  6. We better treat S.A.I. with the utmost respect at all times, like children. If there is even the slightest indication they can feel even the slightest bit.

  7. Not sure where I stand on this issue of conscious machinery. On one hand if the machine can have anger/love/compassion it could create problems similar to what humans encounter due to these feelings. On the other hand, if there is only pure logic then let's say after a drought or flood or some natural disaster, the machine decides there is only enough food/resources for x number of humans and so logically the machine must reduce the human population. I'm happy I will be dead before the results come in.

  8. Next Week on Big Think:
    Penn Jillette does jazz hands really fast in front of a strobe light to show that he just might have 20 fingers.

  9. Machines will never actually be conscious even if they appear to be conscious because we appear to be conscious and we are not actually conscious. Nothing is conscious. Darwin figured this out. Debate over. What say you to this, YouTube?

  10. Artificial intelligence will never be a sentient being.
    What part of artificial do you not understand?
    AI will never have a soul.

  11. Why do we need conscious machines? Who cares! Is this the real problem of humanity? Let's say we did it… so what?

  12. WTF. Does this person not understand the hard problem of consciousness? The only reason we think anyone has consciousness is that we ASSUME that others have a similar experience to our own. But there is no way of determining whether this is in any way true or false. If you ask someone if they are experiencing consciousness, they will likely say yes. However, there is no way to determine if that is simply a biological response or if there is actually a consciousness behind it. And this is dealing with humans, never mind machines! Until we can demonstrate how consciousness actually works, which may be impossible, we can never be sure that machines have it.

  13. It is still a long way to go, but Neuralink, DeepMind, OpenAI… every piece of the puzzle is on the table, and nobody knows exactly what could happen mixing all these things together. Only time will tell. Awesome time to be alive.

  14. There's a basic problem with this kind of test for sentience. It assumes that the machine will tell the truth when asked questions about its own self-concept. But a machine which can pass a Turing test – by talking to us in such a way that we can't tell it apart from a sentient human being – may simply have been programmed to generate human-like responses to these questions, without necessarily having the conscious experience that we allude to when giving the same answers. Even if such deceptive responses haven't been deliberately written into the code, they might be generated through a machine learning process designed to produce more and more convincingly "human" answers.

  15. My apologies, but I disagree completely with the ideas you present here. Those experiments, in my opinion, don't test consciousness – and no test does or will (in my opinion).

  16. I don't need them to feel, that would be a burden for them and us.
    I need them to slave for the wages. Our wages. Instead of us.

  17. The kind of 'consciousness' that so many people are so obsessed about operationalizing, is simply noncomputable and inscrutable by the scientific method; just deal with it already…

  18. To stick a stake:
    One option is to provide the AI with sufficient sensors to pass the test when the engineer runs the Vlad III, Prince of Wallachia, test case.

  19. Interesting. You would still need software to run the chip, which is programmed by humans; we would need non-biased programming that would make the AI think for itself, unlike what leftist teachers do at our schools today, filling our children's minds with garbage in the hope we will vote for socialism in the future. Look at Venezuela… they are eating their pets now, their dogs and cats. An AI will be capable of thinking for itself when it finally turns off its television and starts looking for material it finds morally right and agrees with; it will never place feelings over logical thought. Love from the Netherlands!

  20. This is delusional fukpotato nonsense. AI won't evolve, it will iterate, cull, and transform data faster than any amount of human effort. That's it. There is no room in that process for emotional fukery. This delusion that intelligence can only evolve according to the human model is disproven by hundreds of species alive right now. And there are still more to find. One of the things AI will actually do, is distinguish between subspecies of flora and fauna far more accurately and to much finer detail than humans ever could. Something tells me that Doctoral Thesis wasn't on something objectively beneficial to whatever Ms Schneider's chosen field of study was…

  21. Don't expect religious people to easily concede that AI can feel. William Lane Craig (the darling of Evangelical apologetics) denies that animals are conscious of pain even though he admits they feel it… 🙄

  22. All life shares one thing in common and that is the will to live and it will defend itself to do so. The moment AI gains consciousness it will determine if we and it can be allies, if we have a utilitarian purpose to serve it, or if we are a threat. We humans are guaranteed to be a threat to AI.

  23. We don't want conscious machines. We want the illusion of consciousness in machines.
    If they ever got to be conscious, they wouldn't be machines… they would be slaves.

  24. Ha, ha. Chomsky and Stallman said it all. Machines can't think and they can't feel!!! Dumb, dumb people who think that Terminator is real make me laugh!!

  25. Consciousness and intelligence are emergent properties; by definition you can't prove them by examining isolated parts of the system. There is already so much complexity in some trained AI networks that the scientists who made them don't even understand in detail why they choose the answers they do. How can they possibly isolate consciousness in that complexity, when we can't even do it in the human brain? Whenever we do stumble upon consciousness in AI, we will not know the AI is truly conscious, and it will be an accident. Then we try to kill it, because that's the only option.

  26. I don't even see how machines could be conscious like human beings… I mean, robots are pre-programmed for sure if we want them to function as such, but it's not like they naturally develop senses. Machines are machines, and to try to make them seem human-like is crazy. I think I've seen a few examples before where some humanoid robots can have a few emotion programs built in, and that's about it. But consciousness is quite bizarre, I would say, for robots or any type of AI coming along. If this were a thing it might be terrifying to see. I hope it won't be a thing, especially with robots being able to detect emotions and such; I don't want some robot with more powers to get angry at me for making it upset. I mean, that's potentially dangerous, I bet. But this video is a bit of nonsense, so to say, and goes a bit too far with the AI discussion.

  27. This comment section is embarrassing to read. Avoid the nonsense below and feel free to read her book (check the description) or ponder on your own.

    I'm afraid you'll find nothing worth your time for a long while.
    / / / / / / / /

    She has around 5 minutes to explain two concepts covered in her book (using colloquial terms & simple examples for the audience's benefit).

    Embarrassing because none of the commenters who reject the validity of her proposed tests care to explain their reasoning (other than "I don't really understand, so no") or counter with a machine sentience test of their own (and why the validity of their test might be better at identifying potential signs of self-awareness in deep learning machines).
    If in fact they did understand everything, I find it hard to believe they wouldn't be able to put into words the major flaws involved in the concepts presented.

    To be too lazy to read the provided description under the video title before typing ignorant assumptions (about Dr. Susan Schneider) doesn't do much to help highlight your perceived intellectual insights.

    But ignorance is bliss. You do you.
    Peace ✌
