Deepfakes: What happens when deep learning hits social media

When we talk about Deep Fakes, it's now not just putting a picture of somebody's face on a video. It could be recreating their voice ("these artificial intelligence guys have trapped me in a machine"). Now you can do whole bodies dancing, so I could be a ballerina… I don't know why you're laughing, I am a born athlete! So Deep Fakes is actually a group of techniques. Originally, when Deep Fakes first came about in early January last year, it was one particular architecture, one specific type of neural network, developed by a guy on Reddit. "This was truly surprising for me." So at the time, when we said Deep Fakes, that meant just one type of neural network,
but he open sourced his architecture and his
code, so anyone could use the technique to create their own synthesized videos. So here's an extremely simplified idea of what a neural network means. We first have to think about traditional coding. In its most boiled-down form, code is
simple conditional instructions: if it's cold, wear a hat; if it isn't, don't. Programs are thousands upon thousands of these instructions, and instructions
within instructions. This works for manageable environments, but when you have petabytes of information to analyze every second, like the big tech companies do, it becomes impossible to handle. So deep learning uses the idea of neurons in the brain: each neuron models a very simple function, but when you add them
all together in complex structures, what you're able to do is model really
difficult things. In the area of computer vision, it has been completely revolutionary. Neural networks make it possible to tackle problems humans can't code for. A network processes information through layers of smaller filters called neurons; these neurons are all interconnected, with information passing from one layer to the next. And what's happening in those
individual neurons? We don't actually know. To explain, let's take an example. Say you need a network to recognize images of Nicolas Cage. You give it thousands of correct and false images and tell it which is which; the network then attempts to find connections between what makes wrong images wrong and right images right. In a GAN you have two networks training
against each other: one generates the image and the other acts as the critic. So let's return to our Cage example, where we now have a network that can recognize photos of him. We then take a second network, give it random inputs, and tell it to output images. The original 'critic' network will then assess these images and determine whether or not they are Nicolas Cage. What's important is that the critic is telling the generator what aspects were missing or false, and how it can improve. What they worked out very quickly, or rather the
open source community, was that you can get much more realistic images using this GAN training. Originally you'd need lots of images to be fed into the neural network to create a realistic video. Now the team at Samsung has been able to adapt the technique to produce a realistic video from just a single image, and that was released about a month ago. You've got pictures of the Mona Lisa coming to life out of the frame. Here at Cambridge Consultants
we do not only product development but also technology strategy, so different clients come to us and ask how this technology is going to impact their
market and, as you know, there's a lot of talk at the moment around social media
platforms. Deep Fakes is increasingly an important issue; we are seeing a lot more of them on the internet. What does it feel like to take on the role and power of a tech giant? What does it feel like to have access to seemingly infinite knowledge and the data of others, people you will never meet or never know? And what would you be willing to do with that data? In many ways, the price we have to pay to learn from the gods of Silicon Valley is our privacy. The first time I encountered Deep Fakes was maybe last year. There was some interesting development work going on on the Reddit forums, and then a few of these forms of AI-synthesized video filtered through into popular culture. I was becoming increasingly aware, even back then, that this new form of
computational propaganda could be used for political purposes. "Let me tell you a secret. You ever wonder why I'm so popular? Is it because of my big brain? Maybe. But seriously, it's all about two things, okay? Algorithms and data." We chose celebrities from three different areas. We chose influencers from the world of art, as this is a contemporary art installation; we chose influencers from the field of politics, so we have famous politicians on there; and then we also wanted to look at influencers from
the tech community as well. "Imagine this for a second: one man with total control of billions of people's stolen data, all their secrets, their lives, their futures. I owe it all to Spectre. Spectre showed me that whoever controls the data controls the future." So we have Mark Zuckerberg talking about power, control of data and lack of privacy. We have other celebrity influencers and famous artists, like Freddie Mercury and Marina Abramovic, talking about the
philosophical issues around data, technology and power. So they're situated within the much broader conversation that these AI Deep Fakes form part of within the Spectre installation. Well, it's how it will affect content moderation. Because these fakes are becoming more advanced, companies want to know what's going to change with Deep Fakes, and they want to know what can be done
hypothetically to moderate it. So iProov's objective is to create trust by
verifying and securing people's identities. We do that using face verification. The real threat now comes from this modern technology, Deep Fake technology. We use the screen of the device to illuminate the user's face in a rapidly changing sequence of colours, and from there we look at how the different coloured lights reflect off the person's face. That gives us an enormous amount of information as to whether we're looking at a real, physically present,
three-dimensional, human-face-shaped object or something else, including a piece of synthetic video. It's precisely our use of the flashing
screen and the controlled illumination that helps us defend users against these future attacks from Deep Fake videos. The project we're working on with the Home Office is part of the EU Settled Status app: 750,000 people have applied for Settled Status, and the application process involves them iProoving themselves to ensure that a genuine person is making the application. We've been tested very, very rigorously by quite a large number of organizations. For example, before the US Department of Homeland Security gave us the contract, they tested every mobile face verification system on the market in the world, and we were the only one they were unable to break. So, once again, AI is actually very good for building classifiers to try and detect Deep Fakes, and the identifying features of Deep Fakes are changing over time; they're getting
better and better. A year ago it was quite obvious what
a Deep Fake was, because perhaps it wasn't properly blended across the edge
of a face, whereas with the most recent ones… there are a few telltale signs, but
it's getting increasingly hard even for an expert to identify them. What's going on is that teams are researching how to identify Deep Fakes: they train a neural network on Deep Fakes and true videos, so the classifier learns to correctly separate the synthesized videos from the true ones. It trains the AI to be able to detect false videos, and there's a lot of work going on with that at the moment. I believe most of the content platforms are researching this area. They also use third-party fact checkers to judge whether content is real or fake. It's really interesting, as I had a call with
a spokesperson from Facebook's communications team, and they told me that their third-party fact checkers had marked it as art and labelled it as satire. But even though it was marked as art and satire, and as a result there should have been no downranking or removal of the posts from feeds, because it wasn't flagged as misinformation, it was still not visible in search feeds. The post only became visible five days after all of the press interest. So it seems that there has been a removal of these posts from public feeds, a suppression, which contrasts with what their policy says and what their third-party fact checkers are saying. It's a hard balance, especially
online. We have to allow political satire, and some of these Deep Fakes fall into
this category; it's an aspect of free speech that is important. But where is the line between satire and inappropriately representing a politician in a way that could, say, alter the course of the US primaries? I think those nuances are going to become increasingly hard to make a decision on. These spaces, these surfaces like Facebook, Instagram and YouTube, present themselves as public spaces where the principles and ideals around free expression and creativity are enshrined and protected. But actually, when you start to dig deeper, what's become apparent in regards to the way our artworks have been treated is that their algorithms are
tweaked and adjusted, accounts can be marked and posts can be flagged, even if they don't contravene their policies, and we really have no way
of knowing. It has been really interesting to see their response: the way their algorithms pick the work up, and then also the way their algorithms try to subdue it.
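The generator-versus-critic loop described in the transcript can be sketched in a few lines. This is a toy illustration, not any real deepfake system: the "images" are just numbers drawn from a Gaussian around 4.0, the generator is a single parameter, and the critic is a one-variable logistic regression. All names, learning rates and distributions are invented for the example.

```python
import math
import random

# Toy 1-D GAN. "Real data" is numbers near 4.0; the generator outputs
# g_mu + noise and is trained only by feedback from the critic, which is
# a logistic-regression discriminator D(x) = sigmoid(w*x + b).

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

def train_gan(steps: int = 4000, seed: int = 0):
    rng = random.Random(seed)
    g_mu = 0.0          # generator parameter: mean of its output distribution
    w, b = 0.0, 0.0     # critic parameters
    lr = 0.05

    for _ in range(steps):
        real = rng.gauss(4.0, 0.5)          # sample from the true distribution
        fake = g_mu + rng.gauss(0.0, 0.5)   # generator's attempt

        # Critic step: push D(real) towards 1 and D(fake) towards 0.
        d_real, d_fake = sigmoid(w * real + b), sigmoid(w * fake + b)
        w += lr * ((1.0 - d_real) * real - d_fake * fake)
        b += lr * ((1.0 - d_real) - d_fake)

        # Generator step: the critic's feedback tells the generator which
        # direction makes its output look more "real" (gradient of log D(fake);
        # d(fake)/d(g_mu) = 1, so the gradient is (1 - D(fake)) * w).
        d_fake = sigmoid(w * fake + b)
        g_mu += lr * (1.0 - d_fake) * w

    return g_mu, w, b

if __name__ == "__main__":
    g_mu, _, _ = train_gan()
    print(f"generator mean after training: {g_mu:.2f} (real data is centred on 4.0)")
```

After training, the generator's mean has drifted towards the real data's mean, despite never seeing the real samples directly; it only ever saw the critic's verdicts, which is the point the transcript makes about the critic "telling the generator how it can improve".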
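The detection approach described earlier, training a classifier on labelled true and synthesized videos, can also be sketched. Instead of video frames, each example here is two invented scalar features (an edge-blending score and a texture score); the feature names, distributions and thresholds are all hypothetical, chosen only so the two classes are separable.

```python
import math
import random

# Toy deepfake detector: a logistic-regression classifier trained on
# labelled examples, 0 = real video, 1 = fake video. The two features per
# example are made up for illustration (e.g. blending and texture artefacts).

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

def make_dataset(n: int, rng: random.Random):
    data = []
    for _ in range(n):
        if rng.random() < 0.5:  # real video: low artefact scores
            data.append(((rng.gauss(0.3, 0.15), rng.gauss(0.4, 0.15)), 0))
        else:                   # fake video: higher artefact scores
            data.append(((rng.gauss(0.7, 0.15), rng.gauss(0.8, 0.15)), 1))
    return data

def train_detector(epochs: int = 200, seed: int = 1):
    rng = random.Random(seed)
    train = make_dataset(400, rng)
    w, b = [0.0, 0.0], 0.0
    lr = 0.5
    for _ in range(epochs):
        for (x1, x2), label in train:
            p = sigmoid(w[0] * x1 + w[1] * x2 + b)
            err = label - p   # gradient of the log-likelihood
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def accuracy(w, b, data):
    correct = sum(
        1 for (x1, x2), label in data
        if (sigmoid(w[0] * x1 + w[1] * x2 + b) > 0.5) == (label == 1)
    )
    return correct / len(data)

if __name__ == "__main__":
    w, b = train_detector()
    held_out = make_dataset(200, random.Random(99))
    print(f"held-out accuracy: {accuracy(w, b, held_out):.2f}")
```

The transcript's caveat applies directly to this sketch: the detector is only as good as the artefacts it was trained on, so as the tell-tale features of fakes change over time, the training set, and the classifier, have to be refreshed.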
