Tuesday, September 3, 2019

What is deepfake? Should you be worried?


Deepfake content is creating chaos among people who grew up with the idea that seeing is believing. Photos and videos, once considered undeniable evidence that something happened, are now being questioned by the masses. That skepticism is understandable after finding out that the amazingly realistic video of Barack Obama calling President Trump a "total and complete dip****" is nothing but a deepfake creation.

Deepfake content is creating chaos among people who grew up with the idea that seeing is believing.

Edgar Cervantes

The tech involved is raising eyebrows and generating questions, which is why we are here to give you the full deepfake rundown: what it is, how it works, and whether there is really a reason to worry.


What is deepfake?


Deepfake is an AI (artificial intelligence) technique that uses machine learning to create or manipulate content. It is often used to make montages or superimpose one person's face onto another's, but its capabilities extend far beyond that. The technology has plenty of other applications, including manipulating or creating sound, movement, landscapes, animals, and more.


How does deepfake work?


Deepfake content is created through a machine learning technique known as a GAN (generative adversarial network). A GAN uses two neural networks, a generator and a discriminator, which constantly compete against each other.

The generator tries to create a realistic image, while the discriminator attempts to determine whether it is fake. If the generator fools the discriminator, the discriminator uses the information it gathered to become a better judge. Likewise, if the discriminator correctly flags the generator's image as fake, the generator gets better at producing convincing fakes. This cycle can continue until the resulting image, video, or audio is no longer noticeably fake to the human eye.
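To make that loop concrete, here is a minimal sketch of the generator-vs-discriminator training cycle in PyTorch. The toy 64-value "images", the network sizes, and the optimizer settings are illustrative assumptions rather than the setup of any real deepfake tool, but the structure (train the discriminator to separate real from fake, then train the generator to fool it) is the core GAN idea.

```python
# Minimal GAN training loop sketch (toy data, illustrative settings only).
import torch
import torch.nn as nn

LATENT_DIM, IMG_DIM, BATCH = 16, 64, 32

# Generator: turns random noise into a fake "image" vector.
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 128), nn.ReLU(),
    nn.Linear(128, IMG_DIM), nn.Tanh(),
)

# Discriminator: outputs the probability that its input is real.
discriminator = nn.Sequential(
    nn.Linear(IMG_DIM, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(1000):
    # Stand-in for a batch of real images (here just random values in [-1, 1]).
    real = torch.rand(BATCH, IMG_DIM) * 2 - 1
    fake = generator(torch.randn(BATCH, LATENT_DIM))

    # 1) Train the discriminator to tell real from fake.
    d_opt.zero_grad()
    d_loss = loss_fn(discriminator(real), torch.ones(BATCH, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(BATCH, 1))
    d_loss.backward()
    d_opt.step()

    # 2) Train the generator to fool the discriminator
    #    (i.e., make it label fakes as real).
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(BATCH, 1))
    g_loss.backward()
    g_opt.step()
```

Real deepfake software swaps the toy vectors for face images and much deeper networks, but the back-and-forth between the two networks is the same.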



The origins

The first deepfake videos were obviously porn!

Edgar Cervantes

The first deepfake videos were obviously porn! More specifically, it was common to see celebrity faces superimposed onto porn actresses' bodies. Nicolas Cage memes were also popular, among other fun inventions.

The word deepfake became synonymous with the technique in 2017, thanks to a Reddit user who went by the name "deepfakes." The user was joined by others at the now-banned r/deepfakes subreddit, where they shared their creations with the world.

The actual creator of generative adversarial networks is Ian Goodfellow, who introduced the concept in 2014 along with his colleagues at the University of Montreal. He then moved on to work for Google and is currently employed by Apple.


The dangers


In the wrong hands, deepfake creation can be used for falsifying much more than silly Nicolas Cage memes.

Edgar Cervantes

While content manipulation is nothing new, it used to require serious skill. You needed a powerful computer and a really good reason (or just too much free time) to make fake content. Deepfake creation software like FakeApp is free, easy to find, and doesn't require much computing power. And because it does most of the work on its own, you don't need to be a skilled editor to make convincingly real deepfake media.

This is why the general public, celebrities, political entities, and governments are worried about the deepfake movement. In the wrong hands, deepfake creation can be used for falsifying much more than silly Nicolas Cage memes. Imagine someone fabricating fake news or incriminating video evidence. Add fake and revenge porn to the mix, and things can get messy very quickly.

Another reason to worry about deepfake content is that public figures could deny real past actions. Because deepfake videos look so convincing, anyone could claim a genuine clip is a deepfake.

Also read: The complexities of ethics and AI


Finding a solution


While deepfakes can look very close to the real thing, a trained eye can still spot a fake video by paying close attention. The concern is that at some point in the future we may not be able to tell the difference.

Twitter, Pornhub, Reddit, and others have unsuccessfully tried to get rid of such content. On the more official side, DARPA (Defense Advanced Research Projects Agency) is working with research institutions and the University of Colorado to create a way to spot deepfakes.

The best we can do to combat deepfake videos is to be more observant and less gullible.

Edgar Cervantes

Until we have software that can reliably spot irregularities in such videos, the best we can do is be more observant and less gullible. Do your research before assuming a video, image, or audio clip is real. This is something we should have been doing already, anyway.
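For context, one common research direction (not necessarily the approach DARPA and its partners are taking) is to train an ordinary image classifier on video frames labeled real or fake and let it learn the telltale irregularities. Below is a minimal, hypothetical sketch in PyTorch; the "frames/real" and "frames/fake" folder layout and the choice of a ResNet-18 backbone are assumptions for illustration only.

```python
# Sketch of a frame-level real-vs-fake classifier (hypothetical data layout).
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Assumed layout: frames/real/*.jpg and frames/fake/*.jpg (hypothetical paths).
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
dataset = datasets.ImageFolder("frames", transform=preprocess)
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

# Reuse a standard image backbone and replace its head with a 2-class output.
model = models.resnet18()
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:  # labels come from the folder names: 0 = fake, 1 = real
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```

The catch, of course, is that the same adversarial training that produces deepfakes can also be used to evade detectors like this one, which is why it remains an arms race.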


Popular deepfake content

Jordan Peele joined BuzzFeed to put together this video, which is meant to raise awareness.

Online manipulation expert Claire Wardle shows us how realistic these videos can look by appearing as Adele for the first 30 seconds of this New York Times video, which offers great insight into the topic.

WatchMojo has curated a list of some of the most popular deepfake videos around. It is a fun video with plenty of great examples.


Are you worried about the consequences of deepfake videos? It is certainly a topic we should keep in mind, while also working on finding solutions.



from Android Authority https://ift.tt/2ZIosJ3
via IFTTT
