What are deepfakes?

Remember the old saying “seeing is believing”? Well, those days are gone.

The rise of artificial intelligence (AI) is now making it difficult to tell whether certain videos are fake or the real deal.

Take this video that purportedly shows President Barack Obama giving an address that includes some controversial opinions and even profanity. Partway through the clip, a split screen reveals those aren’t the words of the former president, but rather of actor (and Obama impersonator) Jordan Peele.

In the video, Peele imitates Obama’s voice while manipulated footage makes it appear that Obama is the one speaking. It’s what’s become known as a “deepfake” – a video that looks and sounds real but isn’t. The video of Obama was purposefully created by Peele and BuzzFeed to serve as a warning about deepfake technology.

The idea of President Obama cursing during an address might give a chuckle or two, but it’s also raising alarms that more dire consequences could be on the horizon if the technology falls into the wrong hands.

What are deepfakes?

The term “deepfake” has only been around for a couple of years. It’s a portmanteau of “deep learning” (a subset of AI that uses neural networks) and “fake.”

The videos are created with AI software that uses machine learning algorithms to combine and superimpose existing images and video onto source content. PCMag explains: “Deepfakes are part of a category of AI called ‘Generative Adversarial Networks’ or GANs, in which two neural networks compete to create photographs or videos that appear real.”

The result is a video of real people saying or doing fake things. Consider it the next era of fake news.
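For readers curious what “two competing networks” looks like in practice, here is a minimal, illustrative sketch of a GAN training loop in PyTorch. It trains a generator to fake samples from a simple number distribution rather than faces – real deepfake systems are vastly larger – but the adversarial loop is the same idea.

```python
# Minimal GAN sketch: a generator learns to mimic a simple target
# distribution while a discriminator learns to tell real samples
# from forgeries. Illustrative only, not a deepfake pipeline.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))  # generator
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))  # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0   # "real" data: samples near 3.0
    fake = G(torch.randn(64, 8))            # the generator's forgeries

    # Discriminator step: label real samples 1, generated samples 0.
    d_loss = loss_fn(D(real), torch.ones(64, 1)) + \
             loss_fn(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: try to make the discriminator call fakes real.
    g_loss = loss_fn(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

print(G(torch.randn(5, 8)).detach())  # samples should now cluster near 3.0
```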

Initially, deepfakes were glitchy and easily detected. The quality has improved significantly over time, and it’s clear that the videos will keep getting better as technology advances.

What’s more, it’s now easy (and affordable) for anyone to download software to create convincing fake videos. That has many concerned that believable deepfakes will become more widespread, making it harder to distinguish what’s real.

And if the technology’s used maliciously, it could sway elections or spark social unrest, panic, or even riots. A fabricated terrorist threat or political scandal released right before election night could be convincingly portrayed in a deepfake. The potential damage this technology could unleash is alarming.

Shallow fakes

Deepfakes aren’t the only form of manipulated media we have to contend with now. “Shallow fakes” are another alarming trend.

While deepfakes are created with sophisticated algorithms, shallow fakes (also known as “cheapfakes”) require just basic video editing. So anyone with a smartphone can create a shallow fake video.

And though deepfakes use advanced technology to put words in someone’s mouth (literally), shallow fakes manipulate actual recordings, blurring the line between what’s real and what’s fake.

Take, for example, the recent video of House Speaker Nancy Pelosi that was altered to appear as though she was slurring her words.

But there wasn’t any fancy technology involved in producing the doctored clip of Pelosi: the footage was simply slowed down, meaning minimal modification achieved a potentially damaging effect.
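To illustrate just how minimal: slowing a clip to 75% speed is a one-step operation with the free ffmpeg tool. The sketch below (file names are placeholders) is the entire “edit” such a shallow fake requires.

```python
# Slowing a clip to ~75% speed: the whole "manipulation" behind a
# shallow fake of this kind. Input/output file names are placeholders.
import subprocess

subprocess.run([
    "ffmpeg", "-i", "input.mp4",
    "-filter_complex",
    # Stretch video timestamps and slow the audio tempo to match.
    "[0:v]setpts=PTS/0.75[v];[0:a]atempo=0.75[a]",
    "-map", "[v]", "-map", "[a]",
    "output_slowed.mp4",
], check=True)
```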

The video was shared widely on Facebook, which refused to take it down because it didn’t violate company policy (though the video eventually disappeared from the platform). The situation raised questions about accountability and the dissemination of false information.

The targets of deepfakes

While it’s mainly celebrities and politicians so far who’ve been the targets of manipulated videos, broader use of the technology could lead to people outside of the public eye becoming victims, particularly as a form of revenge.

That means businesses aren’t immune to being featured in deepfakes, which could affect (or even ruin) a company’s reputation.

Research has found that 78% of consumers think misinformation damages a brand’s reputation.

During a recent hearing of the House Intelligence Committee, Danielle Citron, a law professor at the University of Maryland, warned of the severe threat that deepfakes pose to companies.

“Imagine the night before a company’s Initial Public Offering, a deepfake video appears showing the CEO committing a crime. If the deepfake video is shared widely, the company’s stock prices may falter and a tremendous amount of money may be lost,” she told committee members. “Of course, the video could be debunked in a few days, but by that time the damage has already been done.”

Regulating fabricated media

With the 2020 US election drawing closer, it’s no surprise that deepfakes are worrying Congress.

The House Intelligence Committee recently held a hearing to analyze the security risks of deepfake technology. The committee gathered a panel of experts to prepare a strategy to guide deepfake restrictions from either the government or social media platforms.

“Now is the time for social media companies to put in place policies to protect users from this kind of misinformation – not in 2021, after viral deepfakes have polluted the 2020 elections. By then it will be too late,” said Rep. Adam Schiff (D-CA).

Judging by reports after the hearing, changes to Section 230 of the Communications Decency Act may be on the way. The law currently stipulates that social media companies aren’t liable for content posted on their platforms. Discussions at the hearing involved potentially amending the legislation to give platforms more incentive to intercept fake content.

A few pieces of legislation aimed at combating fake videos have already been introduced. Sen. Ben Sasse (R-NE) has introduced legislation that would make it unlawful for people to “maliciously” create and distribute fake videos. Rep. Yvette Clarke (D-NY) has also introduced a bill that would require the creators of deepfakes to disclose fabricated content using permanent digital watermarks.

Can anything combat deepfakes?

Preventing deepfakes from being created isn’t possible (at least for now), but that doesn’t mean combating them is a total lost cause.

Numerous companies and researchers have already started developing technology that detects forged media files. In fact, machine learning, the very technology behind deepfakes, is what many companies are using to help detect manipulated media.

For example, researchers at the University of Surrey in the UK have developed technology that verifies whether footage is genuine. Their system, Archangel, uses blockchain and machine learning to store tamper-proof digital fingerprints of video files, which can later be used to confirm a video’s authenticity.
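Archangel’s actual fingerprints are designed to survive re-encoding, which a plain file hash cannot do, but the basic tamper-evidence idea can be sketched simply: record a cryptographic fingerprint when a video is published, and any later change to the file breaks the match. The registry below is a hypothetical stand-in for a blockchain ledger.

```python
# Toy tamper-evidence check via cryptographic fingerprints.
# A real system like Archangel uses ML-derived fingerprints that
# survive re-encoding; a plain SHA-256 of the bytes does not.
import hashlib

def fingerprint(path: str) -> str:
    """Return the SHA-256 hex digest of a file's contents."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

registry: dict[str, str] = {}           # stand-in for a blockchain ledger

def register(path: str) -> None:
    registry[path] = fingerprint(path)  # record at publication time

def is_untampered(path: str) -> bool:
    return registry.get(path) == fingerprint(path)
```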

And the AI Foundation has developed Reality Defender, a program that runs in the background of other applications to identify potential fake media.

Another detection method revolves around a simple involuntary reflex we all share: blinking. On average, healthy adults blink once every two to 10 seconds. But because the AI behind deepfakes depends on images of people available online – and because photographs of people with their eyes shut are rarely published – fake videos are less likely to capture faces that “blink normally.” That means people portrayed in deepfakes blink far less often than real people do.

According to researchers at the State University of New York at Albany, the method of examining blinking has a 95% detection rate.
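The blink heuristic itself is straightforward to sketch. Assuming a face-landmark detector has already produced a per-frame “eye openness” score, counting dips below a threshold yields a blink rate that can be compared against the two-to-10-second human norm. The threshold and flagging rule below are illustrative guesses, not the Albany researchers’ actual method.

```python
# Toy blink-rate heuristic: count dips in per-frame eye-openness
# scores (assumed to come from a face-landmark detector) and flag
# clips whose blink rate falls far below the human norm.

def count_blinks(openness: list[float], threshold: float = 0.2) -> int:
    """Count transitions from open (>= threshold) to closed (< threshold)."""
    blinks, eyes_open = 0, True
    for score in openness:
        if eyes_open and score < threshold:
            blinks += 1
            eyes_open = False
        elif score >= threshold:
            eyes_open = True
    return blinks

def looks_suspicious(openness: list[float], fps: float = 30.0) -> bool:
    seconds = len(openness) / fps
    rate = count_blinks(openness) / seconds  # blinks per second
    return rate < 1 / 15                     # well under one blink per 10 s

# Example: a 10-second clip (300 frames) with one blink near frame 150.
scores = [0.35] * 300
scores[148:152] = [0.1, 0.05, 0.05, 0.1]
print(count_blinks(scores), looks_suspicious(scores))  # 1 False
```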

Conclusion

Even without specialized software, there are things you can look for to spot potential deepfakes and avoid spreading misinformation.

According to Jonathan Hui, a writer who specializes in deep learning, once a video is slowed down, the following signs can help indicate whether the media is fake (a rough automated check for the first cue follows the list):

  • Noticeable blurring in the face but nowhere else in the video
  • Change of skin tone by the edge of the face
  • Double eyebrows or double edges of the face
  • A face that is blurry or flickers when partially blocked by an object
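Some of these cues can even be checked programmatically. Here’s a rough sketch of the first one, using OpenCV’s bundled face detector to compare sharpness inside the face region against the whole frame; the 0.5 ratio threshold is an illustrative guess, not a published standard.

```python
# Heuristic check for the first cue above: a face noticeably blurrier
# than its surroundings. Uses OpenCV's bundled Haar face detector.
import cv2

def sharpness(gray_img) -> float:
    """Variance of the Laplacian: higher means sharper."""
    return cv2.Laplacian(gray_img, cv2.CV_64F).var()

def face_blur_ratio(frame) -> float | None:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, 1.1, 5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    return sharpness(gray[y:y + h, x:x + w]) / (sharpness(gray) + 1e-9)

frame = cv2.imread("frame.png")  # placeholder: one frame exported from a video
if frame is not None:
    ratio = face_blur_ratio(frame)
    if ratio is not None and ratio < 0.5:  # illustrative threshold
        print("Face region is far blurrier than the frame: suspicious.")
```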

The bottom line is that deepfakes have forever changed the way we need to approach video content. It’s unfortunate, but it’s our new reality.

That means we can’t automatically trust a video we see online. And that hypervigilance may end up being our best defense.

* This blog provides general information and discussion about global business payments and related subjects. The content provided in this blog (“Content”) should not be construed as and is not intended to constitute financial, legal or tax advice. You should seek the advice of professionals prior to acting upon any information contained in the Content. All Content is provided strictly “as is” and we make no warranty or representation of any kind regarding the Content.