A woman in Washington, DC, views a manipulated video that changes what is said by President Donald Trump and former president Barack Obama, illustrating how "deepfake" technology can deceive viewers.

Look at this video of comedian Bill Hader impersonating former California governor Arnold Schwarzenegger.

It’s not real. This video is a “deepfake.”

Here’s a definition, from The Verge:

… one baseline characteristic is that some part of the editing process is automated using AI techniques, usually deep learning. This is significant, not only because it reflects the fact that deepfakes are new, but that they’re also easy. A big part of the danger of the technology is that, unlike older photo and video editing techniques, it will be more widely accessible to people without great technical skill.

Now, the stakes are fairly low for the Hader video.

But we pay attention when Facebook CEO Mark Zuckerberg and Speaker of the House Nancy Pelosi make comments or issue statements. Both Zuckerberg and Pelosi were recently the subjects of deepfake videos. In Pelosi’s, her speech was altered to make her appear intoxicated.

This is the conundrum of deepfake videos and photos, in which bad actors modify existing pieces of media to fabricate comments or scenarios that never actually happened.

And they’re good. Deepfakes can be incredibly hard to spot.

Here’s more from The Washington Post about the latest developments on the creation of deepfakes.

Powerful new AI software has effectively democratized the creation of convincing “deepfake” videos, making it easier than ever to fabricate someone appearing to say or do something they didn’t really do, from harmless satires and film tweaks to targeted harassment and deepfake porn.

And researchers fear it’s only a matter of time before the videos are deployed for maximum damage — to sow confusion, fuel doubt or undermine an opponent, potentially on the eve of a White House vote.

“We are outgunned,” said Hany Farid, a computer-science professor and digital-forensics expert at the University of California at Berkeley. “The number of people working on the video-synthesis side, as opposed to the detector side, is 100 to 1.”

What happens when viewers have to judge for themselves whether such statements are real, even when there appears to be video evidence?

Guests

  • Danielle Citron, Professor of Law, Boston University School of Law; author of "Hate Crimes in Cyberspace"; @daniellecitron
  • Jack Clark, Policy Director, OpenAI; helps run the AI Index, an initiative from the Stanford One Hundred Year Study on AI to track and analyze AI progress; @jackclarkSF
  • Rachel Thomas, co-founder, Fast.Ai; professor, University of San Francisco Data Institute; @math_rachel
