Look at this video of comedian Bill Hader impersonating former California governor Arnold Schwarzenegger.
It’s not real. This video is a “deepfake.”
Here’s a definition, from The Verge:
… one baseline characteristic is that some part of the editing process is automated using AI techniques, usually deep learning. This is significant, not only because it reflects the fact that deepfakes are new, but that they’re also easy. A big part of the danger of the technology is that, unlike older photo and video editing techniques, it will be more widely accessible to people without great technical skill.
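To make the quoted idea concrete, here is a toy sketch of the shared-encoder, two-decoder architecture often described for face-swap deepfakes: one encoder learns a common representation of faces, and each person gets their own decoder, so feeding person A's face through person B's decoder produces the swap. This is a conceptual illustration only, with untrained random weights and made-up sizes, not a working deepfake pipeline.

```python
import numpy as np

# Conceptual sketch (NOT a real deepfake tool): the face-swap setup widely
# described in reporting on deepfakes uses ONE shared encoder and TWO
# decoders, one per person. All dimensions and weights here are toy values.

rng = np.random.default_rng(0)

FACE_DIM = 64 * 64   # a flattened toy "face" image
LATENT_DIM = 32      # shared latent representation

# Randomly initialized matrices stand in for trained neural networks.
W_enc = rng.standard_normal((LATENT_DIM, FACE_DIM)) * 0.01
W_dec_a = rng.standard_normal((FACE_DIM, LATENT_DIM)) * 0.01  # decoder for person A
W_dec_b = rng.standard_normal((FACE_DIM, LATENT_DIM)) * 0.01  # decoder for person B

def encode(face):
    """Shared encoder: compress any face into a latent code."""
    return np.tanh(W_enc @ face)

def decode(latent, W_dec):
    """Person-specific decoder: reconstruct a face from the latent code."""
    return W_dec @ latent

face_a = rng.standard_normal(FACE_DIM)  # stand-in for a photo of person A

# The "swap": encode A's face, then decode it with B's decoder.
swapped = decode(encode(face_a), W_dec_b)
print(swapped.shape)  # (4096,)
```

The point of the design is the shared encoder: because both decoders read the same latent space, expressions and pose from one person transfer onto the other's appearance, which is why little manual editing skill is required.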
Now, the stakes are fairly low for the Hader video.
But we pay attention when Facebook CEO Mark Zuckerberg and Speaker of the House Nancy Pelosi make comments or issue statements. Both Zuckerberg and Pelosi were recently the subjects of deepfake videos. In Pelosi’s, her speech was altered to make her appear intoxicated.
This is the conundrum of deepfaked videos and photos: outside actors modify existing pieces of media to fabricate statements or scenarios that never actually happened.
And they’re good. Deepfakes can be incredibly hard to spot.
Here’s more from The Washington Post about the latest developments on the creation of deepfakes.
Powerful new AI software has effectively democratized the creation of convincing “deepfake” videos, making it easier than ever to fabricate someone appearing to say or do something they didn’t really do, from harmless satires and film tweaks to targeted harassment and deepfake porn.
And researchers fear it’s only a matter of time before the videos are deployed for maximum damage — to sow confusion, fuel doubt or undermine an opponent, potentially on the eve of a White House vote.
“We are outgunned,” said Hany Farid, a computer-science professor and digital-forensics expert at the University of California at Berkeley. “The number of people working on the video-synthesis side, as opposed to the detector side, is 100 to 1.”
What happens when viewers have to judge for themselves whether such statements are real, even when there appears to be video evidence?
- Danielle Citron, Professor of Law, Boston University School of Law; author, "Hate Crimes in Cyberspace"; @daniellecitron
- Jack Clark, Policy Director, OpenAI; helps run the AI Index, an initiative from the Stanford One Hundred Year Study on AI to track and analyze AI progress; @jackclarkSF
- Rachel Thomas, Co-founder, fast.ai; professor, University of San Francisco Data Institute; @math_rachel