Jan. 29, 2020

What Is Deepfake Technology? Here Is How to Spot a Deepfake

False videos created with deep machine learning could disrupt the election process with disinformation.

Using technology to alter an actor’s appearance in a movie is entertaining (and generally done with the actor’s consent), but similar techniques are being used to fabricate versions of real politicians saying and doing things that never happened.

There’s serious concern that false or misleading information based on such video deepfakes will influence the 2020 elections, and experts in government and academia are working to find ways to detect them.

Crudely manipulated videos and photos have already appeared in the 2020 campaign. Selective editing with basic software can readily alter or obscure a politician’s meaning, and changing or adding something to a still image is as easy as firing up a photo editing program.

Deepfakes go a step further, using deep machine learning to make the fabricated video far more convincing.

How Does a Deepfake Video Work?

“Deepfakes employ two separate sets of algorithms acting in conjunction: the first algorithm creates a video, and the second one tries to determine if the video is real or not,” according to Merriam-Webster’s Words We’re Watching blog.

“If the second algorithm can tell that the video is fake, the first algorithm tries again, having learned from the second algorithm what not to do. And the pair of algorithms go around and around until they spit out a result that the programmers feel is sufficiently real-looking.”
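
That back-and-forth between the two dueling algorithms can be shown in miniature. Below is a rough sketch of such a training loop in PyTorch, using toy two-dimensional points instead of video frames; the network sizes, learning rates and synthetic data are illustrative assumptions, not drawn from any actual deepfake tool.

import torch
import torch.nn as nn

latent_dim = 8

# "First algorithm": the generator, which turns random noise into samples.
generator = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, 2))
# "Second algorithm": the discriminator, which scores how real a sample looks.
discriminator = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = torch.randn(64, 2) + torch.tensor([4.0, 4.0])  # stand-in "real" data
    fake = generator(torch.randn(64, latent_dim))

    # The discriminator learns to tell real samples from fakes ...
    d_loss = (loss_fn(discriminator(real), torch.ones(64, 1))
              + loss_fn(discriminator(fake.detach()), torch.zeros(64, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # ... and the generator tries again, "having learned from the second
    # algorithm what not to do," until its output looks sufficiently real.
    g_loss = loss_fn(discriminator(generator(torch.randn(64, latent_dim))),
                     torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()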

The technique first drew attention in 2014 with the publication of a scientific paper describing that process in detail and naming it a “generative adversarial network.” The term “deepfake” originated in 2017 on Reddit, where users were grafting female celebrities’ faces onto existing pornographic videos.

Deepfake Examples at the Local Government Level

Few deepfakes have been deployed in the political realm so far. Carefully edited videos, such as one with altered audio that made House Speaker Nancy Pelosi sound as if she were slurring her words, remain the norm, as do altered still images.

Last fall, a Harvard University senior said he had created a “deepfake bot” that posted comments to a federal website collecting public input on a proposed waiver for Idaho’s Medicaid program; the bot was responsible for more than half the comments on the site.

While Medicaid.gov officials began to block the bot’s submissions and the student withdrew the remaining comments, “the comments generated and submitted by the bot were virtually indistinguishable from others written during the public comment period,” he wrote. “Human moderators would have no consistent way of correctly identifying bot comments by hand.”

“Things have changed,” David Doermann, director of the University at Buffalo Artificial Intelligence Institute and a former computer vision program manager at the Defense Advanced Research Projects Agency (DARPA), told a House of Representatives committee last summer. “The process of content creation and media manipulation can be automated. Software can be downloaded for free from online repositories, it can be run on your average desktop computer with a GPU card by a high school student and it can produce personalized, high-quality video and audio overnight.”


How State Governments Can Spot a Deepfake Video

How do you know if you’re watching a deepfake? There are tells — “the shape of light and shadows, the angles and blurring of facial features or the softness and weight of clothing and hair,” reports The Washington Post. “But in some cases, a trained video editor can go through the fake to smooth out possible errors, making it that much harder to assess.”

Researchers at the University of California, Berkeley and the University of Southern California developed a method to detect deepfakes, published last year in a paper called “Protecting World Leaders Against Deep Fakes.” They used video of politicians, as well as of their Saturday Night Live impersonators, to create a behavioral baseline; minor facial movements, such as nose wrinkling or lip tightening, provide a key to whether a video is real.

The method is not foolproof, the researchers write in their paper: it becomes less accurate as more facial features are included in the baseline, and it is effective only in certain contexts, such as when the subject is looking toward the camera.
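
Conceptually, and only as a rough sketch rather than the authors’ actual code, the approach amounts to learning a per-person signature from authentic footage and flagging clips that deviate from it. The Python sketch below mocks that idea with a one-class model over correlations of facial-movement signals; the feature extraction step, which the paper bases on facial action units, is assumed here and replaced with random placeholder data.

import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)

def clip_signature(frames):
    """Correlations between each pair of facial-movement signals in a clip.

    frames: array of shape (n_frames, n_features), e.g. per-frame action
    unit intensities from a face tracker (assumed, not implemented here).
    """
    corr = np.corrcoef(frames.T)
    return corr[np.triu_indices_from(corr, k=1)]

# Hypothetical per-frame features for 200 authentic clips of one speaker.
real_clips = [rng.normal(size=(90, 16)) for _ in range(200)]
model = OneClassSVM(nu=0.05).fit([clip_signature(c) for c in real_clips])

suspect = rng.normal(size=(90, 16))  # features from the video under test
verdict = model.predict([clip_signature(suspect)])[0]
print("consistent with authentic footage" if verdict == 1 else "possible fake")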

Social media and tech companies are working with universities to improve detection methods. Facebook, Microsoft and several leading universities have begun a Deepfake Detection Challenge, which has called for development of technologies to detect deepfakes; the challenge ends on March 31. In addition, Google has created a database of faked faces that can support deepfake detection efforts.

Other researchers are developing forensic techniques to spot these videos, using a particular form of machine learning to detect pixel artifacts left over after the alterations and to compare suspected fakes with real videos.
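
As a generic illustration of that kind of detector, under assumed data rather than any specific published system, a small convolutional network can be trained to classify image patches as clean or manipulated. In the sketch below, random tensors stand in for the labeled patches a real forensic pipeline would supply.

import torch
import torch.nn as nn

# A small convolutional classifier over image patches; the single output
# logit indicates whether tampering artifacts appear to be present.
detector = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 1),
)

opt = torch.optim.Adam(detector.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(100):
    # Placeholder batch: in practice these would be matched authentic and
    # manipulated 64x64 patches cut from real and altered videos.
    patches = torch.randn(32, 3, 64, 64)
    labels = torch.randint(0, 2, (32, 1)).float()  # 1 = manipulated, 0 = real
    loss = loss_fn(detector(patches), labels)
    opt.zero_grad()
    loss.backward()
    opt.step()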

Much of the support for such research is coming from DARPA’s Media Forensics program, an “attempt to level the digital imagery playing field, which currently favors the manipulator,” writes Matt Turek, the program manager for DARPA’s Information Innovation Office.

“The forensic tools used today lack robustness and scalability, and address only some aspects of media authentication; an end-to-end platform to perform a complete and automated forensic analysis does not exist,” he adds.


How States Can Prevent Deepfake Videos from Spreading

Among social media companies, Facebook has announced it will remove deepfakes unless they are clearly satire; Twitter has banned them; and Reddit has banned and removed the r/deepfakes subreddit.

In the meantime, some state governments are turning to legislation to keep deepfakes from making inroads online. Deepfake creators in Texas could face misdemeanor charges, a year in jail and a $4,000 fine if they post deepfakes within 30 days of an election.

And last fall, California’s governor approved a new law that forbids “maliciously distributing or creating ‘materially deceptive’ media about any candidate within 60 days of an election.” This law sunsets on Jan. 1, 2023, unless it is updated to remain on the books longer.

Best practices for detection are still in their early stages: “There’s no money to be made out of detecting these things,” Nasir Memon, a professor of computer science and engineering at New York University, told The Washington Post.

John Villasenor, a professor of electrical engineering, public affairs, law and management at UCLA, writes on the Brookings Institution’s TechTank blog that legal measures, including copyright infringement or defamation claims, can be taken against deepfakes, but such remedies come after the fact and do not prevent the videos from spreading in the first place.

“As the new Texas measure demonstrates, growing public alarm over the negative impacts of AI-manipulated media is resulting in increasingly aggressive legislative action in this field,” Matthew Ferraro, a visiting fellow at the National Security Institute at George Mason Law School, writes at Law360.
