In-Depth: Sen. Rob Portman (R-OH) introduced this bill to direct the Dept. of Homeland Security (DHS) to conduct an annual study of deepfakes and other similar content:
“As AI rapidly becomes an intrinsic part of our economy and society, AI-based threats, such as deepfakes, have become an increasing threat to our democracy. Addressing the challenges posed by deepfakes will require policymakers to grapple with important questions related to civil liberties and privacy. This bill prepares our country to answer those questions and address concerns by ensuring we have a sound understanding of this issue.”
Original cosponsor Sen. Cory Gardner (R-CO) adds:
“Artificial intelligence presents enormous opportunities for improving the world around us but also poses serious challenges. Deepfakes can be used to manipulate reality and spread misinformation quickly. In an era where we have more information available at our fingertips than ever, we have to be vigilant about making sure that information is reliable and true in whichever form it takes. The United States needs to have a better understanding of how to approach the issues with technologies like deepfakes and this bipartisan legislation is a crucial step in that direction.”
Original cosponsor Sen. Martin Heinrich (D-NM) adds:
“Deepfake technology is an example of how AI can be used in ways that can be damaging to our society and our democracy. Any policy response needs to distinguish carefully between legitimate, protected speech and content that is intended to spread disinformation. This legislation will help increase awareness of deepfake technology and is a necessary first step toward determining how to address this growing threat."
House sponsor Rep. Derek Kilmer (D-WA) says:
“Deepfakes pose a serious threat to our national security, homeland security, and the integrity of our elections. While there is effort underway to counter these videos on social networks and video sites, it is currently being done through a patchwork of policies. Congress should act to ensure that the federal government truly understands the scope of this technology as it takes steps to protect against misinformation.”
The Internet Association supports this bill. Its Vice President and Associate General Counsel, Elizabeth Banker, said in a press release:
“The Deepfakes Report Act of 2019 is an important proactive step. It calls on several agencies with relevant expertise to examine the state of technology, including both the benefits and potential threats presented by deepfakes, and assess countermeasures that can be deployed. Although we have concerns about the broad definition of ‘deepfakes’ outlined in the bill, we hope examination of this technology will help policymakers take an evidence-based, measured approach.”
Daniel Castro, a top official at the Information Technology and Innovation Foundation (ITIF) think tank, expressed support for this bill in a press release. He added that “deep-fakes present significant new challenges to media consumption by disrupting the traditional notion that ‘seeing is believing.’”
This bill passed the Senate Homeland Security and Governmental Affairs Committee by voice vote with the support of seven bipartisan Senate cosponsors (four Democrats and three Republicans). Its House companion, sponsored by Rep. Derek Kilmer (D-WA), has three bipartisan House cosponsors (two Republicans and one Democrat). It’s also endorsed by the Internet Association, the Information Technology and Innovation Foundation, and CompTIA.
Sen. Ben Sasse (R-NE) has introduced another bill, the Malicious Deep Fake Prohibition Act of 2018, to address the threat posed by deepfakes. His bill would criminalize the malicious creation and distribution of deepfakes, impose penalties on creators who use others’ likenesses without their consent, and require anyone creating synthetic media imitating a person to disclose that the video was altered or generated.
Of Note: The controversy over deepfakes peaked after a video appeared to show House Speaker Rep. Nancy Pelosi (D-CA) looking drunk. Technically, the video didn’t qualify as a deepfake, since it was slowed down and its audio altered rather than manipulated using AI, but it illustrated the danger manipulated videos pose to politicians’ images. While Facebook flagged the video as fake, it didn’t take it down; the video went viral, and many people initially believed it was genuine (YouTube pulled the video after it came to the company’s attention).
Attorneys at WilmerHale have written about how disinformation campaigns aided by deepfakes have also harmed businesses by spreading lies about their brands, manipulating stock prices and undermining confidence in emerging technologies through bogus or grossly misleading fake news. They conclude that disinformation’s threat to the private sector “will grow as deepfake-technology increases in sophistication.”
Deepfakes and manipulated videos in general are also a challenge to news media organizations seeking to verify the veracity of video given to them by sources. Politico reports that both Reuters and the Wall Street Journal have embarked on their own efforts to train their journalists to identify doctored videos. The Washington Post has also launched a public-facing “Fact Checker’s Guide to Manipulated Video” in an effort to help voters spot misleading material by classifying videos into three categories: “Missing Context,” “Deceptive Editing,” and “Malicious Transformation,” which includes deepfakes.
Ordinary citizens are also threatened by deepfakes. In late June, the app DeepNude shut down amid controversy after journalists revealed that users could feed it photos of ordinary women and have it produce fake nude images of them.
There are ways to spot deepfakes, if one is paying attention. First, check whether there’s blurring around the face but nowhere else in the frame; because current AI still struggles to render faces seamlessly, that blurring is a telltale sign that a video has been manipulated. A change in skin tone around the edge of the face is another giveaway. Second, current deepfake technologies have trouble realistically depicting blinking, so unnatural or inconsistent blinking is another sign of a deepfake.
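For readers curious what checking one of these tells might look like in practice, here is a minimal, illustrative Python sketch that counts blinks in a clip using the common eye-aspect-ratio heuristic. It is not a production deepfake detector; the landmark model file, the video filename, the threshold, and the assumed frame rate are all placeholders for illustration.

```python
# Illustrative sketch of the "blinking" tell described above, not a real detector.
# Assumes dlib's 68-point landmark model file is available locally; the threshold
# and frame rate below are rough, hypothetical values.
import cv2
import dlib
from scipy.spatial import distance

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def eye_aspect_ratio(pts):
    # Ratio of eye height to width; drops sharply when the eye closes.
    a = distance.euclidean(pts[1], pts[5])
    b = distance.euclidean(pts[2], pts[4])
    c = distance.euclidean(pts[0], pts[3])
    return (a + b) / (2.0 * c)

def count_blinks(video_path, ear_threshold=0.21):
    cap = cv2.VideoCapture(video_path)
    blinks, eye_closed, frames = 0, False, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frames += 1
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for face in detector(gray):
            shape = predictor(gray, face)
            pts = [(shape.part(i).x, shape.part(i).y) for i in range(36, 48)]
            ear = (eye_aspect_ratio(pts[:6]) + eye_aspect_ratio(pts[6:])) / 2.0
            if ear < ear_threshold:
                eye_closed = True
            elif eye_closed:
                blinks += 1  # eye reopened: count one blink
                eye_closed = False
    cap.release()
    return blinks, frames

blinks, frames = count_blinks("suspect_clip.mp4")  # hypothetical file
fps = 30  # assumed frame rate
print(f"{blinks} blinks over {frames / fps:.1f} seconds")
# People typically blink roughly 15-20 times per minute; a long clip with
# almost no blinks is worth a closer look.
```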
The Defense Advanced Research Projects Agency (DARPA) has been working on its “Media Forensics” (MediFor) program for over a year. This program “brings together world-class researchers to attempt to level the digital imagery playing field, which currently favors the manipulator, by developing technologies for the automated assessment of the integrity of an image or video and integrating these in an end-to-end media forensics platform.” According to DARPA, should this program succeed, it’ll “automatically detect manipulations, provide detailed information about how these manipulations were performed, and reason about the overall integrity of visual media to facilitate decisions regarding the use of any questionable image or video.”
DARPA funding has helped a team of researchers at the University of California, Riverside create a deep neural network that can identify manipulated images with a high degree of accuracy. One of the researchers, Brian Hosler, explained the technology to Digital Trends:
“Many [previous] deepfake detectors rely on visual quirks in the faked video, like inconsistent lip movement or weird head pose. However, researchers are getting better and better at ironing out these visual cues when creating deepfakes. Our system uses statistical correlations in the pixels of a video to identify the camera that captured it. A deepfake video is unlikely to have the same statistical correlations in the fake part of the video as in the real part, and this inconsistency could be used to detect fake content.”
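The UC Riverside system itself isn’t described in detail here, but the general idea in Hosler’s quote, that a spliced-in face carries different low-level pixel statistics than the rest of the frame, can be illustrated with a toy sketch. Everything in the snippet below (the high-pass noise residual, the patch coordinates, the threshold) is an assumption for illustration, not the researchers’ actual method.

```python
# Toy illustration of the idea in the quote above: compare low-level pixel statistics
# inside a suspect region (e.g., the face) with a reference region elsewhere in the
# frame. This is NOT the UC Riverside system; all values here are illustrative.
import cv2
import numpy as np

def noise_residual(gray):
    # High-pass residual: the image minus a blurred copy, keeping the fine noise
    # where camera processing and compression leave their statistical traces.
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    return gray.astype(np.float32) - blurred.astype(np.float32)

def lag1_correlation(patch):
    # Correlation between horizontally neighboring residual pixels; different capture
    # and generation pipelines tend to leave different inter-pixel correlations.
    a = patch[:, :-1].ravel()
    b = patch[:, 1:].ravel()
    return float(np.corrcoef(a, b)[0, 1])

def region_inconsistency(frame_bgr, face_box, ref_box):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    res = noise_residual(gray)
    fx, fy, fw, fh = face_box
    rx, ry, rw, rh = ref_box
    face_stat = lag1_correlation(res[fy:fy + fh, fx:fx + fw])
    ref_stat = lag1_correlation(res[ry:ry + rh, rx:rx + rw])
    # A large gap suggests the two regions were processed differently.
    return abs(face_stat - ref_stat)

frame = cv2.imread("frame_0001.png")  # hypothetical extracted video frame
score = region_inconsistency(frame, (300, 120, 160, 160), (40, 40, 160, 160))
print("inconsistent" if score > 0.15 else "consistent")  # threshold is illustrative only
```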
Sens. Gardner, Portman, and Heinrich are co-founders of the Senate Artificial Intelligence Caucus.
Summary by Lorelei Yang (Photo Credit: iStockphoto.com / ANNECORDON)