The new DEEPFAKES Accountability Act in the House — and yes, that’s an acronym — would take steps to criminalize the synthetic media referred to in its name, but its provisions seem too optimistic given the reality of the threat. On the other hand, it also proposes some changes that would help bring the law up to date with the technology.
The bill, proposed by Representative Yvette Clarke (D-NY), has, it must be said, the most ridiculous name I’ve encountered: the Defending Each and Every Person from False Appearances by Keeping Exploitation Subject to Accountability Act. Amazingly, that acronym (backronym, really) actually makes sense.
It’s intended to stem the potential damage of synthetic media purporting to be authentic, which is rare enough now but may soon be commonplace. With just a few minutes of video and voice (or, on the visual side, even a single frame), a fake version of a person, perhaps a public figure or celebrity, can be created that is convincing enough to fool anyone not looking too closely. And the quality is only getting better.
DEEPFAKES would require anyone creating a piece of synthetic media imitating a person to disclose that it is altered or generated, using “irremovable digital watermarks, as well as textual descriptions.” Failing to do so would be a crime.
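The bill doesn’t prescribe a technical format for these disclosures. Purely as an illustration, here is a minimal sketch of what a file-level “textual description” might look like, written into an image’s EXIF metadata using Python’s Pillow library; the file names and the disclosure string are my own invention, not anything drawn from the bill.

```python
# Illustrative only: one way a textual disclosure could be embedded in an
# image file's metadata. Assumes Pillow (pip install Pillow); file names
# are hypothetical, and the bill does not prescribe this (or any)
# particular mechanism.
from PIL import Image

EXIF_IMAGE_DESCRIPTION = 0x010E  # the standard EXIF "ImageDescription" tag

img = Image.open("generated_face.png").convert("RGB")
exif = img.getexif()
exif[EXIF_IMAGE_DESCRIPTION] = "SYNTHETIC MEDIA: this image was digitally generated."
img.save("generated_face_disclosed.jpg", exif=exif)
```

Note that this is metadata riding alongside the pixels, not a watermark baked into them, which is exactly the distinction the next objections turn on.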
The act would also establish a right for victims of synthetic media to sue the creators and/or otherwise “vindicate their reputations” in court.
Many of our readers will have already spotted the enormous loopholes gaping in this proposed legislation.
First, creators willing to put their names to a piece of media and document that it is fake are almost certainly not the creators, or the media, we need to worry about. Jordan Peele is the least of our worries (and in fact the subject of many of our hopes). Requiring satirists and YouTubers to document their modified or generated media seems only to assign paperwork to people already acting legally and with no harmful intentions.
Second, watermark and metadata-based markers are usually trivial to remove. Text can be cropped out, logos removed (via yet more smart algorithms), and even a sophisticated whole-frame watermark might be eliminated simply by being re-encoded for distribution on Instagram or YouTube. Metadata and documentation are often stripped or otherwise made inaccessible. And the inevitable reposters have no apparent responsibility to keep that data intact, either — so as soon as a piece of media leaves its creator’s hands, it is out of their control and very soon will no longer be in compliance with the law.
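To make that concrete, here is a continuation of the earlier sketch, again assuming Pillow and the hypothetical disclosed file from above. An ordinary re-save, the kind of re-encoding every major platform performs on upload, silently discards the EXIF block and the disclosure with it; no deliberate tampering is needed.

```python
# Illustrative only, continuing the hypothetical file from the earlier
# sketch. A plain re-save, with no attempt at tampering, produces a file
# with no EXIF block at all, because Pillow (like most re-encoders) only
# preserves EXIF when explicitly told to.
from PIL import Image

EXIF_IMAGE_DESCRIPTION = 0x010E

src = Image.open("generated_face_disclosed.jpg")
print(src.getexif().get(EXIF_IMAGE_DESCRIPTION))  # -> the disclosure string

src.save("reposted.jpg", quality=85)  # exif= not passed, so none is written

print(Image.open("reposted.jpg").getexif().get(EXIF_IMAGE_DESCRIPTION))  # -> None
```

Pixel-domain watermarks are more robust than metadata, but as noted above, even those tend to degrade under cropping and aggressive re-compression.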
Third, it’s far more likely that truly damaging synthetic media will be created with an eye to anonymity and distributed by secondary methods. The law here is akin to asking bootleggers to mark their barrels with their contact information. No malicious actor will even attempt to mark their work as an “official” fake.
That said, just because these rules are unlikely to prevent people from creating and distributing damaging synthetic media — what the bill calls “advanced technological false personation records” — that doesn’t mean the law serves no purpose here.
One of the problems with the pace of technology is that it frequently runs some distance ahead of the law, not just in spirit but in letter. With something like revenge porn or cyberbullying, there’s often literally no legal recourse, because these are unprecedented behaviors that may not fit neatly under any specific criminal code. A law like this, flawed as it is, defines the criminal behavior and puts it on the books, making clear what is and isn’t against the law. So while someone faking a Senator’s face may not voluntarily identify themselves, if they are identified, they can be charged.
To that end, a later portion of the bill is more relevant and realistic: It seeks to place unauthorized digital recreations of people under the umbrella of unlawful impersonation statutes. Just as it’s variously illegal to pretend you’re someone you’re not, to steal someone’s ID, to pretend you’re a cop, and so on, it would be illegal to nefariously misrepresent someone digitally.
That gives police and the court system a handhold when cases concerning synthetic media begin pouring in. They can say “ah, this falls under statute so-and-so” rather than arguing about jurisdiction or law and wasting everyone’s time — an incredibly common (and costly) occurrence.
The bill also puts someone at the U.S. Attorney’s Office in charge of matters like revenge porn (“false intimate depictions”), to coordinate prosecutions and so on. Again, these issues are so new that it’s often not even clear who you, your lawyer, or your local police are supposed to call.
Lastly, the act would create a task force at the Department of Homeland Security that would form the core of government involvement with the practice of creating deepfakes, and with any countermeasures developed to combat them. The task force would collaborate with private-sector companies working on their own to prevent synthetic media from gumming up their gears (Facebook has just had a taste), and report regularly on the state of things.
It’s a start, anyway — rare it is that the government acknowledges a threat and attempts to mitigate it before it becomes a full-blown problem. Such attempts are usually put down as nanny-state policies, alas, so we wait for a few people to have their lives ruined and then get to work with the benefit of hindsight. So while the DEEPFAKES Accountability Act would not, I feel, create much in the way of accountability for the malicious actors most likely to cause problems, it does begin to set a legal foundation for victims and law enforcement to fight against those actors.
You can track the progress of the bill (H.R. 3230 in the 116th Congress) on Congress.gov, where its full text is also available.
Source: TechCrunch