Images of former President Donald Trump hugging and kissing Dr. Anthony Fauci, his former chief medical adviser. Pornographic depictions of Hollywood actresses and internet influencers. A photo of an explosion at the Pentagon.
All were found to be “deepfakes,” highly realistic audio and visual content created with rapidly advancing artificial intelligence technology.
Those harmed by the digital forgeries—especially women featured in sexually explicit deepfakes without consent—have few options for legal recourse, and lawmakers across the country are now scrambling to fill that gap.
“An honestly presented pornographic deepfake was not necessarily a violation of any existing law,” said Matthew Kugler, a law professor at Northwestern University who supported an anti-deepfake bill in Illinois that’s currently pending before the governor.
“You are taking something that is public, your face, and something that is from another person entirely, so under many current statutes and torts, there wasn’t an obvious way to sue people for that,” he said.
The recent interest in the powers of generative AI has already spurred multiple congressional hearings and proposals this year to regulate the burgeoning technology. But with the federal government deadlocked, state legislatures have been quicker to advance laws that aim to tackle the immediate harms of AI.
Nine states have enacted laws regulating deepfakes, mostly in the contexts of pornography and election influence, and at least four other states have bills at various stages of the legislative process.
California, Texas, and Virginia were the first states to enact deepfake legislation back in 2019, before the current frenzy over AI. Minnesota most recently enacted a deepfake law in May, and a similar bill in Illinois awaits the governor’s signature.
“People often talk about the slow, glacial pace of lawmaking, and this is an area where that really isn’t the case,” said Matthew Ferraro, an attorney at WilmerHale LLP who has been tracking deepfake laws.
Tech Driving the Law
The term “deepfakes” first appeared on the internet in 2017, when a Reddit user by that name began posting fake porn that used AI algorithms to superimpose celebrities’ faces onto real adult videos without consent.
Earlier this year, the spread of nonconsensual pornographic deepfakes sparked controversy in the video game streaming community, highlighting some of the immense harms of unfettered deepfakes and the lack of legal remedies. The popular streamer QTCinderella, who said she was harassed by internet users sending her the images, had threatened to sue the people behind the deepfakes but was later told by attorneys that she didn’t have a case.
The number of deepfakes circulating on the internet has exploded since the term was coined. Deeptrace Labs, a service that identifies deepfakes, released a widely read report in 2019 that identified close to 15,000 deepfake videos online, 96% of which were pornographic content featuring women. Sensity AI, which also detects deepfakes, said deepfake videos have grown exponentially since 2018.
“The technology continues to get better so that it’s very difficult, unless you’re a digital forensic expert, to tell whether something is fake or not,” said Rebecca Delfino, a law professor at Loyola Marymount University who researches deepfakes.
That’s only added to the spread of misinformation online and in political campaigns. An attack ad from GOP presidential candidate Ron DeSantis appeared to show Trump embracing Fauci in an array of photos, but some of the images had been generated by AI.
A fake but realistic photo that began circulating on Twitter in May showed an explosion at the Pentagon, resulting in a temporary drop in the stock market.
In some sense, synthetic media has been around for decades, from basic photo manipulation techniques to more recent programs like Photoshop. But the ease with which non-technical internet users can now create highly realistic digital forgeries has driven the push for new laws.
“It’s this speed, scale, believability, access of this technology that has all sort of combined to create this witch’s brew,” Ferraro said.
Finding Remedies
Without a specific law addressing pornographic deepfakes, victims have limited legal options. A hodgepodge of intellectual property, privacy, and defamation laws could theoretically allow a victim to sue or otherwise seek redress.
A Los Angeles federal court is currently hearing a right-of-publicity lawsuit from a reality TV celebrity who said he never gave permission to an AI app that allows users to digitally paste their face over his. But right-of-publicity laws, which vary state by state, protect one’s image only when it’s being used for a commercial purpose.
Forty-eight states have criminal bans on revenge porn, and some have laws against “upskirting,” which involves taking photos of another person’s private parts without consent. A victim could also sue for defamation, but those laws wouldn’t necessarily apply if the deepfake included a disclaimer that it is fake, said Kugler, the Northwestern law professor.
Caroline Ford, an attorney at Minc Law who specializes in helping victims of revenge porn, said although many victims could get relief under these laws, the statutes weren’t written with deepfakes in mind.
“Having a statute that very clearly shows courts that the legislature is trying to see the great harm here and is trying to remedy that harm is always preferable in these situations,” she said.
State Patchwork
The laws enacted in the states so far have varied in scope.
In Hawaii, Texas, Virginia, and Wyoming, nonconsensual pornographic deepfakes are solely a criminal offense, whereas the laws in New York and California create only a private right of action that allows victims to bring civil suits. The recent Minnesota law outlines both criminal and civil penalties.
Finding the right party to sue can be difficult, and local law enforcement isn’t always cooperative, Ford said of the revenge porn cases she’s handled. Many of her clients only want the images or videos taken down and don’t have the resources to sue.
The definition of a deepfake also varies among the states. Some, like Texas, directly reference artificial intelligence, while others only include language like “computer generated image” or “digitization.”
Many of those states have simultaneously amended their election codes to prohibit deepfakes in campaign ads within a particular time frame before an election.
Free Speech Concerns
Like most new technologies, deepfakes can be used for harmless reasons: making parodies, reanimating historical figures, or dubbing films, all of which are activities protected by the First Amendment.
Striking a balance that outlaws harmful deepfakes while protecting the legitimate ones isn’t easy. “You’ll see that policymakers are really struggling,” said Delfino, the Loyola law professor.
The ACLU of Illinois initially opposed the state’s pornographic deepfake bill, arguing that although deepfakes can cause real harm, the bill’s sweeping provisions and its immediate takedown clause could “chill or silence vast amounts of protected speech.”
Recent amendments changed the bill to fold deepfakes into Illinois’ existing revenge porn statute, which is a “significant improvement,” the organization’s director of communications, Ed Yohnka, said in an email. “We do continue to have concerns that the language lowers existing legal thresholds,” he said.
Delfino said a deepfake bill introduced in Congress last month may provoke similar worries because its exceptions are limited to matters of “legitimate public concern.”
California’s statute, she noted, contains explicit references to First Amendment protections. If Congress wants to “really take this up with seriousness, they need to do a little more work on that proposal,” she said.
Kugler said the first deepfake laws have mostly targeted nonconsensual pornography because those cases are “low-hanging fruit” when it comes to free speech issues. The emotional distress and harms to dignity and reputation are clear, while the free speech benefits are minimal, he said.
Delfino has long advocated for stronger revenge porn laws and has followed the rise of deepfake pornography since it first gained attention. She said she is glad that renewed interest in AI is driving the push for stronger laws.
“Like many things that involve crimes against women and objectification of women and minorities, there is attention brought on them every so often, and then the public sort of moves on,” she said. “But now, people are going back and being re-concerned about deepfake technologies.”