Scammers tricked a multinational firm out of some $26 million by impersonating senior executives using deepfake technology, Hong Kong police said Sunday, in one of the first cases of its kind in the city.
Law enforcement agencies are scrambling to keep up with generative artificial intelligence, which experts say holds potential for disinformation and misuse — such as deepfake images showing people mouthing things they never said.
A company employee in the Chinese finance hub received “video conference calls from someone posing as senior officers of the company requesting to transfer money to designated bank accounts”, police told AFP.
Police received a report of the incident on January 29, at which point some HK$200 million ($26 million) had already been lost via 15 transfers.
“Investigations are still ongoing and no arrest has been made so far,” police said, without disclosing the company’s name.
The victim was working in the finance department, and the scammers pretended to be the firm’s UK-based chief financial officer, according to Hong Kong media reports.
Acting Senior Superintendent Baron Chan said the video conference call involved multiple participants, but all except the victim were impersonated.
“Scammers found publicly available video and audio of the impersonation targets via YouTube, then used deepfake technology to emulate their voices… to lure the victim to follow their instructions,” Chan told reporters.
The deepfake videos were pre-recorded and did not involve dialogue or interaction with the victim, he added.
What to know about how lawmakers are addressing deepfakes like the ones that victimized Taylor Swift
Even before pornographic and violent deepfake images of Taylor Swift began widely circulating in the past few days, state lawmakers across the U.S. had been searching for ways to quash such nonconsensual images of both adults and children.
But in this Taylor-centric era, the problem has been getting a lot more attention since she was targeted through deepfakes, computer-generated images that use artificial intelligence to appear real.
Here are things to know about what states have done and what they are considering.
WHERE DEEPFAKES SHOW UP
Artificial intelligence hit the mainstream last year like never before, enabling people to create ever-more realistic deepfakes. Now they’re appearing online more often, in several forms.
There’s pornography — taking advantage of celebrities like Swift to create fake compromising images.
There’s music — a song that sounded like Drake and The Weeknd performing together got millions of clicks on streaming services, but it was not those artists. The song was removed from platforms.
And there are political dirty tricks in this election year — just before January’s presidential primary, some New Hampshire voters reported receiving robocalls purporting to be from President Joe Biden telling them not to bother casting ballots. The state attorney general’s office is investigating.
But a more common circumstance is porn using the likenesses of non-famous people, including minors.
WHAT STATES HAVE DONE SO FAR
Deepfakes are just one area in the complicated realm of AI that lawmakers are trying to figure out whether and how to handle.
At least 10 states have enacted deepfake-related laws already. Scores more measures are under consideration this year in legislatures across the country.
Georgia, Hawaii, Texas and Virginia have laws on the books that criminalize nonconsensual deepfake porn.
California and Illinois have given victims the right to sue those who create images using their likenesses.
Minnesota and New York do both. Minnesota’s law also targets using deepfakes in politics.
ARE THERE TECH SOLUTIONS?
University at Buffalo computer science professor Siwei Lyu said work is being done on several approaches, none of them perfect.
One is deepfake detection algorithms, which can be used to flag deepfakes on social media platforms, for example.
Another — which Lyu said is in development but not yet widely used — is to embed codes in content people upload that would signal if it is reused in AI creation.
And a third mechanism would be to require companies offering AI tools to include digital watermarks to identify content generated with their applications.
He said it makes sense to hold those companies accountable for how people use their tools, and companies in turn can enforce user agreements against creating problematic deepfakes.
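To make the watermarking idea above concrete, here is a minimal, illustrative sketch of how a generator might embed an identifying tag in the least significant bits of its output and how a platform could read it back. The function names, the `GENAI` tag and the LSB scheme are assumptions chosen for illustration; they do not describe any particular company’s actual watermarking method, which in practice would need to survive compression and editing.

```python
# Illustrative-only sketch: hide a short tag in the lowest bit of each byte
# of raw pixel data, then recover it. Real provenance/watermark schemes are
# far more robust; this only shows the basic embed/extract idea.

WATERMARK = b"GENAI"  # hypothetical tag identifying the generating application


def embed_watermark(pixels: bytearray, tag: bytes = WATERMARK) -> bytearray:
    """Hide `tag` in the least significant bit of each byte of pixel data."""
    bits = [(byte >> i) & 1 for byte in tag for i in range(8)]
    if len(bits) > len(pixels):
        raise ValueError("image too small to hold the watermark")
    out = bytearray(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # overwrite only the lowest bit
    return out


def extract_watermark(pixels: bytes, length: int = len(WATERMARK)) -> bytes:
    """Read `length` bytes back out of the least significant bits."""
    tag = bytearray()
    for b in range(length):
        byte = 0
        for i in range(8):
            byte |= (pixels[b * 8 + i] & 1) << i
        tag.append(byte)
    return bytes(tag)


if __name__ == "__main__":
    fake_image = bytearray(range(256)) * 4   # stand-in for raw pixel bytes
    tagged = embed_watermark(fake_image)
    print(extract_watermark(tagged))         # b'GENAI'
```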
WHAT SHOULD BE IN A LAW?
Model legislation proposed by the American Legislative Exchange Council addresses porn, not politics. The conservative and pro-business policy group is encouraging states to do two things: Criminalize possession and distribution of deepfakes portraying minors in sex acts, and allow victims to sue people who distribute nonconsensual deepfakes showing sexual conduct.
“I would recommend to lawmakers to start with a small, prescriptive fix that can solve a tangible problem,” said Jake Morabito, who directs the communications and technology task force for ALEC. He warns that lawmakers should not target the technology that can be used to create deepfakes, as that could shut down innovation with other important uses.
Todd Helmus, a behavioral scientist at RAND, a nonpartisan think tank, points out that leaving enforcement up to individuals filing lawsuits is insufficient. It takes resources to sue, he said. And the result might not be worth it. “It’s not worth suing somebody that doesn’t have any money to give you,” he said.
Helmus calls for guardrails throughout the system and says making them work probably requires government involvement.
He said OpenAI and other companies whose platforms can be used to generate seemingly realistic content should make efforts to prevent deepfakes from being created; social media companies should implement better systems to keep them from proliferating; and there should be legal consequences for those who do it anyway.
Jenna Leventoff, a First Amendment lawyer at the ACLU, said that while deepfakes can cause harm, free speech protections also apply to them, and lawmakers should make sure they don’t go beyond existing exceptions to free speech, such as defamation, fraud and obscenity, when they try to regulate the emerging technology.
Last week, White House press secretary Karine Jean-Pierre addressed the issue, saying social media companies should create and enforce their own rules to prevent the spread of misinformation and images like the ones of Swift.
WHAT’S BEING PROPOSED?
A bipartisan group of members of Congress in January introduced federal legislation that would give people a property right to their own likeness and voice — and the ability to sue those who use it in a misleading way through a deepfake for whatever reason.
Most states are considering some kind of deepfake legislation in their sessions this year. They’re being introduced by Democrats, Republicans and bipartisan coalitions of lawmakers.
The bills getting traction include one in GOP-dominated Indiana that would make it a crime to create or distribute sexually explicit depictions of a person without their consent. It passed the House unanimously in January.
A similar measure introduced this week in Missouri is named “The Taylor Swift Act.” And another one cleared the Senate this week in South Dakota, where Attorney General Marty Jackley said some investigations have been handed over to federal officials because the state does not have the AI-related laws needed to file charges.
“When you go into somebody’s Facebook page, you steal their child and you put that into pornography, there’s no First Amendment right to do that,” Jackley said.
WHAT CAN A PERSON DO?
For anyone with an online presence, it can be hard to prevent being a deepfake victim.
But RAND’s Helmus says that people who find they have been targeted can ask a social media platform where images are shared to remove them; inform the police if they’re in a place with a law; tell school or university officials if the alleged perpetrator is a student; and seek mental health help as needed.