Deepfake technology was used to scam a firm in Hong Kong. (Pexels)

SCMP: Global Firm Loses $26 Million in Deepfake Video Call Scam Utilizing YouTube Videos

Hong Kong police said on Sunday that scammers duped a multinational company out of about $26 million by using deepfake technology to impersonate senior executives, in one of the city’s first cases of its kind.

Law enforcement agencies are scrambling to keep up with generative AI, which experts say holds great potential for disinformation and abuse, such as deepfake videos of people appearing to say things they never said.

A company employee in China’s financial center received “video conference calls from someone posing as company executives asking to transfer money to designated bank accounts,” police told AFP.

Police were alerted to the incident on January 29, when about HK$200 million ($26 million) had already been lost through 15 transfers.

“Investigations are still ongoing and no arrests have been made so far,” police said, without disclosing the company’s name.


The victim worked in the finance department, and the scammers pretended to be the company’s UK-based finance director, according to Hong Kong media.

Acting Superintendent Baron Chan said the video conference call involved multiple participants, but everyone on the call other than the victim was an impersonation.

“Fraudsters found publicly available video and audio of the impersonation targets on YouTube and then used deepfake technology to imitate their voices… to trick the victim into following their instructions,” Chan told reporters.

The deepfake videos were pre-recorded and did not include dialogue or interaction with the victim, he added.

What to Know About How Lawmakers Handle Deepfakes Like Taylor Swift’s

(AP Entertainment)

Even before pornographic and violent deepfake images of Taylor Swift began to spread widely in recent days, state lawmakers across the United States had been looking for ways to crack down on such intimate images of adults and children.

But in this Taylor-centric era, the issue has drawn far more attention as she has been targeted with deepfakes, computer-generated images that use artificial intelligence to appear real.

Here’s information on what states have done and what they’re considering.

WHERE DEEPFAKES ARE SHOWING UP

Artificial intelligence became more mainstream last year than ever before, helping people create increasingly realistic deepfakes. They now appear online more often and in a variety of formats.

There’s pornography — using celebrities like Swift to create fake, compromising images.

There’s music — a song that sounded like Drake and The Weeknd performing together got millions of clicks on streaming services — but it wasn’t those artists. The song was removed from platforms.

And there are political dirty tricks this election year: just before January’s presidential primary, some New Hampshire voters reported receiving robocalls in which a voice resembling President Joe Biden’s urged them not to bother voting. The state attorney general’s office is investigating.

But more common is pornography that uses the likenesses of people who are not famous, including minors.

WHAT STATES HAVE DONE SO FAR

Deepfakes are just one area of the complex world of artificial intelligence that lawmakers are trying to figure out whether and how to regulate.

At least 10 states have already enacted deepfake laws, and dozens of new measures are under consideration this year in legislatures across the country.

Georgia, Hawaii, Texas and Virginia have laws on the books criminalizing nonconsensual deepfake porn.

California and Illinois have given victims the right to sue those who create images using their likeness.

Minnesota and New York do both. Minnesota’s law also addresses the use of deepfakes in politics.

ARE THERE TECHNICAL SOLUTIONS?

Siwei Lyu, a professor of computer science at the University at Buffalo, said there are several approaches being worked on, none of which are perfect.

One is deepfake detection algorithms, which can be used to flag deepfakes on, for example, social media platforms.

Another, which Lyu said is in development but not yet widely deployed, is to embed codes in the content people upload that would show whether it has been reused in AI creation.

And a third mechanism would be to require companies that provide AI tools to include digital watermarks to identify content created by their applications.
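To illustrate the watermarking idea in its simplest form, the sketch below hides a short bit pattern in the least significant bits of an image’s pixel values and reads it back later. The function names and the eight-bit “tag” are invented for illustration; the real provenance and watermarking systems researchers describe are far more robust and tamper-resistant than this toy scheme.

```python
# Minimal, illustrative sketch of the watermarking idea: hide a short bit
# pattern in the least significant bits (LSBs) of an image's pixels and read
# it back later. Function names and the example tag are hypothetical; real
# watermarking/provenance schemes are far more sophisticated.
import numpy as np

def embed_watermark(image: np.ndarray, bits: list) -> np.ndarray:
    """Return a copy of `image` with `bits` written into the LSBs of its first pixels."""
    flat = image.flatten()                   # flatten() returns a copy
    for i, b in enumerate(bits):
        flat[i] = (flat[i] & 0xFE) | b       # clear the LSB, then set it to the watermark bit
    return flat.reshape(image.shape)

def extract_watermark(image: np.ndarray, n_bits: int) -> list:
    """Read back the LSBs of the first `n_bits` pixels."""
    return [int(p & 1) for p in image.flatten()[:n_bits]]

# Usage: an 8x8 grayscale "image" and an 8-bit tag standing in for an "AI-generated" marker.
img = np.random.randint(0, 256, size=(8, 8), dtype=np.uint8)
tag = [1, 0, 1, 1, 0, 0, 1, 0]
marked = embed_watermark(img, tag)
assert extract_watermark(marked, len(tag)) == tag
```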

He said it makes sense to hold these companies accountable for how people use their tools, and companies in turn can enforce user agreements against creating problematic deepfakes.

WHAT SHOULD THE LAW BE?

The model legislation proposed by the American Legislative Exchange Council is about porn, not politics. The conservative and pro-business policy group is pushing states to do two things: criminalize the possession and distribution of deepfakes depicting minors in sexual behavior and allow victims to sue people who distribute deepfakes depicting sexual behavior.

“I would recommend that lawmakers start with a small, prescriptive fix that can solve a concrete problem,” said Jake Morabito, who chairs ALEC’s communications and technology task force. He cautions that lawmakers should not target the technology that can be used to create deepfakes, because doing so could stall innovation along with its other, legitimate uses.

Todd Helmus, a behavioral scientist at RAND, a nonpartisan think tank, points out that leaving enforcement to the people who bring the lawsuits isn’t enough. It takes resources to file a lawsuit, he said. And the result may not be worth it. “It’s not worth suing someone who has no money to give you,” he said.

Helmus calls for guardrails across the entire system and says making them work will probably require government involvement.

He said OpenAI and other companies whose platforms can be used to produce realistic-looking content should work to prevent the creation of deepfakes, social media companies should put better systems in place to keep them from proliferating, and there should be legal consequences for those who create them anyway.

Jenna Leventoff, a First Amendment lawyer at the ACLU, said that while deepfakes can cause harm, they are also covered by free speech protections, and lawmakers trying to regulate the emerging technology should make sure they stay within existing exceptions to free speech, such as defamation, fraud and obscenity.

Last week, White House press secretary Karine Jean-Pierre addressed the issue, saying social media companies should create and enforce their own rules to prevent misinformation and images like those of Swift from spreading.

WHAT HAS BEEN PROPOSED?

A bipartisan group of members of Congress introduced federal legislation in January that would give people ownership of their likeness and voice — and the ability to sue those who use it in a misleading way via deepfakes for any reason.

Most states are considering some form of deepfake legislation in their sessions this year. They are being introduced by Democrats, Republicans and bipartisan coalitions of lawmakers.

Bills gaining traction include one in GOP-controlled Indiana that would make it a crime to distribute or create sexually explicit images of a person without their consent; it passed the state House unanimously in January.

A similar measure introduced this week in Missouri is called the “Taylor Swift Act.” And another passed the Senate this week in South Dakota, where Attorney General Marty Jackley said some investigations have been turned over to federal authorities because the state lacks the AI-related laws needed to prosecute.

“When you go to someone’s Facebook page, pull off a photo of their child and put it into pornography, there’s no First Amendment right to do that,” Jackley said.

WHAT CAN A PERSON DO?

It can be difficult for anyone with an online presence to avoid falling victim to a deepfake.

But RAND’s Helmus says people who find they have been targeted can ask the social media platform where the images are shared to remove them; notify police if the images are illegal where they live; tell school or university authorities if the suspected perpetrator is a student; and seek mental health help if needed.
