Fraudulent Deepfake Schemes on the Rise
Criminals can now exploit everyday consumers using advanced techniques that were once considered science fiction. Computer-generated children’s voices, for instance, have become so realistic that they can deceive the children’s own parents. And masks created from social media photos can bypass facial-recognition security systems.
The spread of scam technology has alarmed regulators, police and executives at the highest levels of the financial industry. Artificial intelligence in particular is being used to “turbocharge” fraud, US Federal Trade Commission Chair Lina Khan warned in June, urging law enforcement to step up its vigilance.
Even before AI took off and became available to anyone with an internet connection, the world was struggling to contain an explosion of financial fraud. In the US alone, consumers lost nearly $8.8 billion last year, a 44% increase over 2021, despite record investment in detection and prevention. Financial crime experts at major banks, including Wells Fargo & Co and Deutsche Bank AG, say the coming fraud boom is one of the biggest threats facing their industry. On top of the cost of fighting fraud, the financial industry risks losing the faith of burned customers. “It’s an arms race,” says James Roberts, who heads fraud management at Commonwealth Bank of Australia, the country’s biggest bank. “It would be a stretch to say that we’re winning.”
The history of scams is as old as the history of commerce itself. One of the earliest known cases, more than 2,000 years ago, involved a Greek merchant seafarer who tried to sink his own ship to fraudulently claim compensation on an insurance policy. Look through the archives of any newspaper and you’ll find countless attempts to separate the gullible from their money. But in the dark economy of fraud, just as in the wider economy, there are occasional bursts of destabilizing innovation. New technology lowers the cost of running a scam and lets criminals reach a larger number of unsuspecting marks. Email introduced every computer user in the world to a cast of hapless princes who needed help recovering their lost fortunes. Crypto brought with it a boom in Ponzi schemes that spread virally on social media.
The AI explosion offers scammers not only new tools but also the potential to inflict life-changing financial losses. And the increased sophistication and novelty of the technology mean that everyone, not just the gullible, is a potential victim. The Covid-19 lockdowns accelerated the adoption of online banking around the world, with phones and laptops replacing face-to-face interactions at bank branches. The shift has brought lower costs and greater speed for financial firms and their customers, as well as openings for fraudsters.
Some of the new tools go beyond what current off-the-shelf technology can do, and it’s not always easy to tell whether you’re dealing with a garden-variety scammer or a nation-state actor. “We’re starting to see a lot more sophistication in terms of cybercrime,” says Amy Hogan-Burney, director of cybersecurity policy and protection at Microsoft Corp.
Globally, the cost of cybercrime, including fraud, is expected to hit $8 trillion this year, exceeding the economic output of Japan, the world’s third-largest economy. By 2025 it will reach $10.5 trillion, more than tripling over a decade, researcher Cybersecurity Ventures estimates.
In the Sydney suburb of Redfern, some of Roberts’ team of more than 500 spend their days listening in on scams to hear firsthand how AI is shaping the fight. A fake request for money from a loved one is nothing new. But now parents are getting calls that use artificial intelligence to clone their child’s voice so it sounds indistinguishable from the real thing. These tricks, known as social engineering scams, tend to have the highest hit rates and generate the quickest returns for scammers.
Cloning a person’s voice has become easier than ever. Once a scammer downloads a short voice sample from someone’s social media or voicemail, which can be as little as 30 seconds of audio, they can use AI voice-synthesis tools readily available online to create the content they need.
Public social media accounts make it easy to find out who a person’s relatives and friends are, not to mention where they live and work and other important information. Bank executives stress that fraudsters, who operate like businesses, are willing to be patient, sometimes planning attacks for months.
According to Rob Pope, director of CERT NZ, the New Zealand government’s cybersecurity agency, what fraud teams have seen so far is just a taste of what AI can do. He notes that AI is simultaneously helping criminals increase both the volume and the customization of their attacks. “It’s a fair bet that in the next two to three years we’re going to see more AI-powered criminal attacks,” says Pope, a former deputy commissioner of the New Zealand Police who oversaw some of the country’s most high-profile criminal cases. “Artificial intelligence accelerates the sophistication of these bad actors and their ability to pivot very quickly. AI makes it easier for them.”
To give a sense of the challenge banks face, Roberts says Commonwealth Bank of Australia currently monitors about 85 million transactions a day through a network of surveillance tools. That’s in a country with a population of just 26 million.
The industry hopes to fight back by educating consumers about the risks and stepping up investment in defensive technology. New software lets CBA detect when customers move their computer mouse in an unusual way during a transaction, a red flag for possible fraud. Suspicious details of an order, including its destination and how the purchase was processed, can be flagged to staff within 30 milliseconds, allowing them to block the transaction.
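CBA hasn’t said publicly how its mouse-movement check works. As a rough, hypothetical sketch of the general technique, behavioral features can be summarized per session and fed to an off-the-shelf anomaly detector; everything below, from the feature choices to the synthetic data, is invented for illustration.

```python
# Hypothetical sketch of mouse-dynamics anomaly scoring, loosely inspired by
# the behavioral checks described above. Features, parameters and data are
# illustrative, not CBA's actual system.
import numpy as np
from sklearn.ensemble import IsolationForest

def extract_features(events):
    """Turn raw (t, x, y) mouse events into simple summary features:
    mean speed, speed variance, and mean curvature of the pointer path."""
    pts = np.asarray(events, dtype=float)
    dt = np.diff(pts[:, 0])
    dxy = np.diff(pts[:, 1:], axis=0)
    speed = np.linalg.norm(dxy, axis=1) / np.clip(dt, 1e-6, None)
    angles = np.arctan2(dxy[:, 1], dxy[:, 0])
    curvature = np.abs(np.diff(angles))
    return [speed.mean(), speed.var(), curvature.mean()]

# Train on sessions previously judged legitimate (synthetic stand-ins here).
rng = np.random.default_rng(0)
legit_sessions = [
    [(i * 0.02, 100 + i + rng.normal(0, 2), 200 + i + rng.normal(0, 2))
     for i in range(50)]
    for _ in range(200)
]
X_train = np.array([extract_features(s) for s in legit_sessions])
model = IsolationForest(contamination=0.01, random_state=0).fit(X_train)

# Score a new session: -1 flags it as anomalous, worth a closer look.
new_session = [(i * 0.02, 100.0, 200.0 + 40 * (i % 2)) for i in range(50)]
flag = model.predict([extract_features(new_session)])[0]
print("suspicious" if flag == -1 else "looks normal")
```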
Computer engineers at Deutsche Bank recently rebuilt the company’s suspicious-transaction detection system, called Black Forest, using the latest natural language processing models, according to Thomas Graf, a senior machine learning engineer there. The tool looks at transaction criteria such as amount, currency and destination, and automatically learns from data sets which patterns indicate fraud. It can be used on both retail and corporate transactions and has already uncovered several cases, including one involving organized crime, money laundering and tax evasion.
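Deutsche Bank hasn’t published Black Forest’s internals. A minimal sketch of the broad idea described here, serializing transaction criteria into tokens and letting a model learn fraud patterns from labeled history, might look like the following; the field names, sample data and the simple classifier standing in for the bank’s NLP models are all assumptions.

```python
# Illustrative sketch only: Black Forest's internals aren't public. A basic
# scikit-learn pipeline stands in for the NLP-style models the article
# mentions; field names and sample data are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def serialize(txn):
    """Flatten transaction criteria (amount bucket, currency, destination)
    into a token string so a text model can learn co-occurring patterns."""
    bucket = "high" if txn["amount"] > 10_000 else "low"
    return f"amt_{bucket} cur_{txn['currency']} dest_{txn['destination']}"

# Tiny labeled history: 1 = previously confirmed fraud, 0 = clean.
history = [
    ({"amount": 120, "currency": "EUR", "destination": "DE"}, 0),
    ({"amount": 45_000, "currency": "USD", "destination": "offshore_hub"}, 1),
    ({"amount": 80, "currency": "EUR", "destination": "FR"}, 0),
    ({"amount": 52_000, "currency": "USD", "destination": "offshore_hub"}, 1),
]
texts = [serialize(t) for t, _ in history]
labels = [y for _, y in history]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# Score an incoming transaction; a high probability routes it to an analyst.
incoming = {"amount": 61_000, "currency": "USD", "destination": "offshore_hub"}
print(model.predict_proba([serialize(incoming)])[0][1])
```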
Wells Fargo has overhauled its technical systems to counter the risk of AI-generated video and audio. “We’re training our software and our employees to spot these fakes,” says Chintan Mehta, director of digital technology at Wells Fargo. But the systems must constantly evolve to keep pace with the criminals. Detecting scams, of course, costs money.
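Wells Fargo hasn’t described how its detection software works. One common approach in published research is to summarize a clip’s spectral features and train a classifier on labeled real and synthetic recordings; the sketch below is a toy version of that idea, with generated stand-in waveforms instead of real audio.

```python
# Crude sketch of one published approach to spotting synthetic audio:
# summarize spectral features (MFCCs) and train a classifier on labeled
# real/fake clips. Not Wells Fargo's system; data is synthetic.
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression

SR = 16_000

def clip_features(y):
    """Summarize a clip's MFCCs; synthetic speech often shows subtly
    different spectral statistics than a live recording."""
    mfcc = librosa.feature.mfcc(y=y, sr=SR, n_mfcc=13)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# Stand-in waveforms: in practice these would be labeled real and
# AI-generated voice recordings.
rng = np.random.default_rng(1)
t = np.linspace(0, 1, SR)
real = [np.sin(2 * np.pi * (180 + 10 * i) * t)
        + 0.05 * rng.standard_normal(SR) for i in range(10)]
fake = [np.sign(np.sin(2 * np.pi * (180 + 10 * i) * t)) * 0.8
        for i in range(10)]

X = np.array([clip_features(y) for y in real + fake])
labels = np.array([0] * 10 + [1] * 10)
model = LogisticRegression(max_iter=1000).fit(X, labels)
print(model.predict(X[:1]))  # 0 = judged real, 1 = judged synthetic
```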
One problem for companies: Every time they tighten things up, criminals look for a workaround. For example, some US banks require customers to upload a photo ID when opening an account. Fraudsters have responded by buying stolen personal data on the dark web, finding pictures of their victims on social media and 3D-printing masks to create fake IDs with the stolen information. “And these can look like anything from a Halloween-store mask to a very lifelike, Hollywood-standard silicone one,” says Alain Meier, chief identity officer at Plaid Inc., which helps banks, financial technology firms and other companies fight fraud with its identity-verification software. Plaid analyzes skin texture and translucency to help make sure the person in a photo looks real.
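Plaid hasn’t detailed its method, but one crude, widely used texture cue is the amount of high-frequency detail in an image: printed photos and silicone masks tend to be smoother than live skin. The sketch below illustrates that single cue with synthetic images; the threshold is arbitrary, not Plaid’s.

```python
# Rough illustration of a texture cue like the ones Plaid reportedly checks:
# real skin shows fine high-frequency detail, while a printed or silicone
# mask tends to look unnaturally smooth. Threshold is arbitrary.
import cv2
import numpy as np

def texture_score(bgr_image):
    """Variance of the Laplacian over the image: a crude measure of
    high-frequency detail. Low values suggest a smooth, mask-like surface."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var()

# Stand-ins for a live selfie (noisy texture) and a smooth mask-like surface.
rng = np.random.default_rng(2)
live = np.clip(140 + rng.normal(0, 12, (256, 256, 3)), 0, 255).astype(np.uint8)
mask = cv2.GaussianBlur(np.full((256, 256, 3), 140, dtype=np.uint8), (9, 9), 0)

THRESHOLD = 50.0  # illustrative cutoff; in practice tuned on labeled data
for name, img in [("live", live), ("mask", mask)]:
    score = texture_score(img)
    print(name, round(score, 1), "pass" if score > THRESHOLD else "flag")
```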
Meier, who has devoted his career to fraud detection, says the most sophisticated scammers, the ones who run their schemes like businesses, build scam software and package it for sale on the dark web. Prices can range from $20 to thousands of dollars. “It could be something like a Chrome extension that helps bypass fingerprinting, or tools that can help create synthetic images,” he says.
As fraud evolves, the question of who should bear the losses is becoming ever more contentious. In the UK, for example, victims of unauthorized transactions, such as someone copying and using your credit card, are legally protected against losses. If you’re tricked into making a payment yourself, liability is less clear. In July, the country’s highest court ruled that a couple who had been deceived into sending money abroad couldn’t hold their bank liable for simply following their instructions. However, lawmakers and regulators have leeway to set different rules: The government is preparing to require banks to compensate fraud victims when money is transferred through Faster Payments, a transfer system between UK banks. Politicians and consumer advocates in other countries are pushing for similar changes, arguing that it’s unreasonable to expect people to recognize these increasingly sophisticated scams.
Banks fear that changing the rules would simply make life easier for fraudsters. Financial leaders around the world are also trying to shift some of the responsibility onto technology companies. The fastest-growing category is investment fraud, which often reaches victims through search engines, where scammers can easily buy sponsored advertising space. When would-be investors click through, they often find realistic prospectuses and other financial documents. After transferring their money, it can take months, if not years, before they try to redeem their “investment” and realize they’ve been scammed.
In June, a group of 30 lenders in the UK sent a letter to Prime Minister Rishi Sunak asking that tech companies contribute to compensation payments to victims of fraud on their platforms. The government says it is planning new legislation and other measures to combat online financial scams.
The banking industry is lobbying to spread responsibility more widely because the costs keep rising. Here, too, a familiar problem from economics applies to the scam economy: Like pollution from a factory, new technology creates an externality, a cost imposed on others. In this case, that cost is the greater reach and risk of fraud. Neither banks nor consumers want to be the ones left paying the bill.
Chris Sheehan spent nearly three decades in the country’s police force before joining National Australia Bank Ltd., where he heads investigations and fraud. With the bank’s continued investment, he has added about 40 people to his team over the past year. When he adds up all the staff and technology costs, “it scares me how big the number is,” he says.
“I’m hopeful, because there are technological solutions, but you never completely solve the problem,” he says. It reminds him of his days fighting drug gangs as a police officer, when framing that effort as a war on drugs was “a big mistake.” “I deliberately don’t frame it that way, as a war on scams, because that implies the war is winnable,” he says. “This is not winnable.”