The deepfake danger: When it wasn’t you on that Zoom call

Deepfakes pose a real threat to security and risk management, and it’s only going to get worse as the technology develops and bad actors gain access to malicious offerings such as deepfakes as-a-service.

In August, Patrick Hillman, chief communications officer of blockchain ecosystem Binance, knew something was off when he was scrolling through his full inbox and found six messages from clients about recent video calls with investors in which he had allegedly participated. 

“Thanks for the investment opportunity,” one of them said. “I have some concerns about your investment advice,” another wrote. Others complained the video quality wasn’t very good, and one even asked outright: “Can you confirm the Zoom call we had on Thursday was you?”

With a sinking feeling in his stomach, Hillman realised that someone had deepfaked his image and voice well enough to hold 20-minute “investment” Zoom calls trying to convince his company’s clients to turn over their Bitcoin for scammy investments. 

“The clients I was able to connect with shared with me links to faked LinkedIn and Telegram profiles claiming to be me inviting them to various meetings to talk about different listing opportunities," he says.

"Then the criminals used a convincing-looking holograph of me in Zoom calls to try and scam several representatives of legitimate cryptocurrency projects."

As the world’s largest crypto exchange, with $25 billion in volume at the time of this writing, Binance deals with its share of investment frauds that try to capitalise on its brand and steal people’s crypto. “This was a first for us,” Hillman says. “I see it as a harbinger of what we think is the future of AI-generated deepfakes used in business scams, but it is already here.”

The scam is so novel that, if it weren’t for astute investors detecting oddities and latency in the videos, Hillman may never have known about these deepfake video calls, despite the company’s heavy investments in security talent and technologies.

Deepfakes as-a-service

With AI-generated deepfakes getting easier to produce, they are already being used to social engineer trained employees and bypass security controls. 

The misuse of deepfakes to commit fraud, extortion, scams, and child exploitation is enough of a risk for businesses and the public that the Department of Homeland Security (DHS) recently issued a 40-page report on deepfakes. 

It details how deepfakes are created from composites of images and voices culled from online sources, and it offers opportunities to mitigate deepfakes at the intent, research, creation, and dissemination stages of an attack.

“We’re already seeing deepfakes as-a-service on the dark web, just like we see ransomware as a service used in extortion techniques, because deepfakes are incredibly effective in social engineering,” says Derek Manky, chief security strategist and VP of global threat intelligence at Fortinet’s FortiGuard Labs.

“For example, leveraging deepfakes is popular in BEC [business email compromise] scams to effectively convince somebody to send funds to a fake address, especially if they think it’s an instruction from a CFO.”

Whaling attacks on executives, BEC scams, and other forms of phishing and pharming represent the first wave of these attacks against businesses. For example, in 2019, scammers used a deepfake of a corporate CEO’s voice in a call marked urgent to convince a division chief to wire $243,000 to a “Hungarian supplier.”

But many experts see deepfakes as part of future malware packages, including in ransomware and biometrics subversion.

Retooling needed to spot deepfakes

Beyond convincing company executives to send money, deepfakes also present unique challenges to voice authentication frequently used by banks today, along with other biometrics, says Lou Steinberg, former CTO of Ameritrade. 

After Ameritrade, Steinberg went on to found cyber research lab CTM Insights to tackle problems like the data integrity weaknesses that allow deepfakes to bypass security controls. A demonstration by Israeli researchers brought him to the realisation that biometrics are just another form of data that criminals can manipulate.

“We saw Israeli researchers replacing images in a CT scanner to hide or add cancer into the scan images, and we realised this could be used in ransom situations when the bad guys say, ‘We’ll only show you the real results of your real CT scan if you pay us X amount of dollars,’” Steinberg says. 

As such, he says, there needs to be more focus on data integrity. “Deepfakes are AI-generated, and traditional signature technology can’t keep up because it only takes a little tweak of the image to change the signature.”
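His point is easy to demonstrate. The minimal sketch below, which assumes nothing beyond the Python standard library and a synthetic 16x16 greyscale “image”, shows a one-bit tweak completely changing a cryptographic hash while a crude perceptual average hash (one common fuzzy-matching approach, not necessarily CTM’s) barely moves:

```python
# Steinberg's point in miniature: flipping one bit of one "pixel"
# completely changes a cryptographic hash (the basis of traditional
# signatures), while a crude perceptual average hash barely moves.
# The 16x16 greyscale "image" is synthetic; real systems work on
# actual pixel data with libraries such as imagehash.
import hashlib

SIZE = 16
original = bytes((x * y) % 256 for y in range(SIZE) for x in range(SIZE))
tweaked = bytearray(original)
tweaked[40] ^= 0x01            # a one-bit tweak to a single pixel
tweaked = bytes(tweaked)

def average_hash(pixels: bytes) -> int:
    """One bit per pixel: is it brighter than the image's mean?"""
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Count the bits where two fingerprints disagree."""
    return bin(a ^ b).count("1")

print(hashlib.sha256(original).hexdigest())  # unrecognisably different...
print(hashlib.sha256(tweaked).hexdigest())   # ...after the one-bit tweak
print(hamming(average_hash(original), average_hash(tweaked)))  # 0 or 1 of 256 bits
```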

Knowing that traditional security controls won’t protect consumers and businesses from deepfakes, Adobe launched the Content Authenticity Initiative (CAI) to address the issue of content integrity in image and audio down to the developer level. 

CAI members have drafted open standards to develop manifests at the point of image capture (for example, from the digital camera taking the picture) so viewers and security tools can verify the authenticity of an image.
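Conceptually, such a manifest binds metadata and a hash of the image bytes to a device signature created at capture time. The sketch below is not the actual CAI/C2PA wire format; the field names, the Ed25519 key choice, and the verify flow are illustrative assumptions built on Python’s cryptography package:

```python
# A sketch of the idea behind CAI/C2PA-style provenance: the capture
# device signs a manifest binding metadata to a hash of the image, and
# any viewer can verify it later. NOT the real C2PA format; the field
# names and Ed25519 key choice are illustrative assumptions.
import hashlib
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

device_key = Ed25519PrivateKey.generate()   # would live inside the camera
public_key = device_key.public_key()        # published by the manufacturer

image = b"...raw sensor bytes..."
manifest = json.dumps({
    "device": "ExampleCam-1",               # hypothetical device name
    "captured_at": "2022-09-01T10:00:00Z",
    "image_sha256": hashlib.sha256(image).hexdigest(),
}, sort_keys=True).encode()
signature = device_key.sign(manifest)       # created at point of capture

def verify(image: bytes, manifest: bytes, signature: bytes) -> bool:
    """A viewer or security tool re-checks the signature and the hash."""
    try:
        public_key.verify(signature, manifest)
    except InvalidSignature:
        return False
    return json.loads(manifest)["image_sha256"] == hashlib.sha256(image).hexdigest()

print(verify(image, manifest, signature))              # True: intact
print(verify(image + b"tamper", manifest, signature))  # False: edited image
```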

The initiative has more than 700 supporting companies, many of them media providers such as USA Today and Gannett News, along with stock image providers like Getty Images and imaging products companies such as Nikon.

“The issue of deepfakes is important enough that the Adobe CEO is pressing for authentication of content behind the image and audio files,” explains Brian Reed, a former Gartner analyst who is now an advisor at Lionfish Technology Advisors.

"This is one example of how protecting against deepfakes will require a new set of countermeasures and context, including deep learning, AI, and other techniques to decipher if something is real or not."

He also points to the Deep Fakes Passport Act, introduced as bill HR 5532, which seeks funding for deepfake competitions in order to foster mitigating controls against them.

Steinberg suggests taking a cue from the financial industry, in which fraud detection is beginning to focus more on what a person is asking a system to do rather than just trying to prove who the person is on the other end of the transaction request. 

“We are over-focused on authentication and under-focused on authorisation, which comes down to intent,” he explains. “If you are not authorised to wire millions to an unknown entity in a third-world country, that transaction should be automatically rejected and reported, with or without the use of biometric authentication.”
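A toy version of that authorisation-first check might look like the following, where the payee list, limits, and WireRequest fields are invented for illustration rather than drawn from any real bank’s policy engine:

```python
# A toy version of "authorisation over authentication": the decision
# keys off what is being requested, not just who appears to request it.
# The payee list, limits, and WireRequest fields are invented for
# illustration, not drawn from any real bank's policy engine.
from dataclasses import dataclass

KNOWN_PAYEES = {"ACME Corp", "Vetted Supplier Ltd"}   # pre-approved beneficiaries
WIRE_LIMITS = {"division_chief": 50_000}              # per-requester caps

@dataclass
class WireRequest:
    requester: str
    payee: str
    amount: int
    authenticated: bool    # did the voice/biometric check pass?

def authorise(req: WireRequest) -> str:
    # Even a perfectly authenticated (or perfectly deepfaked) caller
    # cannot exceed what the requester is authorised to do.
    if req.payee not in KNOWN_PAYEES:
        return "REJECT AND REPORT: unknown payee"
    if req.amount > WIRE_LIMITS.get(req.requester, 0):
        return "REJECT AND REPORT: over authorised limit"
    if not req.authenticated:
        return "STEP UP: require out-of-band confirmation"
    return "ALLOW"

# The 2019 voice-deepfake wire would fail on authorisation alone:
print(authorise(WireRequest("division_chief", "Hungarian Supplier", 243_000, True)))
```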

Faking biometric authentication

Proving the “who” in a transaction is also problematic if attackers turn deepfakes against biometric controls, he continues. Biometric images and hashes, he says, are also data that can be manipulated with AI-driven deepfake technology to match the characteristics biometric scanners authenticate against, such as points on a face and an iris, or loops on a fingerprint.

Using AI to identify AI-generated images is a start, but most matching technologies are not granular enough, or they’re so granular that scanning a single image is onerous.

Brand protection company Allure Security scales CTM’s AI-driven micro-matching technology to identify changes against its database of tens of thousands of original brand images, scanning 100 million pages a day, says Josh Shaul, CEO of Allure.

“To identify deepfakes designed to bypass analysis and detection, we are using AI against AI,” he explains. “We can grow the same technology to detect fake images, profile pictures, online video and Web3 spots. For example, we recently looked at some impersonation in a Metaverse land purchase opportunity.”
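Neither company publishes its algorithm, but the general tile-level idea behind micro-matching can be sketched: fingerprint small regions independently, so a localised deepfake edit flags only the tiles it touches and whole-image comparison stays cheap. Everything in the sketch below (image size, tile size, SHA-256 tile fingerprints) is an illustrative assumption:

```python
# Illustrative tile-level ("micro") matching: hash 8x8 regions of a
# 64x64 greyscale image independently, so a localised edit only
# changes the fingerprints of the tiles it touches. This is a sketch
# of the general technique, not CTM's or Allure's actual algorithm.
import hashlib

SIZE, TILE = 64, 8

def tile_fingerprints(pixels: list[int]) -> dict[tuple[int, int], str]:
    """Map each tile's top-left corner to a hash of its pixels."""
    fps = {}
    for ty in range(0, SIZE, TILE):
        for tx in range(0, SIZE, TILE):
            tile = bytes(pixels[(ty + y) * SIZE + tx + x]
                         for y in range(TILE) for x in range(TILE))
            fps[(tx, ty)] = hashlib.sha256(tile).hexdigest()
    return fps

original = [(x ^ y) % 256 for y in range(SIZE) for x in range(SIZE)]
forged = original.copy()
forged[10 * SIZE + 10] = 255   # a localised "face swap" at pixel (10, 10)

orig_fp, forged_fp = tile_fingerprints(original), tile_fingerprints(forged)
changed = [pos for pos in orig_fp if orig_fp[pos] != forged_fp[pos]]
print(changed)                 # only the tile at (8, 8) is flagged
```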

Hillman also urges businesses to update their training and awareness, both internally for employees and executives, and externally for clients. 

“Whether deepfakes are going to be a problem is no longer a question of if but when, and I don’t think businesses have a playbook for how to defend against deepfake attacks,” he predicts.

“Use your outreach channels to educate. Perform external audits on executives to see who has content out there that makes them susceptible. Audit your controls. And be prepared with crisis management.”