Deepfake Threats and How to Counter Them

“Technology is neither good nor bad,” said spy and human hacker Peter Warmka, CFE, CPP, at the early morning session on the final main conference day of the 34th Annual ACFE Global Fraud Conference. It depends on how it’s being used. Deepfakes and artificial intelligence (AI), like other technologies, can be used for good purposes as well as for manipulation and harm. Certified Fraud Examiners (CFEs) can respond by understanding the technology, countering its risks and constantly verifying.

During his session “Deepfakes and Artificial Intelligence Threats,” Warmka elaborated on his role as a spy and human hacker, joking that he is not at all like James Bond (he prefers draft beer over martinis). He did, however, work overseas as a spy for the CIA, and his experiences showed him ways that fraudsters can leverage artificial intelligence to carry out their schemes.

Warmka explained that deepfakes are a subset of AI. They are synthetic media that can be digitally manipulated so that a person appears to be saying or doing something that they’re not. All it takes is a short video sample of someone speaking, and a deepfake can be created.

Deepfakes have some beneficial uses. Famous historical figures can be generated so that we can learn about the past and see a bit of history come to life more vividly. Soccer legend David Beckham delivered a PSA that was recreated in multiple languages so that people could hear his voice in their own language. Imagine a training video narrated in Morgan Freeman’s voice, made (with his permission) without him having to be there physically. Or, if an actor is unavailable to finish out a movie series, as was the case with Peter Cushing in the Star Wars films, their character could still appear in a sequel. Even ChatGPT is a beneficial resource with lots of cool applications, said Warmka.

Deepfakes can also be used for malicious purposes. In geopolitical environments, fake news can be created to carry out deception campaigns. On dark web forums, fraudsters have shared information on how to use AI to commit financial fraud. Through cyberextortion, they can use deepfakes to fabricate videos of victims engaging in illicit behavior and threaten to share them unless the victims pay. They can also commit identity theft, using a deepfake to spoof facial recognition and steal money from someone’s bank accounts.

As fraud fighters, we may have a higher risk of becoming fraud victims than we think. Technological advances are making it easier for fraudsters to commit crimes. For example, social engineering often starts with fake social media profiles built for spear phishing. Many conference attendees likely have a LinkedIn account, but approximately “5% of those profiles are fake,” said Warmka. A profile can be set up in just a few minutes and can amass hundreds of connections within days. Many users will connect with someone they don’t know personally, especially if they appear to share commonalities.

So, how do you distinguish between real and fake profiles? Do a reverse image search on profile pictures, advises Warmka, but keep in mind that AI-generated images are one of a kind and won’t appear in these searches. You can also spot fake profiles by looking at the text. Are there grammatical errors? Does it make sense? A fake profile might be skimpy, with very little detail, or overly detailed to overcompensate. But tools like ChatGPT are eroding this tell, since they can produce a convincing, well-written profile.
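
As a rough illustration of the reverse-image-search idea (not a tool Warmka mentioned), the Python sketch below compares a suspect profile photo against a local folder of images you have already collected, using perceptual hashing. The file names, folder and distance threshold are hypothetical, and it assumes the Pillow and imagehash libraries are installed.

```python
# Hypothetical sketch: check whether a profile photo is a near-duplicate of
# any image in a locally collected set (e.g., known stock or scraped photos).
# Requires Pillow and imagehash (pip install Pillow imagehash).
from pathlib import Path

import imagehash
from PIL import Image


def looks_like_known_image(profile_photo: str, known_dir: str, max_distance: int = 8) -> bool:
    """Return True if the profile photo is visually close to any known image."""
    target_hash = imagehash.phash(Image.open(profile_photo))
    for known in Path(known_dir).glob("*.jpg"):
        # A small Hamming distance between perceptual hashes means the images
        # are near-duplicates, even after resizing or light editing.
        if target_hash - imagehash.phash(Image.open(known)) <= max_distance:
            return True
    return False


if __name__ == "__main__":
    if looks_like_known_image("suspect_profile.jpg", "known_images/"):
        print("Profile photo matches a known image - likely copied.")
    else:
        print("No match - could be genuine, or AI-generated (one of a kind).")
```

As the last comment notes, no match proves little on its own: an AI-generated face won’t match anything, which is exactly Warmka’s caveat about reverse image searches.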

Fraudsters can also easily steal personal information by impersonating others through vishing (voice phishing). It only takes a few seconds of voice sampling – the length of a voicemail, for instance – to create a clone. Tactics fraudsters use to collect sensitive information include asking for career advice or claiming to be from the same university. They may even pose as a potential vendor or fellow employee.

The fact that deepfakes are making it easier for people to commit fraud is alarming, and the potential for deception carries serious consequences. It can undermine trust in evidence presented in courts, explained Warmka. Someone can dismiss genuine evidence by claiming it’s a deepfake, or refuse to believe something that is real.

Fraud fighters can counter the risks of deepfakes by understanding the technology and developing regulations; many of the technology platforms CFEs use are currently self-regulated. To verify that someone is actually who they say they are, Warmka recommends multifactor authentication, which combines something the user knows, something they have and something they are. It’s difficult for fraudsters to fake all three. Confirmation codes and deepfake detection tools will also help protect information.
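
As a simple, generic illustration of the “something you have” factor (not a specific tool from the session), here is a minimal Python sketch that verifies a time-based one-time password with the pyotp library. The enrollment flow and variable names are assumptions made for the example.

```python
# Minimal illustration of one MFA factor ("something you have"): a time-based
# one-time password (TOTP) verified against a shared secret.
# Requires pyotp (pip install pyotp).
import pyotp

# Generated once during enrollment and shared with the user's authenticator app.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# The user reads the current 6-digit code from their device and submits it.
submitted_code = totp.now()  # stand-in for real user input in this sketch

# The server checks the code against the same shared secret; a fraudster with
# only a cloned voice or a stolen password still lacks this factor.
if totp.verify(submitted_code):
    print("Code accepted - second factor satisfied.")
else:
    print("Code rejected.")
```

A code like this covers only one factor; combining it with a password (something you know) and a biometric check (something you are) is what makes the full scheme hard for a fraudster to fake.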

Security awareness is essential. Train employees regularly. “Verify, then trust,” said Warmka. If you trust and then verify, it’s too late.