Battling the Dark Side of AI: The Deepfake Dilemma
From doctored photos of the late Pope Francis sporting a white puffer jacket to videos of fake CNN newscasters, deepfakes — hyperrealistic synthetic media created using artificial intelligence (AI) — have infiltrated the digital landscape. Deepfakes have emerged as a source of misinformation, disinformation and fraud, manipulating public opinion and damaging reputations.
Corey Chadderton, an internal auditor for the Barbados Water Authority, led an engaging, entertaining session on the far-reaching effects of deepfakes. During his session at the 36th Annual ACFE Global Fraud Conference, he coached attendees on how to identify deepfake content and distinguish it from genuine media, assess the risks deepfakes pose and develop strategies to mitigate their impact.
Deepfake Vulnerability and the Liar’s Dividend
Chadderton kicked off his session by focusing on four areas most vulnerable to deepfakes:
Financial institutions are plagued by fake or modified documents, audio and video deepfake fraud, and synthetic identity fraud.
Corporate enterprises are grappling with rapidly evolving deepfake technology, including the emergence of devices that enable bad actors to disguise their voices and have conversations in real time.
Public figures and celebrities are increasingly battling the challenge of deepfake content that depicts them in explicit or compromising situations without their knowledge or consent.
National security agencies must contend with state adversaries and politically motivated individuals who aim to use deepfakes to erode public trust, degrade public discourse or sway an election.
To drive home corporate enterprises’ vulnerability to deepfakes, Chadderton cited a 2021 Forbes article about a criminal organization that used real-time voice cloning technology to reproduce a Dubai bank director’s voice. The deepfake fooled a manager at a Hong Kong bank into transferring $35 million to the criminal organization.
The threat deepfakes pose to national security around the globe is significant, as well. “Bad actors now have the ability and the capacity to influence public discourse, and they can also (depending on how far they want to take it) undermine social stability simply by creating divisive or inflammatory content,” Chadderton explained.
On the other end of the spectrum, Chadderton discussed a trend of using the proliferation of deepfakes as a scapegoat. The “liar’s dividend,” a term coined by law professors Bobby Chesney and Danielle Citron, describes an attempt to leverage the saturation of deepfakes in the digital landscape to deny the authenticity of legitimate audio, video or image content, especially if it shows proof of criminal conduct. In other words, Chadderton clarified, “Information can come to light of something that someone has actually said or done, but because of all that is happening [with deepfakes], they are able to say, ‘It wasn’t me.’”
Focus on the Fundamentals
In addition to leveraging technology against deepfakes, Chadderton stressed the importance of creating a foundation of strong fundamentals in the fight against synthetic media. To bolster an organization’s defenses against deepfakes, he recommended employing these four best practices:
Do not conflate familiarity with authenticity.
Practice the SIFT Method (stop, investigate, find, trace).
Adopt the three Rs (review, revision, resilience) to combat deepfakes.
Embrace training and awareness.
Digital literacy expert Mike Caulfield developed the SIFT Method, an evaluation strategy that helps determine whether online content can be trusted as a credible, reliable source of information. In practice, the SIFT Method encourages individuals to:
Stop before reading or sharing an article or video.
Investigate the source of the media.
Find better coverage that may or may not support the original claim.
Trace claims, quotes and media to the original context.
The three Rs strategy involves:
Reviewing security protocols and standards.
Revising security protocols and standards.
Making the security framework resilient and robust.
“The time and attention that we give to security can often feel unnecessary and in some cases perhaps even excessive. But the moment that something happens, it becomes painfully clear that too much just might have been enough,” Chadderton said.
Awareness Equals Preparedness
To defend against the threat of deepfakes, Chadderton encouraged organizations to be intentional about training and increasing awareness among all employees — those on the front lines, managers, shareholders, executives and board members. To build a robust awareness program, he advised taking these steps:
Analyze real-world failures. Examine not just what happened but why it happened.
Identify patterns. Fraudsters use common tactics to exploit trust and technology.
Bridge the gaps. Ensure policies, training and verification processes evolve ahead of threats.
Cultivate a mindset of skepticism and verification. Awareness isn’t just about knowledge; it’s about instinct.
Chadderton warned, “Silence isn’t safety. Don’t confuse calm with security.” In other words, the absence of deepfake incidents doesn’t mean an organization is secure.
Staying informed and vigilant, he said, is paramount to mounting the best defense against the dark side of AI. He added that by adhering to deepfake fraud prevention best practices and maintaining a strong digital hygiene policy, companies can build calibrated, well-articulated defense mechanisms that keep them one step ahead.