Do Not Be Afraid of the Dark Web. Be Aware.
Artificial intelligence (AI) continues to be a double-edged sword for fraud: it helps both perpetrate fraud and fight it. Cybersecurity expert Peter Warmka, CFE, CPP, broke down today’s shaky anti-fraud landscape, and where it is headed, in his presentation, “Unmasking AI-Powered Fraud Tools on the Dark Web: Threats and Countermeasures,” at the 36th Annual ACFE Global Fraud Conference.
“I have a passion [for AI tools],” said Warmka, “because I have a fear of what I am going to be talking about.”
AI is helping fuel several types of fraud right now, including identity theft, phishing attacks and deepfake fraud. Warmka opened his presentation with a story of deepfake deception: an assistant, “Jenna,” was deceived by what she thought was the voice of her supervisor, “Rodney,” and ended up transferring money to a fraudster.
The Tools Fraudsters Used
Internet Scrapers: These tools can gather information in a matter of minutes.
Caller ID Spoofing: Rodney was not calling Jenna, but his number appeared on her phone.
Voice Cloning: The voice on the call sounded like Rodney’s, but it was not him speaking. Because Rodney is a public speaker, his voice could have been cloned from a speaking sample posted online.
AI-Generated Invoices.
When AI-powered fraud tools are mentioned, the term “dark web” usually follows. But what does that mean?
The Web as a Three-Layer Iceberg
Surface web: Sites that are indexed, such as Google, Facebook, Instagram and YouTube. These sites account for only a small part of the web.
Deep web: Not indexed, but the information is out there, mostly in the form of company intranets. This can include research papers, private forums and medical records. You cannot get to these sites unless you are given access. The deep web is most likely the largest part of the web.
Dark web: About as large as the surface web, the dark web includes sites used for illegal trade, other illicit activity and more. This was the focus of Warmka’s presentation.
“Don’t go onto the dark web unless you know what you’re doing,” Warmka warned.
Warmka said the dark web started with positive intentions, such as providing anonymous and secure communications, protection for activists and journalists, secure channels for military and government personnel, and greater online privacy. However, the dark web is now also used for darker purposes, providing anonymity to criminals.
The Most Common Uses of the Dark Web
Illicit trade and commerce (marketplaces): These marketplaces are used for drug trafficking, human trafficking, illegal pornography and more.
Cybercrime services: Hacking, ransomware, malware, identity theft and more.
Fraud and financial crimes: Counterfeit currency and forged documents are just a few examples.
Illegal streaming and copyright violation.
Terrorist activity.
Forums: Where criminals exchange information, techniques and tools.
Whistleblowing and free speech.
Anonymous communication.
Ransomware-as-a-service (RaaS) and phishing kits (phishing-as-a-service).
Credential dump databases: Warmka said we have all been victimized by these at least once, if not several times.
Fake ID and document generators.
Deepfake creation services.
Botnets: Bots that attackers control remotely and that can be rented by the hour or by the day.
AI-powered social engineering bots.
Warmka played a recording that purported to come from a Wells Fargo customer service agent discussing fraudulent transactions but was actually a voicebot. He said that if the audience had not known beforehand, they probably would not have recognized it as one. The voicebot even reassured the customer that the call was legitimate and noted that the customer could call the Wells Fargo number on the back of their card to verify.
Additional Uses of the Dark Web
Credit card skimmers.
Synthetic identity kits.
SIM swapping services.
AI-generated fake profiles for identity fraud.
Social media scraping tools.
AI-enhanced fake reviews and reputation manipulation.
Fraud tutorials and guides.
AI-powered CAPTCHA solvers.
AI-generated investment schemes and scams.
AI-based blackmail tools.
Social media influence campaigns.
“AI deepfakes enhance credibility [of scams],” said Warmka. “Seeing is believing. Hearing is believing.” He elaborated that even people without a technical background can now use AI to commit fraud.
Fraudsters are also using a process called “jailbreaking.” Many AI apps and online systems have guardrails and restrictions in place to protect people’s privacy, but fraudsters are manipulating these programs to bypass those safeguards and exploit vulnerabilities.
“There is an evil twin of ChatGPT on the dark web,” said Warmka. “FraudGPT.” According to a video Warmka presented, FraudGPT uses machine learning to “generate deceptive and misleading content.”
Voice deepfake technology is evolving quickly. Warmka shared that fraudsters previously needed a 60-second voice sample to create a voice deepfake; now they need only about five to seven seconds. This is helping fraudsters commit schemes such as grandparent scams.
Thankfully, there is a way to push back against this evolving technology. Warmka said social circles should agree on a code word or phrase that can be used to verify that a caller is who they claim to be.
How to Counter These Risks
Public awareness.
Self-regulation by social media platforms.
Government regulations.
Dark web threat monitoring tools.
Deepfake detection tools.
Verification tools.