Where ChatGPT and Generative AI Help or Hinder a Fraud Examiner

ChatGPT is the fastest-growing consumer application ever launched, reaching 100 million active users in its first two months. The large language model (LLM) behind it was trained on more than 300 billion words of text to reach its current scale, but this tool is just one of many LLMs on the market, including Google Bard, Microsoft’s Copilot, Perplexity AI and many more.

At the 34th Annual ACFE Global Fraud Conference, attendees learned about the impact of ChatGPT and Generative AI on fraud prevention and analytics from Vince Walden, CFE, CPA, and CEO of KonaAI. Vince was the 2022 CFE of the Year award recipient and regularly writes about these topics for Fraud Magazine.

Most people interact with ChatGPT through unstructured data and use cases, but Vince posits that, applied to structured accounting and auditing data, it can be an unparalleled tool for fraud examination.

Examples of unstructured use cases include: 

  • Copywriting 

  • Translations 

  • Writing code 

  • Research assistance  

  • Conversational AI/chatbots 

On the other hand, examples of structured use cases specific to accounting and auditing could be (one possible translation of such a query into code is sketched after the list):

  • “Show me my highest risk transactions.” 

  • “Show me an invoice with the word ‘training’ in the description, in Argentina within the last 60 days.” 

  • “Find me transactions similar to <selected transaction>.”

  • “Which employees are at risk of paying a bribe?”

  • “What are my top ten highest risk transactions?” 
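
To make the second query concrete, here is a minimal sketch of the kind of filter an LLM might generate behind the scenes. None of this code comes from the session; the pandas DataFrame and its column names (`description`, `country`, `invoice_date`) are assumptions made purely for illustration.

```python
# Hypothetical sketch: the kind of structured filter an LLM might produce
# from the query "Show me an invoice with the word 'training' in the
# description, in Argentina within the last 60 days."
# Column names are assumed for illustration.
import pandas as pd

def recent_training_invoices(invoices: pd.DataFrame) -> pd.DataFrame:
    """Filter invoices matching the example natural-language query."""
    cutoff = pd.Timestamp.today() - pd.Timedelta(days=60)
    mask = (
        invoices["description"].str.contains("training", case=False, na=False)
        & (invoices["country"] == "Argentina")
        & (invoices["invoice_date"] >= cutoff)  # expects a datetime column
    )
    return invoices[mask]
```

The point is that a structured prompt can resolve to a deterministic filter an examiner can inspect and validate, rather than free-form generated text.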

These examples make clear that fraud examiners can benefit immediately from LLMs and Generative AI. However, Vince reminds us, “It is your job to investigate, it is your job to validate.” Attendees were reminded that technology-assisted review (TAR), in which machine-ranked results are validated by human reviewers, is a useful middle ground when adopting machine learning tools. Vince says that both your interaction with the LLM and your validation of the data are what will further improve its responses and ultimately support your end goal.
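
As a rough illustration of the TAR idea, not of Vince’s or KonaAI’s tooling, the sketch below trains a simple classifier on a human-reviewed seed set and uses it to rank unreviewed documents, so examiners validate the highest-scoring items first. The example documents, labels and model choice are all assumptions.

```python
# Minimal technology-assisted review (TAR) sketch: a classifier trained on a
# human-reviewed seed set ranks remaining documents for prioritized review.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

reviewed_docs = [
    "wire transfer to shell company",
    "routine office supply order",
    "cash payment to local official",
    "quarterly team lunch receipt",
]
labels = [1, 0, 1, 0]  # 1 = flagged by a human reviewer as suspicious

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(reviewed_docs)
model = LogisticRegression().fit(X, labels)

# Rank unreviewed documents by predicted risk, highest first.
unreviewed = ["consulting fee paid in cash", "printer paper restock"]
scores = model.predict_proba(vectorizer.transform(unreviewed))[:, 1]
for doc, score in sorted(zip(unreviewed, scores), key=lambda p: -p[1]):
    print(f"{score:.2f}  {doc}")
```

In practice, the examiner’s decisions on the ranked items feed back in as new labels, which is the interaction-plus-validation loop Vince describes.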

A common concern among session attendees exploring potential uses of LLMs in their work was how to protect their company’s data when using ChatGPT. On this, Vince explained that there are already tools and companies working to protect sensitive data, and added that a company’s IT team can create an environment where the LLM “lives,” so that data never touches an outside network and risks exposure.
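
A minimal sketch of what such an internally hosted setup might look like, assuming a model served on a host behind your firewall. The URL, model name and payload shape follow an Ollama-style local server and are assumptions, not a configuration recommended in the session.

```python
# Query an LLM hosted inside the company network so transaction data never
# leaves your environment. Endpoint and payload are illustrative assumptions.
import requests

LOCAL_LLM_URL = "http://localhost:11434/api/generate"  # internal host only

def ask_internal_llm(prompt: str) -> str:
    """Send a prompt to the locally hosted model and return its reply."""
    resp = requests.post(
        LOCAL_LLM_URL,
        json={"model": "llama3", "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]

print(ask_internal_llm("Summarize the red flags in these five invoices: ..."))
```

Because the request never leaves the internal network, sensitive details stay inside the environment the IT team controls.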

As with any innovation, and given ChatGPT’s prominence and ease of use, fraudsters are exploring its use cases just as fraud examiners are. For example, you don’t have to be a hacker to write dangerous code with ChatGPT or other tools on the open market, and experienced hackers can use it to amplify their already dangerous code.

So, what can ChatGPT not do? First, its responses cannot be trusted unquestioningly and should always be fact-checked. In many cases, some of which are making the news, ChatGPT has completely fabricated stories, laws, history and statistics. Many people also question whether Generative AI output constitutes plagiarism, and Vince presumes that legal teams are already dedicated to answering that question.

Whether it is used to write hacking code, to generate new or improved fraud schemes, or for data analysis and fraud examination, we will see the implications of Generative AI, good or bad, in the very near future.