Gary M. Shiffman, Ph.D., is the author of “The Economics of Violence: How Behavioral Science Can Transform our View of Crime, Insurgency, and Terrorism” (Cambridge University Press, March 2020). He teaches economic science and national security at Georgetown University and is founder and CEO of Giant Oak, the creator of Giant Oak Search Technology.
The Secret Service has estimated that $30 billion of stimulus funds will be stolen. And according to Axios and the Aspen Institute, the pandemic has “unleashed a cybercrime wave,” with the FBI’s Internet Crime Complaint Center receiving over three quarters as many complaints by May of this year as it received in all of 2019.
The fight against these illicit activities requires radical reform. Audits and the threat of audits are not enough to deter financial fraud. We need real-time screening up front, preventive measures rather than reactive ones, to stop financial crime before it happens. We need technology to do this efficiently.
In the decades I have spent studying criminal behavior, I have seen that people who commit crimes weigh the possibility of getting caught against the expected benefit of success. Under a system that relies on audits, a reactive measure, only a small percentage of fraudulent transactions will ever be identified, according to a 2020 McKinsey study.
In the aftermath of hurricanes Katrina and Rita in 2005, for example, 16% of the $6.3 billion in relief money distributed to victims was spent improperly. In contrast, during the economic stimulus of 2009 (the American Recovery and Reinvestment Act), “less than 0.2 percent of all reported awards [resulted in] fraud investigations,” according to a White House report.
The difference between these two results (16% fraud vs. 0.2% fraud) is screening. Under the ARRA, the government disbursed funds over a two-year period, taking the time to screen applicants. After Katrina and Rita, there was no time to screen thoroughly, and the technology of the day offered little help.
We live in a different technological world in 2020 than we did in 2005. We now have machine learning and artificial intelligence that can screen and vet individuals in a fraction of the time. By combining AI/ML with behavioral science, we can train algorithms to find and reveal patterns of illicit human activity quickly and effectively, accomplishing the dual mission of getting assistance checks out quickly while deterring fraud.
Industry leaders and entrepreneurs can take three steps to achieve this dual goal. First, we must train algorithms on large and diverse datasets to distinguish fraud from non-fraud. For any targeted illicit activity, the more good data a machine receives, the more accurate its output will be.
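To make the first step concrete, here is a minimal sketch of what "training on labeled data" means in practice. The features, data, and model are hypothetical illustrations (a tiny logistic regression on synthetic transactions), not any vendor's actual system; a production screen would use far richer features and an established ML library.

```python
# Minimal sketch: learning a fraud/not-fraud scorer from labeled examples.
# All features and data here are synthetic illustrations, not a real model.
import math
import random

random.seed(0)

def make_example(is_fraud):
    # Hypothetical features: normalized transaction amount and account age.
    # Simulated fraud skews toward high amounts on new accounts.
    if is_fraud:
        return ([random.uniform(0.6, 1.0), random.uniform(0.0, 0.3)], 1)
    return ([random.uniform(0.0, 0.5), random.uniform(0.2, 1.0)], 0)

# A "large and diverse" labeled dataset, simulated.
data = [make_example(i % 2 == 0) for i in range(2000)]

# Logistic regression fit by plain gradient descent (no external libraries).
w, b, lr = [0.0, 0.0], 0.0, 0.1
for _ in range(200):
    for x, y in data:
        p = 1.0 / (1.0 + math.exp(-(w[0] * x[0] + w[1] * x[1] + b)))
        g = p - y  # gradient of the log-loss for this example
        w[0] -= lr * g * x[0]
        w[1] -= lr * g * x[1]
        b -= lr * g

def fraud_score(x):
    """Score in [0, 1]; higher means more fraud-like, given the training data."""
    return 1.0 / (1.0 + math.exp(-(w[0] * x[0] + w[1] * x[1] + b)))
```

The point of the sketch is the dependence on data: the scoring function is entirely induced from labeled examples, so broader and cleaner labels directly translate into better screening.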
Second, we must ensure that the uniquely human component of analysis and adjudication remains intentional and robust. Technology can make human work easier by removing boring and mundane tasks, but humans make ethical choices and value judgments that machines cannot. We need to allocate scarce human resources to adjudication-type tasks and create a balance between machines and human actors. We can and should use machines to raise awareness of human biases while also empowering and training humans to identify and overcome ML limitations.
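One common way to operationalize this machine/human balance is confidence-based triage: the machine disposes of only the cases it is confident about and routes the ambiguous middle, where value judgments matter most, to human adjudicators. The thresholds and score values below are illustrative assumptions, not a prescribed policy.

```python
# Sketch of machine/human triage: automate only high-confidence decisions,
# route uncertain cases to human adjudication. Thresholds are illustrative.

def triage(score, clear_below=0.2, review_above=0.8):
    """Map a screening score in [0, 1] to an action.

    Scores near 0 or 1 are handled automatically; the ambiguous middle
    goes to a human, concentrating scarce analyst time where it matters.
    """
    if score < clear_below:
        return "auto-clear"
    if score > review_above:
        return "flag-for-investigation"
    return "human-adjudication"

# Example queue of screening scores from some upstream model.
queue = [0.05, 0.45, 0.92, 0.70]
decisions = [triage(s) for s in queue]
```

Widening or narrowing the human-review band is itself a policy choice: tighter thresholds push more work to analysts but reduce the chance that a biased or mistaken model acts unilaterally.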
Finally, we must spread awareness of the state of the art in fraud and financial crime screening, so that government agencies understand this is no longer 2005: getting money out the door to those with legitimate needs does not require accepting billions of dollars of inevitable fraud and waste. Shame on us if the CARES Act underperforms the Katrina/Rita programs in deterring fraud. In 2020, we must encourage the adoption of screening technologies that make fraud detection more efficient.
Properly conceived and deployed, machines can leverage massive amounts of data to combat fraud and cybercrime of all kinds, all while respecting individual privacy and minimizing the effects of bias. If we want to enhance safety and security, we must reform our institutional approach to finding fraud by using AI/ML to make screening easy.