At its core, money is information. When the numbers in a person’s bank account go up or down, this means they get richer or poorer.
But this framing rests on an assumption: for the system to work, all parties must agree on the information. With paper money, the government ensures agreement by printing the value on the face of each banknote. In the digital world, agreement depends on recording and preserving the chain of custody for every transaction, so that each transfer of wealth can be verified.
So far, so good. But a challenge arises when this mechanism is abused. How can money be relied upon in an age of disinformation?
Financial fraud takes many forms, including stolen credit card data, identity theft, dishonest attempts to collect on insurance, and many other everyday scams that exploit weak spots related to information security. As digital tools are increasingly put to use by scammers of all kinds, fintech companies are applying the same technology to detect and prevent these types of fraud — with impressive results.
Bringing trust through transparency
Artificial intelligence (AI) can be trained to search datasets for anomalies, such as a sudden change in spending habits, or a credit card number being used in Bangkok and Paris on the same afternoon. When suspicious activity is detected, the AI system can flag and intercept it, sending an alert to the credit card issuer, card owner, bank, and/or merchant.
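As a rough illustration of this kind of anomaly detection, the sketch below trains an unsupervised model on synthetic transaction data and flags outliers. The feature names, thresholds, and alert logic are assumptions made for the example, not a description of any particular fraud system.

```python
# Illustrative sketch only: an unsupervised anomaly detector over
# synthetic transaction features. All values here are made up.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Simulated "normal" history: [amount_usd, km_from_home_city]
normal = np.column_stack([
    rng.normal(60, 20, 1000),   # everyday purchase amounts
    rng.normal(5, 3, 1000),     # purchases close to home
])

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal)

# New transactions: one routine, one large purchase far from home
new_txns = np.array([
    [55.0, 4.0],        # looks like normal spending
    [2400.0, 9500.0],   # sudden spike, made on another continent
])

flags = detector.predict(new_txns)   # -1 means "anomalous"
for txn, flag in zip(new_txns, flags):
    if flag == -1:
        print(f"ALERT: transaction {txn} flagged for review")
```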
Machine learning (ML) accomplishes the same task, but through a different path. Instead of programmers coding an alert system on a case-by-case basis, ML systems receive a continuous stream of data, including both legitimate and fraudulent transactions. The system trains itself to discover patterns which indicate fraudulent activity. Those patterns are internally checked and refined without human intervention as data comes in, so that they can be applied to new transactions in real time.
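A minimal sketch of that supervised approach might look like the following. The labels, features, and model choice are assumptions for illustration only; a production system would train on real transaction histories.

```python
# Illustrative sketch: a classifier trained on labelled transactions
# (0 = legitimate, 1 = fraudulent), then applied to a new one.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 5000

# Hypothetical features: amount, hour of day, new-device flag
X = np.column_stack([
    rng.lognormal(3.5, 1.0, n),
    rng.integers(0, 24, n),
    rng.integers(0, 2, n),
])
# Synthetic labels: larger purchases on unfamiliar devices are riskier
risk = (X[:, 0] > 100) & (X[:, 2] == 1)
y = (risk & (rng.random(n) < 0.8)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
model = GradientBoostingClassifier(random_state=1).fit(X_train, y_train)

# Score an incoming transaction in "real time"
incoming = np.array([[1200.0, 3, 1]])   # $1200, 3 a.m., unfamiliar device
fraud_prob = model.predict_proba(incoming)[0, 1]
print(f"Estimated fraud probability: {fraud_prob:.2f}")
```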
Compared with hand-coded rules, ML is more efficient for the task at hand, but many people and institutions regard it as less trustworthy because it functions as a ‘black box’ whose decision-making processes are inscrutable. No matter how well such a system may work, few people, and even fewer institutions, are inclined to outsource their financial security to an algorithm whose inner workings are a mystery to both its users and its creators.
To address this concern, and to satisfy government regulators, fraud detection systems are adding explainability to their feature set. Explainability gives authorized people a meaningful look at the variables that cause the algorithm to send (or not send) a fraud warning for a particular transaction. Engineers who keep an eye on these calculations can also ensure that irrelevant variables, such as the credit card owner’s first name, are not given undue weight.
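One simple way an engineer might check that an irrelevant field carries no weight is permutation importance: shuffle that field and see whether the model's performance changes. The sketch below applies this idea to made-up features, including a hashed first name that should contribute nothing; the technique and names are this example's assumptions, not part of any specific fraud product.

```python
# Illustrative sketch: verifying that an irrelevant column (a hashed
# first name) does not influence a fraud model, via permutation importance.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n = 4000
feature_names = ["amount", "new_device", "first_name_hash"]

X = np.column_stack([
    rng.lognormal(3.5, 1.0, n),   # transaction amount
    rng.integers(0, 2, n),        # unfamiliar device?
    rng.integers(0, 1000, n),     # hashed first name (should be irrelevant)
])
y = ((X[:, 0] > 100) & (X[:, 1] == 1)).astype(int)

X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=2)
model = RandomForestClassifier(random_state=2).fit(X_tr, y_tr)

result = permutation_importance(model, X_val, y_val, n_repeats=10, random_state=2)
for name, score in zip(feature_names, result.importances_mean):
    print(f"{name:16s} importance: {score:.4f}")   # name hash should be near zero
```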
Techniques used to enhance explainability
Several methods are commonly used to provide transparency for the calculations used by fraud detection systems.
One of these, SHAP (SHapley Additive exPlanations), quantifies how much weight the algorithm assigns to each variable. For each transaction, it gives every factor a score indicating whether it pushed the evaluation toward trust (such as fitting in with normal spending patterns) or toward risk (such as the transaction being carried out on an unfamiliar digital device). Averaged across many transactions, these scores summarize the impact of each variable across the entire dataset.
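In code, computing SHAP values with the shap library might look something like the sketch below. The model, features, and labels are synthetic placeholders chosen for the example.

```python
# Illustrative sketch: per-feature SHAP values for a fraud classifier,
# aggregated into a global view of each variable's impact.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(3)
n = 3000
feature_names = ["amount", "km_from_home", "new_device"]

X = np.column_stack([
    rng.lognormal(3.5, 1.0, n),
    rng.exponential(10.0, n),
    rng.integers(0, 2, n),
])
y = ((X[:, 0] > 100) & (X[:, 2] == 1)).astype(int)

model = GradientBoostingClassifier(random_state=3).fit(X, y)

# One contribution per feature per transaction
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Positive values push a transaction toward "fraud", negative toward "legitimate".
# Averaging absolute values summarizes each variable's weight across the dataset.
mean_impact = np.abs(shap_values).mean(axis=0)
for name, impact in zip(feature_names, mean_impact):
    print(f"{name:14s} mean |SHAP|: {impact:.3f}")
```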
If SHAP represents a universal key, then the LIME (Local Interpretable Model-agnostic Explanations) method applies that key to the individual case at hand. LIME builds a simple, interpretable model that approximates the behavior of the complex model in the neighborhood of the specific evaluation being examined. It offers insight into why that particular output was generated, thereby building trust for individual cases.
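A hedged sketch of how LIME might be applied to a single flagged transaction follows, using the lime library and made-up features; the numbers and names are assumptions for illustration.

```python
# Illustrative sketch: a local LIME explanation for one transaction.
import numpy as np
from lime.lime_tabular import LimeTabularExplainer
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(4)
n = 3000
feature_names = ["amount", "km_from_home", "new_device"]

X = np.column_stack([
    rng.lognormal(3.5, 1.0, n),
    rng.exponential(10.0, n),
    rng.integers(0, 2, n),
])
y = ((X[:, 0] > 100) & (X[:, 2] == 1)).astype(int)

model = RandomForestClassifier(random_state=4).fit(X, y)

explainer = LimeTabularExplainer(
    X,
    feature_names=feature_names,
    class_names=["legitimate", "fraud"],
    mode="classification",
)

# Explain one suspicious transaction: large amount, far away, new device
suspicious = np.array([950.0, 80.0, 1.0])
explanation = explainer.explain_instance(suspicious, model.predict_proba, num_features=3)
for rule, weight in explanation.as_list():
    print(f"{rule:30s} weight: {weight:+.3f}")
```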
Another helpful explanatory tool is the counterfactual explanation: deliberately changing one variable at a time to illustrate how that change would have affected the ultimate evaluation. This method is useful for understanding decision boundaries, identifying potential biases, and letting the user explore different possible scenarios.
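A counterfactual probe can be as simple as re-scoring a flagged transaction with one variable altered and comparing the results. The sketch below does exactly that with hypothetical features and a synthetic model.

```python
# Illustrative sketch: a one-variable-at-a-time counterfactual probe.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(5)
n = 3000

# Features: [amount, km_from_home, new_device]; labels are synthetic
X = np.column_stack([
    rng.lognormal(3.5, 1.0, n),
    rng.exponential(10.0, n),
    rng.integers(0, 2, n),
])
y = ((X[:, 0] > 100) & (X[:, 2] == 1)).astype(int)
model = GradientBoostingClassifier(random_state=5).fit(X, y)

flagged = np.array([900.0, 60.0, 1.0])   # the transaction under review
baseline = model.predict_proba(flagged.reshape(1, -1))[0, 1]
print(f"Original fraud probability: {baseline:.2f}")

# Change one variable at a time and observe how the evaluation shifts
counterfactuals = {
    "the amount were reduced to $80": (0, 80.0),
    "it were made on a familiar device": (2, 0.0),
}
for label, (idx, value) in counterfactuals.items():
    variant = flagged.copy()
    variant[idx] = value
    prob = model.predict_proba(variant.reshape(1, -1))[0, 1]
    print(f"If {label}: fraud probability becomes {prob:.2f}")
```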
Toward a trust-powered economy
Fintech companies rely on transparent, trustworthy, and effective models that not only detect fraud but also explain the reasoning behind these determinations, thereby ensuring the ethical and responsible use of both AI and ML in the financial sector.
The operational benefits of such systems include enhanced fraud prevention, improved risk mitigation, compliance with regulations such as the GDPR, and alignment with the European Commission’s ‘Ethics Guidelines for Trustworthy AI’.
On a more human level, the benefits are perhaps more meaningful. Put simply, effective fraud detection systems protect people’s money. By evaluating the relevant data for each transaction, such systems also increase trust between customers, vendors, and financial institutions. Meanwhile, the added transparency helps to clean up the information space, and maintain ethical standards within the fintech industry.
Of course, no security method is perfect, and scammers of all kinds will surely continue their search for weak spots even in AI- or ML-protected systems. But these smart tools are highly effective against many types of fraud — enabling greater convenience, removing worry, and letting society enjoy a much healthier relationship with its money.