Fraud detection in traditional financial services focuses on, surprise, surprise, detecting fraudulent transactions. And there's no doubt that generative AI has added a powerful weapon to the fraud detection arsenal.

With the aim of identifying fraud patterns in transactions, financial services organizations have started leveraging large language models to closely examine transaction data.
However, there is another, often overlooked, aspect of fraud: human behavior. It has become clear that focusing solely on fraudulent activity is not sufficient to mitigate risk. We also need to detect signs of fraud by carefully examining how people behave.
Fraud does not happen in a vacuum. People commit fraud, often while using their own devices.
GenAI-Powered Behavioral Biometrics
For example, organizations are already analyzing how people interact with their devices: the angle at which they hold them, how much pressure they apply to the screen, directional movements, surface swipes, typing rhythm and more.
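To make this concrete, here is a minimal sketch of one such biometric signal, typing rhythm. Everything in it, the function, the data, and the threshold, is an illustrative assumption rather than any vendor's implementation: it compares a session's inter-keystroke timings against a user's historical baseline and flags large deviations.

```python
import statistics

def rhythm_anomaly_score(baseline_gaps_ms, session_gaps_ms):
    """Compare a session's inter-keystroke gaps (ms) against the user's
    historical baseline and return a z-score-like deviation measure.

    Illustrative only: production behavioral biometrics combine many
    signals (pressure, device angle, swipe paths) in learned models.
    """
    mu = statistics.mean(baseline_gaps_ms)
    sigma = statistics.stdev(baseline_gaps_ms) or 1.0
    session_mu = statistics.mean(session_gaps_ms)
    return abs(session_mu - mu) / sigma

# Hypothetical data: the account owner types steadily (~120 ms gaps);
# this session is much faster and more uniform, as with a script or imposter.
baseline = [118, 131, 104, 140, 122, 109, 135, 127]
session = [62, 58, 60, 61, 59, 63]

score = rhythm_anomaly_score(baseline, session)
if score > 3.0:  # assumed escalation threshold
    print(f"Behavioral anomaly: deviation score {score:.1f}, escalate for review")
```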
Now it is time to broaden the scope of behavioral indicators: it's time for GenAI to drill down into the subtleties of human communication, written and oral, to identify potentially fraudulent behavior.
Using Generative AI to Analyze Communications
GenAI can be trained using natural language processing to "read between the lines" of communication and understand the nuances of human language. The clues it uncovers can be a starting point for investigation, a compass that focuses efforts within the realm of transactional data.
How does this work? There are two sides to the AI coin in communication analysis: the conversational side and the analytical side.
On the conversational side, GenAI can analyze digital communication on any platform, voice or written. For example, every merchant interaction can be scrutinized and, most importantly, understood in its context.
Today's GenAI platforms are trained to understand nuances of language that may indicate suspicious activity. To take a simple example, these models are trained to catch deliberately ambiguous phrasing ("Is our mutual friend happy with the results?") or unusually broad statements. By combining language understanding with contextual awareness, these platforms can calculate potential risk, correlate it with relevant transaction data, and flag suspicious interactions for human follow-up.
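As a rough illustration of that pipeline, the sketch below scores a message for ambiguous phrasing and raises the score when the sender also has out-of-pattern transactions in the same window. Everything here, the phrase list, the stubbed model call, and the thresholds, is a simplifying assumption; a real platform would use a trained language model rather than keyword matching.

```python
from dataclasses import dataclass, field

# Illustrative stand-ins for a trained language model's judgment.
AMBIGUOUS_PHRASES = ("our mutual friend", "the usual arrangement", "you know what i mean")

@dataclass
class Flag:
    message_id: str
    risk: float
    reasons: list = field(default_factory=list)

def language_risk(text: str):
    """Stub for an LLM call estimating how evasive or ambiguous a message is."""
    t = text.lower()
    reasons = [p for p in AMBIGUOUS_PHRASES if p in t]
    return min(1.0, 0.4 * len(reasons)), reasons

def score_message(msg: dict, recent_txns: list, threshold: float = 0.5):
    risk, reasons = language_risk(msg["text"])
    # Correlate with transaction data: large transactions near the same time
    # raise the combined risk (a simplistic stand-in for real correlation).
    unusual = [t for t in recent_txns if t["amount"] >= 9000]  # near reporting limits
    if reasons and unusual:
        risk = min(1.0, risk + 0.3)
        reasons.append(f"{len(unusual)} large transaction(s) in the same window")
    return Flag(msg["id"], risk, reasons) if risk >= threshold else None

# Hypothetical message and transactions for demonstration.
flag = score_message(
    {"id": "m-1042", "text": "Is our mutual friend happy with the results?"},
    [{"amount": 9500}, {"amount": 120}],
)
if flag:
    print(f"Escalate {flag.message_id}: risk={flag.risk:.2f}, reasons={flag.reasons}")
```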
On the analytical side, AI makes life a lot easier for investigators, analysts, and other fraud prevention professionals. These are teams overwhelmed with data and alerts, just like their IT and cybersecurity colleagues. AI platforms dramatically reduce alert fatigue by cutting through the overwhelming volume of data, enabling professionals to focus only on high-risk cases.
What's more, AI platforms empower fraud prevention teams to ask questions in natural language. This helps teams work more efficiently, free of the one-size-fits-all curated queries used by legacy tools. Because AI platforms understand open-ended questions, investigators can extract value by starting broad and then drilling down with follow-up questions, without first training algorithms for each query.
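Here is a sketch of what that investigator workflow might look like in code. The `FraudCopilot` class and its `ask` method are hypothetical, meant only to show how keeping conversation state lets a broad question be followed by narrower ones without retraining anything.

```python
class FraudCopilot:
    """Hypothetical wrapper around a GenAI platform's chat endpoint.

    The point is the shape of the workflow: each question is sent with the
    accumulated history, so follow-ups can reference earlier answers.
    """

    def __init__(self, llm_client):
        self.llm = llm_client
        self.history = []  # (role, text) pairs

    def ask(self, question: str) -> str:
        self.history.append(("investigator", question))
        answer = self.llm.complete(self.history)  # assumed client interface
        self.history.append(("assistant", answer))
        return answer

# Broad question first, then drill down -- no curated query templates,
# no retraining between steps (client construction omitted):
# copilot = FraudCopilot(llm_client)
# copilot.ask("Which merchants had unusual communication patterns last quarter?")
# copilot.ask("For the top one, show the messages that drove the risk score.")
```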
Building Trust
A key aspect of AI solutions in the compliance-sensitive financial services ecosystem is that they are largely delivered through application programming interfaces (APIs). This means potentially sensitive data cannot be analyzed on-premises and must instead be protected behind regulatory-approved cybersecurity controls. While some solutions are offered in on-premises versions to mitigate this, many organizations lack the in-house computing resources needed to run them.
Yet perhaps the most difficult challenge facing GenAI-powered fraud detection and monitoring in the financial services sector is trust.
GenAI is not yet a known quantity. It is still largely a black box: no one, not even its creators, fully understands how it reaches its conclusions. This is further exacerbated by the fact that GenAI platforms remain subject to occasional hallucinations, instances where AI models produce outputs that are unrealistic or nonsensical.
Trust on the part of regulators as well as confidence in GenAI on the part of investigators and analysts remains elusive. How can we build this trust?
For financial services regulators, trust in GenAI can be facilitated through increased transparency and interpretability, for starters. Platforms need to clarify the decision-making process and clearly document each AI model's architecture, training data, and algorithms. They need interpretability-enhancing methodologies that include explanatory visualizations and highlights of key features, along with disclosure of key limitations and potential biases.
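As a rough illustration of those key-feature highlights, the sketch below packages a flagged alert with per-feature contributions, the model version, and stated limitations. The schema is an assumption for illustration, not a regulatory standard.

```python
import json

def explain_flag(flag_risk: float, contributions: dict, model_version: str):
    """Package a flagged alert with its per-feature contributions so a
    reviewer (or regulator) can see what drove the score.

    `contributions` maps feature name -> portion of the risk score; the
    structure is an illustrative assumption, not a standard schema.
    """
    total = sum(contributions.values()) or 1.0
    return {
        "risk": round(flag_risk, 2),
        "model_version": model_version,
        "top_features": sorted(
            ({"feature": k, "share": round(v / total, 2)} for k, v in contributions.items()),
            key=lambda d: -d["share"],
        ),
        "limitations": "Language-model scores may be miscalibrated on out-of-domain slang.",
    }

# Hypothetical flagged alert from the earlier scoring example.
print(json.dumps(explain_flag(
    0.7,
    {"ambiguous phrasing": 0.4, "large transactions nearby": 0.3},
    "comm-risk-2024.06",
), indent=2))
```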
For financial services analysts, building a bridge of trust can start with extensive training and education, explaining how GenAI works as well as diving deep into its potential limitations. Trust in GenAI can be further facilitated by adopting a collaborative human-AI approach. By helping analysts see GenAI systems as partners rather than replacements, we emphasize the synergy between human judgment and AI capabilities.
The Bottom Line
GenAI can be a powerful tool in the fraud detection arsenal. Unlike traditional methods that focus on detecting fraudulent transactions, GenAI can analyze human behavior and language to uncover fraud that those methods miss. It can also ease the burden on fraud prevention professionals by dramatically reducing alert fatigue.
Yet challenges remain. The onus of building the trust that will enable widespread adoption of GenAI-powered fraud mitigation falls on providers, users and regulators alike.
Dr. Shlomit Labin is VP of Data Science at Shield, which enables financial institutions to more effectively manage and mitigate communications compliance risks. She earned her PhD in Cognitive Psychology from Tel Aviv University.