Enhancing Name Screening with AI and Large Language Models.
By Patrick Bowe, Head of Product, AML/KYC at AML RightSource
Jul 30, 2024

This article is contributed content.

In the evolving realm of financial crime compliance, identity and name screening remains a critical yet challenging task for financial institutions. High false positive rates, poor data quality, and an expanding list of sanctioned entities and individuals complicate the process, often resulting in significant operational inefficiencies. The integration of artificial intelligence (AI) and large language models (LLMs) offers a groundbreaking solution.

These advanced technologies promise to enhance accuracy, reduce manual workload, and transform the way financial institutions manage compliance. This article explores the current landscape of name screening, the transformative potential of AI and LLMs, and how organizations can strategically incorporate these technologies to stay ahead in the fight against financial crime.

The current name screening landscape.

Name screening is a critical component of all compliance programs globally. Originally aimed at identifying individuals or entities that may be involved in illicit activities such as money laundering, terrorism financing, or sanctions violations, the process has evolved to incorporate identifying high-risk Politically Exposed Persons (PEPs) or State-Owned Entities (SOEs), as well as identifying prospects that financial institutions simply do not want to do business with. The process involves comparing customer names against various watchlists. Typically, each list will include some of the attributes described below (a sketch of such a record follows the list):

  • Name (actual and/or aliases)
  • Address
  • Nationality or Country of Incorporation
  • Country of Residence or Jurisdiction
  • Identification details
  • Date of Birth or Incorporation
  • Place of Birth
  • Gender
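
To make the idea concrete, here is how a single watchlist entry might be modeled as a structured record in Python. The field names are illustrative, not any list provider's actual schema.

    from dataclasses import dataclass, field
    from typing import Optional

    # Illustrative watchlist record; fields mirror the attributes listed above.
    @dataclass
    class WatchlistEntry:
        name: str                            # primary name as published on the list
        aliases: list[str] = field(default_factory=list)
        address: Optional[str] = None
        nationality: Optional[str] = None    # or country of incorporation
        residence: Optional[str] = None      # country of residence or jurisdiction
        id_numbers: list[str] = field(default_factory=list)
        date_of_birth: Optional[str] = None  # or date of incorporation, ISO 8601
        place_of_birth: Optional[str] = None
        gender: Optional[str] = None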

Name screening tools primarily rely on rule-based algorithms to identify potential matches. These methods often result in large numbers of false positives: matches where the screening system flags an individual or entity against a watchlist, but further investigation determines that the flagged entity is not a true match.

Name screening should be a simple process, but ultimately, every rules-based screening tool casts the net too wide. First, the system applies an algorithm that matches one name against another, initially considering misspellings, then phonetic matches, and then linguistic variations. A sophisticated screening tool will also apply additional, more complex rules: for instance, only a name match with a corresponding date of birth is considered a true match. But these rules are compromised by poor data, either on the watchlist itself or in a financial institution's own KYC customer records. Regardless, it's a system that requires a substantial manual review component by investigators to identify true matches, which potentially leads to compliance fatigue and operational inefficiencies.
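
The sketch below illustrates that layered logic using only Python's standard library. The Soundex-style phonetic code, similarity threshold, and corroboration rule are simplified assumptions, not a production matching engine.

    import difflib

    def soundex(name: str) -> str:
        """Tiny Soundex-style phonetic code (simplified, illustrative only)."""
        codes = {**dict.fromkeys("bfpv", "1"), **dict.fromkeys("cgjkqsxz", "2"),
                 **dict.fromkeys("dt", "3"), "l": "4",
                 **dict.fromkeys("mn", "5"), "r": "6"}
        name = "".join(c for c in name.lower() if c.isalpha())
        if not name:
            return ""
        out, last = name[0].upper(), codes.get(name[0], "")
        for c in name[1:]:
            code = codes.get(c, "")
            if code and code != last:
                out += code
            last = code
        return (out + "000")[:4]

    def is_potential_match(customer: dict, entry: dict, threshold: float = 0.85) -> bool:
        """Layered rules: string similarity, then phonetics, then DOB corroboration."""
        for listed in [entry["name"]] + entry.get("aliases", []):
            similar = difflib.SequenceMatcher(
                None, customer["name"].lower(), listed.lower()).ratio()
            if similar >= threshold or soundex(customer["name"]) == soundex(listed):
                # Corroboration rule: if both records carry a date of birth,
                # require it to agree before treating this as a potential match.
                if customer.get("dob") and entry.get("dob"):
                    return customer["dob"] == entry["dob"]
                return True  # missing DOB data: escalate for manual review
        return False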

As many have said previously, there must be a better way.

Opportunities for AI and Large Language Models in name screening. 

Given that we now live in “the future,” I see many customers looking to deploy new AI and LLM technologies in their name screening processes, and with good reason: the rewards of getting this right could far outweigh the need for ongoing manual processes. The potential that these technologies offer stands to be truly transformative, especially in a skills-based workforce environment like financial crime compliance, where specialists are trained to evaluate complex risks and nefarious activities flowing through institutions. Settling for mundane workflows and repetitive “clean up” leads to higher overhead costs, attrition rates, and workforce fatigue. The deployment of these systems can tighten an organization’s compliance gaps, ensuring accuracy in identifying true threats, while simultaneously freeing skilled workers to do what they are best at: investigating real risks, not wading through a sea of false positive alerts.

Using context to reduce false positives.

AI and LLMs bring one additional tool that elevates rules-based systems: context. An AI system can dynamically adjust screening thresholds based on the specific context of the transaction or entity. Given the dynamic progression of AI learning and LLM support, an AI system can consider the risk associated with different geographic regions, in real time, and adjust its sensitivity accordingly, giving more weight to a poor-quality match in a high-risk jurisdiction over a near match in a low-risk jurisdiction. This approach is revolutionary in the financial crime compliance space, giving investigators a leg up in combating financial crime, in near real time, while adapting to the evolving threats criminals pose to the global financial system.
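
A simplified illustration of that context sensitivity: the sketch below lowers the match threshold (widening the net) as jurisdiction risk rises. The risk weights and the ten-point adjustment are hypothetical; a real program would derive both from its own jurisdiction risk model.

    # Hypothetical risk weights; a real program would derive these from its
    # own jurisdiction risk model rather than hard-coded constants.
    JURISDICTION_RISK = {"high": 0.9, "medium": 0.5, "low": 0.2}

    def effective_threshold(base: float, jurisdiction_risk: float) -> float:
        """Lower the match cutoff as jurisdiction risk rises."""
        return base - 0.10 * jurisdiction_risk

    # The same 0.80 raw similarity score clears the bar in a high-risk
    # jurisdiction but not in a low-risk one.
    assert 0.80 >= effective_threshold(0.85, JURISDICTION_RISK["high"])  # 0.76
    assert 0.80 < effective_threshold(0.85, JURISDICTION_RISK["low"])    # 0.83

The cutoff here plays the same role as the fixed threshold in the earlier matching sketch; the difference is that it now moves with the risk of the context.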

Streamlining the investigation process.

Traditional machine learning models can leverage historical compliance data and learn which matching scores and patterns correlate with true matches versus false positives. This allows for more accurate predictions and for your screening system to mimic your ‘gold standard’ analyst’s decisions on a case-by-case basis. When you combine this capability with AI’s ability to understand the context and nuance of data within your systems, this model can significantly reduce the burden on analysts without jeopardizing the integrity of your compliance program.
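
A minimal sketch of this idea, assuming scikit-learn and an invented set of historical alert features and analyst dispositions:

    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier

    # Invented training data: each row describes a historical alert as
    # [name similarity, DOB matched (0/1), jurisdiction risk]; each label is
    # the analyst's final disposition (1 = true match, 0 = false positive).
    X = np.array([[0.95, 1, 0.9], [0.88, 0, 0.2], [0.99, 1, 0.5],
                  [0.82, 0, 0.1], [0.91, 1, 0.8], [0.85, 0, 0.3]])
    y = np.array([1, 0, 1, 0, 1, 0])

    model = GradientBoostingClassifier().fit(X, y)

    # Score a new alert: route high-probability hits to investigators and
    # down-rank likely false positives according to your risk appetite.
    print(model.predict_proba(np.array([[0.90, 0, 0.2]]))[0, 1])

In practice, the feature set would be far richer and the training corpus would come from years of documented analyst decisions rather than a toy array.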

‘Augmented’ decisions.

In the compliance world, there is no such thing as ‘Artificial’ intelligence. Regulators will continue to expect – rightly – that humans continue to make the final decisions on the risks your institution faces.

The rise of Explainable AI means that screening systems utilizing AI will have the ability to recommend whether a match is a false positive while also justifying and documenting that decision. This not only helps compliance analysts understand why matches were flagged by the system, but also positions them to review and confirm the system’s findings.
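
One simple way to produce that documented justification is a model whose feature contributions can be read off directly, as in the hypothetical sketch below; dedicated explainability tooling (SHAP, for example) serves the same purpose for more complex models.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    FEATURES = ["name_similarity", "dob_match", "jurisdiction_risk"]

    # Reusing the invented alert data from the previous sketch.
    X = np.array([[0.95, 1, 0.9], [0.88, 0, 0.2], [0.99, 1, 0.5],
                  [0.82, 0, 0.1], [0.91, 1, 0.8], [0.85, 0, 0.3]])
    y = np.array([1, 0, 1, 0, 1, 0])
    model = LogisticRegression().fit(X, y)

    def explain(alert: np.ndarray) -> str:
        """Return a recommendation plus each feature's contribution to it."""
        contributions = model.coef_[0] * alert
        ranked = sorted(zip(FEATURES, contributions), key=lambda kv: -abs(kv[1]))
        verdict = ("likely true match" if model.predict([alert])[0]
                   else "likely false positive")
        drivers = "; ".join(f"{name} contributed {c:+.2f}" for name, c in ranked)
        return f"Recommendation: {verdict}. Drivers: {drivers}."

    print(explain(np.array([0.90, 0, 0.2])))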

The feedback from this ‘human intelligence’ can then be integrated back into the system, allowing it to continuously refine the accuracy of its decisions over time and creating a virtuous circle.
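
A sketch of that feedback loop, assuming an incrementally trainable model; the features and decisions are invented:

    import numpy as np
    from sklearn.linear_model import SGDClassifier

    # An incremental learner lets analyst dispositions be folded back in over time.
    model = SGDClassifier(loss="log_loss", random_state=0)
    model.partial_fit(np.array([[0.95, 1, 0.9], [0.85, 0, 0.2]]),
                      np.array([1, 0]), classes=np.array([0, 1]))

    def record_analyst_decision(features: list[float], is_true_match: bool) -> None:
        """Fold a confirmed human decision back into the model: the virtuous circle."""
        model.partial_fit(np.array([features]), np.array([int(is_true_match)]))

    # Each closed alert becomes a new training example.
    record_analyst_decision([0.90, 0, 0.2], is_true_match=False)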

When to incorporate AI and Large Language Models in name screening.

Integrating AI and LLMs into your name screening processes can be a high-reward scenario, but along with those rewards come exceptionally high risks in the form of fines, reputational damage, and operational restrictions for financial institutions that get this wrong. The key to a successful implementation is patience, due diligence, and disciplined execution. Simply throwing “technology” at the problem will only cause more headaches for both your institution and your regulators. Instead, you must ensure you’re taking the proper steps in documenting the decisions around how you’re structuring your AI and LLMs to address the key issues plaguing your institution. You need to create a bulletproof documentation trail that can easily be reviewed internally by all stakeholders.
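
As one illustration of what that documentation trail might capture, the hypothetical audit record below pairs the model’s recommendation and rationale with the analyst’s final decision:

    from dataclasses import dataclass, asdict
    from datetime import datetime, timezone
    import json

    # Illustrative audit record; the fields are assumptions, but they capture the
    # kind of decision trail stakeholders and regulators will expect to review.
    @dataclass
    class ScreeningAuditRecord:
        alert_id: str
        model_version: str
        inputs: dict            # the features the model saw
        recommendation: str     # e.g. "likely false positive"
        rationale: str          # the model's documented justification
        analyst: str
        final_decision: str     # the human decision, which always prevails
        timestamp: str

    record = ScreeningAuditRecord(
        alert_id="ALERT-001", model_version="screening-model-v3",
        inputs={"name_similarity": 0.90, "dob_match": 0, "jurisdiction_risk": 0.2},
        recommendation="likely false positive",
        rationale="name_similarity contributed +1.20; dob_match contributed +0.00",
        analyst="jdoe", final_decision="false positive",
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
    print(json.dumps(asdict(record), indent=2))  # persist to an immutable audit store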

This process takes time and patience in supporting your models’ ability to learn and draw correct conclusions based primarily on the inputs (data) you feed them. While it has been shown that these models can learn very fast, you still must confirm that what they’re learning is correct, checking in at each step along the way to guarantee a successful output. Only then can you begin to look at ways to scale your budding AI to take on the needs of your organization.

Data quality and availability.

“Garbage (data) in, garbage (data) out” is a universally accepted adage in the technology space. A financial institution that has incomplete or inaccurate KYC data will struggle to realize all the benefits that innovative technologies offer. Given the costs and risks involved, it’s crucial that your new AI and LLM systems are trained on accurate, clean, and high-quality data to maximize their benefits.

It's imperative that your organization embark on a data quality assessment prior to attempting to implement new AI technologies. Simply put, if you don’t understand your data lineage, accuracy, or completeness, you will not be able to rely with any certainty on the outputs of your new system.
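
Such an assessment can start with simple, measurable checks. The sketch below runs completeness, validity, and consistency checks over an invented KYC extract using pandas; the field names and reference values are assumptions:

    import pandas as pd

    kyc = pd.DataFrame({
        "name": ["Jane Doe", "J. Smith", None],
        "dob": ["1980-04-02", None, "1975-13-40"],  # note the impossible date
        "country": ["GB", "gb", "XX"],
    })

    # Completeness: share of populated values per field.
    print((kyc.notna().mean() * 100).round(1))

    # Validity: dates that fail to parse point to accuracy problems.
    parsed = pd.to_datetime(kyc["dob"], format="%Y-%m-%d", errors="coerce")
    print(f"invalid DOBs: {(parsed.isna() & kyc['dob'].notna()).sum()}")

    # Consistency: country codes should come from a controlled vocabulary.
    print(kyc["country"].str.upper().isin(["GB", "US", "FR"]).value_counts())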

Regulatory and compliance considerations.

Does your organization fully understand the regulatory environment in which it’s operating? Compliance teams are familiar with the process of ensuring that new systems comply with transparency, fairness, and data security regulations, but the implications of ‘augmented intelligence’ may be a quantum leap too far for some regulators. It’s important to be transparent with your regulators and work with them to establish expectations around the introduction of, and ongoing dependencies on, models of this kind. Like any tool, these systems are primarily measured on how effective and trustworthy they are, given the expected tasks and output required for a healthy compliance program.

Evaluation and monitoring.

Testing is typically the most important part of any technology project rollout, but its importance is elevated tenfold when you are implementing systems with AI or LLMs, because it is even more important to prove that the systems are operating safely.

Before embarking on a project of this nature, it’s important that you’ve established how you’ll evaluate the performance of the new tools you’re deploying, how those tools will be approved within your organization, and how you’ll monitor them in real time to ensure that they continue to conform to your risk appetite. There’s no “one size fits all” when it comes to successful evaluation, implementation, testing, and monitoring. It is imperative that you not only scope your model specifically to the needs of your organization, but also spend the time necessary to stand up structures for measuring performance and for ongoing tuning as appropriate.
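
As one illustration, scoring the model’s recommendations against analyst ground truth on a labeled holdout set yields the metrics to monitor; the numbers and the recall floor below are hypothetical:

    from sklearn.metrics import precision_score, recall_score

    # Invented holdout: analyst ground truth vs. the model's recommendations.
    y_true = [1, 0, 0, 1, 0, 0, 1, 0]
    y_pred = [1, 0, 1, 0, 0, 0, 1, 0]

    # Recall on true matches is the safety-critical number: a missed sanctions
    # hit costs far more than an extra manual review.
    print(f"precision: {precision_score(y_true, y_pred):.2f}")
    print(f"recall:    {recall_score(y_true, y_pred):.2f}")

    # A simple monitoring guardrail: flag the model if recall drifts below
    # the floor your compliance program has set.
    RECALL_FLOOR = 0.99  # hypothetical risk-appetite threshold
    if recall_score(y_true, y_pred) < RECALL_FLOOR:
        print("ALERT: screening model recall below appetite; trigger a review")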

Wrapping up.

It’s clear that the integration of AI and large language models in a financial institution’s name screening process offers the promise of significant rewards: reduced false positives, dynamic and contextual screening, and enhanced decision support. However, these rewards are only obtainable if financial institutions can mitigate the significant risks associated with introducing new systems into the larger program. With strategic planning, a balanced approach, and the right partners, financial institutions are making significant strides in data quality, regulatory compliance, and model testing. Collectively, a well-trained team and AI can successfully augment human intelligence, enhancing specific workflows like name screening.

Continue the conversation with the author on LinkedIn.