Editorial

Fighting financial crime with AI: how to maximise the benefits

The demands of financial crime prevention (FCP) are ever-changing. Operational teams in financial institutions (FIs) are often inefficient and ineffective in their attempts to meet these evolving demands, creating significant risks. To address these challenges, it is critical to adopt artificial intelligence (AI) to improve efficiency and control.

The state of AI in financial crime prevention

For some time, financial institutions have struggled to balance operational FCP challenges with innovation. Regulatory pressure has pushed them towards solutions that are compliant rather than practical and structural. The result is growing process inefficiency.

However, opportunities for quick wins are emerging through automation, including the extensive potential benefits of AI in anti-money laundering (AML). AI solutions can improve productivity by identifying high-risk situations that require human attention, and by detecting hidden risks across siloed processes.
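
To make the first point concrete, here is a minimal sketch of alert triage in Python. Everything in it is illustrative: the features, the synthetic data and the model choice are assumptions, not a description of any specific product, and a real model would need proper feature engineering, validation and governance.

    import numpy as np
    import pandas as pd
    from sklearn.ensemble import GradientBoostingClassifier

    # Hypothetical history of transaction-monitoring alerts: a few engineered
    # features plus the analyst's final disposition (1 = escalated, 0 = closed).
    rng = np.random.default_rng(42)
    n = 500
    history = pd.DataFrame({
        "amount": rng.lognormal(8, 1, n),
        "country_risk": rng.uniform(0, 1, n),
        "prior_alerts": rng.poisson(1, n),
    })
    # Synthetic labels loosely correlated with the features, for demonstration only.
    history["escalated"] = (
        history["country_risk"] + 0.2 * history["prior_alerts"] + rng.normal(0, 0.3, n) > 1
    ).astype(int)

    features = ["amount", "country_risk", "prior_alerts"]
    model = GradientBoostingClassifier(random_state=42)
    model.fit(history[features], history["escalated"])

    # Score a batch of new alerts and hand analysts a ranked queue, riskiest first.
    new_alerts = history[features].sample(10, random_state=1).copy()
    new_alerts["risk_score"] = model.predict_proba(new_alerts[features])[:, 1]
    print(new_alerts.sort_values("risk_score", ascending=False))

Trained on historical analyst decisions, a model like this does not replace rule-based scenarios; it simply orders the resulting alerts so the cases most likely to need human attention are reviewed first.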

But AI is still a niche pursuit for enthusiasts of the technology. Wider adoption has been slow because embedding AI is complex, and there are several challenges in realising the benefits. The biggest difficulties fall into three groups: skills; data quality and availability; and transparency and understanding.

Skills

For AI adoption, FIs still tend to hire data science and internal ratings-based model specialists who are not always experienced in using AI. But AI skills should no longer be the exclusive domain of number-crunchers and data wizards. A growing number of people can use analytics without mastering complex techniques, and algorithms are increasingly generated automatically, which changes the expertise required. The focus should now be on understanding and interpreting results.

Still, finding people who can use the technology effectively is difficult. FIs also sometimes lack critical knowledge of the underlying processes and control measures, so high-performing project teams must combine technical and FCP expertise to operate effectively. This makes recruiting the necessary resources even harder.

Handling alerts or signals from AI systems also requires a different perspective from operational analysts. Rule-based systems mostly support black-and-white decision-making. Using AI to analyse client behaviour requires a more proactive, risk-based and client-centric approach - and more professional judgement.

Data quality and availability

High-quality data is a critical precondition for implementing AI. But data standards often fall short, and FIs struggle to keep all client information up to date.

Client data is also often duplicated across internal systems or stored in silos. If information was structured around separate product lines, for example, it becomes difficult to analyse the data automatically and holistically.
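
A minimal sketch of what spotting such duplicates can look like, using only the Python standard library. The records, fields and similarity threshold below are illustrative assumptions; real entity resolution would also compare dates of birth, addresses and registry identifiers.

    from difflib import SequenceMatcher
    from itertools import combinations

    # Hypothetical client records pulled from two product silos.
    records = [
        {"id": "RET-001", "name": "Jan de Vries", "source": "retail"},
        {"id": "MTG-884", "name": "J. de Vries", "source": "mortgages"},
        {"id": "RET-002", "name": "Maria Jansen", "source": "retail"},
    ]

    def similarity(a: str, b: str) -> float:
        """Crude name similarity in [0, 1] based on character overlap."""
        return SequenceMatcher(None, a.lower(), b.lower()).ratio()

    # Flag likely duplicates for review; 0.75 is an illustrative threshold.
    for left, right in combinations(records, 2):
        score = similarity(left["name"], right["name"])
        if score >= 0.75:
            print(f"Possible duplicate: {left['id']} / {right['id']} (score {score:.2f})")

Resolving duplicates like these into a single client view is what makes holistic, automated analysis possible in the first place.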

Understanding, interpretation and transparency

The use of AI in FCP is still controversial, especially among regulators. This makes it important to demonstrate that data and analytics models are used ethically. FIs should be aware that:

  • Data can contain bias
  • Automated decision-making should be reconstructable, so auditors can see how a decision was reached (see the sketch after this list)
  • Appropriate manual safeguards and rigorous testing are essential
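
One way to meet the second point is to persist, alongside every automated decision, the model version, the exact inputs and each feature's contribution to the score, so an auditor can later replay why a client was flagged. The sketch below assumes a simple linear risk model; the features, threshold and model version label are hypothetical.

    import json
    from datetime import datetime, timezone

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Hypothetical trained model over three illustrative risk features.
    FEATURES = ["cash_ratio", "country_risk", "txn_velocity"]
    model = LogisticRegression().fit(
        np.array([[0.1, 0.2, 0.1], [0.9, 0.8, 0.7], [0.2, 0.1, 0.3], [0.8, 0.9, 0.9]]),
        np.array([0, 1, 0, 1]),
    )

    def decide_and_log(client_id: str, x: dict, threshold: float = 0.7) -> dict:
        """Score one client and return an audit record that reconstructs the decision."""
        values = np.array([[x[f] for f in FEATURES]])
        score = float(model.predict_proba(values)[0, 1])
        record = {
            "client_id": client_id,
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "model_version": "demo-0.1",  # tie back to a versioned, tested model
            "inputs": x,                  # the exact data the model saw
            # Per-feature contribution of a linear model: coefficient * input value.
            "contributions": {f: float(c * x[f]) for f, c in zip(FEATURES, model.coef_[0])},
            "score": score,
            "decision": "review" if score >= threshold else "clear",
        }
        print(json.dumps(record, indent=2))  # in practice, write to an immutable audit store
        return record

    decide_and_log("C-1042", {"cash_ratio": 0.85, "country_risk": 0.9, "txn_velocity": 0.6})

Records like this, combined with bias checks on the training data and manual review of edge cases, give auditors and regulators something concrete to examine.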

In one recent example, a German bank blocked hundreds of customers’ accounts after tightening its automatic controls. The ensuing reputational backlash highlights the risks of not addressing the points above adequately. It also underlines the importance of aligning with internal stakeholders, such as compliance and audit, who must understand and support the goals of using AI and reflect this in their policies.

How to address these challenges

Data

The first step is recognising data as a general problem as well as an overarching opportunity. For instance, commercial initiatives can also benefit from client-centric data structures. Institutions should keep information up to date by connecting to official public sources, such as Chambers of Commerce, and by regularly asking clients to validate their data. Underlying data issues should not be addressed from an FCP perspective only, but with a much broader scope.
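
As an illustration of that refresh loop, the sketch below compares stored client records against a business registry and flags mismatches for client outreach. The endpoint, response format and field names are entirely hypothetical stand-ins for whichever official source, such as a Chamber of Commerce API, an institution actually has access to.

    import requests

    # Hypothetical endpoint standing in for an official business registry API.
    REGISTRY_URL = "https://registry.example.com/companies/{registration_number}"

    def refresh_client(client: dict) -> dict:
        """Compare a stored client record with the registry and flag mismatches."""
        response = requests.get(
            REGISTRY_URL.format(registration_number=client["registration_number"]),
            timeout=10,
        )
        response.raise_for_status()
        official = response.json()

        mismatches = {
            field: {"stored": client.get(field), "registry": official.get(field)}
            for field in ("legal_name", "registered_address", "status")
            if client.get(field) != official.get(field)
        }
        if mismatches:
            # In practice: queue the client for outreach so they can validate their data.
            print(f"Client {client['client_id']} needs validation: {mismatches}")
        return mismatches

    refresh_client({
        "client_id": "C-2001",
        "registration_number": "12345678",
        "legal_name": "Example Trading BV",
        "registered_address": "Keizersgracht 1, Amsterdam",
        "status": "active",
    })

Scheduled runs of this kind of check, combined with periodic client outreach, keep the client view current for commercial as well as FCP purposes.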

Technology

Think carefully about the decision to make or buy AI technology. Not all institutions are in a position to assemble specialised teams, but effective, dedicated third-party providers are available.

FIs that trust their in-house capabilities must strike the right balance between continuous experimentation and regularly bringing relevant AI use cases into production. Showcase examples include:

  • Using AI in transaction monitoring (TM) to triage or prioritise alerts from rule-based scenarios.
  • Anomaly detection - generating alerts for a specific risk that existing rules cannot easily detect. AI provides more context to help draw the right conclusions.
  • Increasing name-matching efficiency in sanctions screening (see the sketch after this list).
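
On the last point, here is a minimal sketch of fuzzy name matching using the open-source rapidfuzz library (an assumption; any string-similarity library would do). The watchlist and score threshold are illustrative only.

    from rapidfuzz import fuzz, process

    # Illustrative watchlist entries; real sanctions lists come from official sources.
    watchlist = ["Ivan Petrov", "Acme Trading LLC", "Global Shipping Co"]

    def screen(name: str, threshold: float = 85) -> list:
        """Return watchlist entries whose similarity to the given name meets the threshold."""
        candidates = process.extract(name, watchlist, scorer=fuzz.token_sort_ratio, limit=3)
        return [(entry, round(score, 1)) for entry, score, _ in candidates if score >= threshold]

    # Token-sort scoring tolerates reordered tokens and small misspellings.
    print(screen("Petrov, Ivan"))  # should match "Ivan Petrov"
    print(screen("Ivan Petrof"))   # should match despite the typo
    print(screen("Jane Doe"))      # should return an empty list

Fuzzy matching of this kind can reduce both missed hits from exact matching and the flood of false positives from overly loose rules.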


The time is now

As we’ve highlighted, AI is not easy to adopt. But being aware of the challenges and developing a solid strategy opens up a wealth of possibilities. The time is ripe to start reaping the benefits.

You may be midway through your digital transformation and need help prioritising projects and getting back on track. Or you might be starting off and need to create your roadmap. Either way, Delta Capita can help you achieve your goals. Get in touch today.