James Proudman, Executive Director for UK Deposit Takers Supervision at the Bank of England, spoke on 4th June at the FCA Conference on Governance in Banking about the implications of artificial intelligence (AI) and machine learning (ML) for governance in banking.

Proudman told the audience that the governance of AI adoption is “a topic of increased concern to prudential regulators” since “governance failings are the root cause of almost all prudential failures” and that managing the associated risks is an increasingly important strategic issue for boards of financial services firms.

While Proudman made sure to highlight the potential benefits of AI applications in areas such as securities trading, anti-money laundering (AML), fraud detection and credit risk assessment, he stressed that, as a prudential regulator, the Bank of England needs to understand “how the application of AI and ML within financial services is evolving”, the implications for risks to firms’ “safety and soundness”, and, in turn, how those risks can be mitigated through banks’ internal governance, systems, and controls.

Referring to a survey of AI/ML adoption in finance currently being conducted by the Bank of England and the FCA, he noted broad agreement that AI and ML can reduce risks, but also that “some firms acknowledged that, incorrectly used, AI and ML techniques could give rise to new, complex risk types”.

Proudman suggested that the retrieval, processing, and use of data may pose a significant challenge, pointing to three potential causes of data-related risk:

  • the growing scale of data-quality problems as the availability and number of data sources balloon,
  • ethical, legal, conduct, and reputational issues associated with the use of personal data, and
  • distortions resulting from biases in historical data and assumptions built into ML algorithms.

His insistence on “the need to understand carefully the assumptions built into underlying algorithms” and “the need for a strong focus on understanding and explaining the outcomes generated by AI/ML” sends a clear signal to firms to incorporate ML explainability tools into their model development and validation workflows. Applied directly to an ML model, such tools allow modellers and testers to understand both why any individual decision was taken and how the model’s inputs interact to shape its behaviour as a whole. ML explainability tools can also be applied after AI/ML is approved for use. As Proudman notes, governance has a role to play during the deployment and evaluation stages, as well as in correcting erroneous machine behaviour. To ensure proper oversight, a ‘human in the loop’ can use explainability tools to support a decision for or against shutting down an algorithm, for example.
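As a rough illustration of what this looks like in practice, the sketch below applies the open-source SHAP library to a small, entirely hypothetical credit model. The features, data and model choice are assumptions made for illustration, not a prescribed toolset.

```python
# A minimal sketch, not a prescribed toolset: explaining a hypothetical
# credit model with the open-source SHAP library. Feature names, data
# and the model choice are illustrative assumptions.
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier

# Hypothetical applicant features and past repayment outcomes.
X = pd.DataFrame({
    "income": [35_000, 82_000, 47_000, 61_000],
    "loan_amount": [150_000, 300_000, 220_000, 180_000],
    "credit_history_years": [2, 14, 6, 9],
})
y = [0, 1, 0, 1]  # 1 = repaid, 0 = defaulted

model = GradientBoostingClassifier().fit(X, y)

explainer = shap.Explainer(model, X)
shap_values = explainer(X)

# Global view: how the inputs interact to drive the model overall.
shap.plots.beeswarm(shap_values)

# Local view: why one individual application was scored the way it was.
shap.plots.waterfall(shap_values[0])
```

The global plot shows which inputs drive the model across the portfolio, while the local plot explains a single decision in terms a validator or reviewer can follow.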

Proudman further proposed that regulations designed to deal with human shortcomings, such as “poorly aligned incentives, responsibilities and remuneration” or “short-termism”, remain just as relevant in an AI/ML-centric work environment, and that it will be crucial to ensure clear individual accountability for machine-driven actions and decisions.

The implication that individual employees, including senior management, may be held responsible for actions or decisions taken by a machine reinforces the case for facilitating human-friendly model explainability. Boards should think about how the right tools best enable their workforce to comprehend the reasons for, say, a rejected mortgage application, and whether the model that made that decision did so because of built-in human biases. Since the person responsible will not necessarily be proficient in the language of AI/ML, it is crucial that these tools facilitate human-friendly interpretations and, in turn, informed decision-making.
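One simple illustration of the kind of evidence such tools can surface is a comparison of decision rates across applicant groups. The sketch below is a minimal, assumed example; the data, column names and the 0.8 (“four-fifths”) threshold are illustrative only and do not amount to a complete fairness assessment.

```python
# A minimal sketch, assuming a hypothetical log of model decisions.
# Column names, data and the 0.8 ("four-fifths") threshold are
# illustrative; this is not a complete fairness assessment.
import pandas as pd

decisions = pd.DataFrame({
    "applicant_group": ["A", "A", "B", "B", "B", "A", "B", "A"],
    "approved":        [1,   1,   0,   1,   0,   1,   0,   0],
})

# Approval rate per group, and the ratio of the lowest to the highest.
approval_rates = decisions.groupby("applicant_group")["approved"].mean()
ratio = approval_rates.min() / approval_rates.max()

print(approval_rates)
if ratio < 0.8:
    print(f"Approval-rate ratio {ratio:.2f}: possible bias, review the model.")
```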

Proudman also warned of increased execution risks arising from the accelerating rate of AI/ML adoption and proposed that boards ensure their firms possess the skill sets and controls to deal with those risks.

Boards should heed Proudman’s call to align their governance structures with the challenges of AI/ML. Beyond the obvious benefits to the business, knowing what the models are doing and being able to explain how they work may prove invaluable in anticipating new transparency and interpretability requirements for ML models.

Other related issues, such as data privacy, also have implications for corporate governance, and some can be addressed using AI/ML tools. For example, sending human voice data to the cloud through voice-activated mobile applications may expose users to the risk of illegitimate data use and can undermine trust in a firm’s data practices. To avoid this, model compression tools can be applied to reduce the size of speech recognition models so that voice data can be processed locally and never leaves the device.
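As a rough sketch of the idea, the example below applies PyTorch’s dynamic quantization to a toy stand-in for a speech recognition model; the network, layer sizes and file handling are assumptions for illustration only.

```python
# A minimal sketch, assuming a toy stand-in for a speech recognition
# model: PyTorch dynamic quantization converts the Linear layers'
# weights to 8-bit integers, shrinking the model for on-device use.
import os
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(80, 256),   # e.g. acoustic features in
    nn.ReLU(),
    nn.Linear(256, 29),   # e.g. character logits out
)

quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

def size_mb(m: nn.Module) -> float:
    """Serialise the model and report its size on disk in megabytes."""
    torch.save(m.state_dict(), "tmp.pt")
    size = os.path.getsize("tmp.pt") / 1e6
    os.remove("tmp.pt")
    return size

print(f"original:  {size_mb(model):.2f} MB")
print(f"quantized: {size_mb(quantized):.2f} MB")
```

A real speech model would be far larger, which is exactly where the size reduction matters: a compressed model small enough to ship with the app keeps the raw audio on the handset.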

______

Alexander Klemm

Consultant

Delta Capita