DC/MINT

Artificial Intelligence and its subsets are certainly revolutionising how financial institutions work, creating new opportunities to improve many aspects of the financial services value chain. Less discussed, however, is the darker side: the inherent risks and vulnerabilities that artificial intelligence can introduce into the system in the pursuit of speed, experience and cost reduction.

The fact that a model has made a correct prediction is not sufficient. Understanding how the model came to that decision, and being able to drill down into that logic, is critical to ensuring regulatory compliance, ethical robustness and reputational risk management.

DC/MINT helps address machine learning interpretability challenges. It is a product that helps data scientists, validation teams, risk, compliance and other end-users across the following key areas:

  • Data bias identification
  • Model validation
  • Model confidence
  • Regulatory compliance
  • Competitive advantage
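DC/MINT's internals are not public, so purely as an illustration of the kind of drill-down described above, here is a minimal permutation-importance sketch: each feature is scored by how much model accuracy drops when that feature's column is shuffled, exposing which inputs actually drive the decision. All names and the toy model are hypothetical, not part of the product.

```python
import numpy as np

def permutation_importance(predict, X, y, n_repeats=5, seed=0):
    """Score each feature by the accuracy drop when its column is shuffled."""
    rng = np.random.default_rng(seed)
    baseline = np.mean(predict(X) == y)          # accuracy on intact data
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])                # break the feature/target link
            drops.append(baseline - np.mean(predict(Xp) == y))
        importances[j] = np.mean(drops)
    return importances

# Toy model: predicts the class from the sign of feature 0 only.
predict = lambda X: (X[:, 0] > 0).astype(int)
X = np.random.default_rng(1).normal(size=(200, 3))
y = (X[:, 0] > 0).astype(int)

imp = permutation_importance(predict, X, y)
# Feature 0 dominates; shuffling features 1 and 2 changes nothing.
```

A large importance score here is evidence that a decision depends on that input, which is the sort of question a validation or compliance team needs answered before signing off on a model.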

DC/Compression

Machine learning techniques and architectures have led to models with significantly improved accuracy rates over their predecessors. However, these models rely on millions to billions of internal parameters, trained over very long periods of time. As models grow more complex, the desirable qualities of a deployable deep learning model become:

  • Low storage requirements
  • Computational efficiency
  • Fast inference

These qualities become critical factors when considering real-time applications, such as high-frequency trading algorithms where speed is important, or deployment on mobile devices where customer experience is key. Your algorithm can be right 100% of the time, but if the result arrives after the action should have been taken, or the customer experience falls below standard, the model has failed.

DC/Compression allows every major family of compression technique, parameter pruning and sharing, low-rank factorisation and knowledge distillation, to be used separately or orthogonally.
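To make one of those families concrete, here is a minimal magnitude-pruning sketch, the simplest form of parameter pruning: weights below a magnitude threshold are zeroed, leaving a sparse model that needs far less storage. This is a generic illustration, not DC/Compression's implementation, and the function names are hypothetical.

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.9):
    """Zero out the smallest-magnitude weights, keeping the top (1 - sparsity)."""
    threshold = np.quantile(np.abs(weights), sparsity)  # cut-off magnitude
    mask = np.abs(weights) >= threshold                 # True = weight survives
    return weights * mask, mask

rng = np.random.default_rng(0)
W = rng.normal(size=(256, 256))                         # a toy weight matrix
W_pruned, mask = magnitude_prune(W, sparsity=0.9)
# Roughly 10% of the parameters survive; the rest are exactly zero.
```

In practice a pruned model is usually fine-tuned afterwards to recover accuracy, and the surviving sparsity pattern is what delivers the storage and inference savings.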

In addition, our team of data scientists are available to advise on suitable techniques or ensemble techniques. As a result, our clients can benchmark their models, reduce their complexity and speed them up without a significant impact on the accuracy of the model. 

Used in collaboration with DC/MINT for AI explainability, checks can then be made post-compression to prove the compressed model is still the ‘same’ model, and the differences in output can be validated to determine whether the cost, speed and accuracy trade-off is acceptable to the business.
