The past quarter has seen a surge in artificial intelligence (AI) advancements that surpass the cumulative progress of the preceding two decades.
Niamh is a technology leader with experience managing complex transformation projects and an academic background in computational neuroscience and neuroeconomics.
Generative AI (GenAI), a subset of AI that creates new content based on its training data, is experiencing an unprecedented rate of evolution. The rapidity of release cycles, the growing volume of new start-ups, and the technology's swift integration with existing applications are contributing to an expanding array of use cases across various industries.
This dynamic landscape is not only reshaping our understanding of AI capabilities but also redefining the trajectory of future technological innovation. However, it is also democratising power and influence by lowering entry barriers, as evidenced by the escalating prevalence of misinformation and deepfakes.
As a result, there is a growing global urgency to establish robust and comprehensive AI safety regulations. Last week’s AI Safety Summit marked an important milestone in the quest for a unified international perspective on AI. Now, the field is gearing up to navigate the legislative landscape shaped by the forthcoming EU Artificial Intelligence Act (EU AI Act).
In this briefing we will review (1) the current paradigm of AI, (2) the AI Safety Summit hosted by the UK, (3) the EU AI Act, (4) GenAI in financial services, and (5) how to prepare for forthcoming regulatory criteria.
The Current Paradigm of AI
The global paradigm for AI is defined by equal parts (1) optimism for social and economic outcomes, (2) urgency to establish enduring ethical guardrails, and (3) a cautious outlook on emerging challenges at the frontier of this technology.
Developments in AI and GenAI have been rapid. OpenAI has played a pivotal role in this landscape by democratising access to commercialised large language models (LLMs) through simple APIs and interfaces such as ChatGPT. This has opened a world of possibilities for developers and businesses alike, enabling them to leverage these powerful models across a myriad of applications.
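As a concrete illustration of what this democratised access looks like in practice, the minimal sketch below calls a hosted LLM through one such API. The model name, prompt, and reliance on an OPENAI_API_KEY environment variable are illustrative assumptions rather than recommendations.

```python
# Minimal sketch: calling a commercialised LLM through a simple API.
# Assumes the openai Python package is installed and an API key is available
# in the OPENAI_API_KEY environment variable; model name and prompt are illustrative.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model; substitute whichever model you licence
    messages=[
        {"role": "system", "content": "You are a concise assistant for a financial services team."},
        {"role": "user", "content": "List three operational risks of deploying generative AI in compliance workflows."},
    ],
)

print(response.choices[0].message.content)
```

A handful of lines like these are all that stand between an off-the-shelf model and a production experiment, which is precisely why governance and regulatory readiness have become so pressing.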
Simultaneously, tech giants like Google and Amazon have also made significant strides in this domain, investing heavily in the development and deployment of these models. The collective effort has not only accelerated the progress of AI but also broadened its reach, making it an integral part of our digital ecosystem.
However, as we navigate this exciting era of AI, it’s crucial to balance innovation with responsibility. The need to ensure that AI is used in a way that is safe, fair, and beneficial for all is more pressing than ever.
The AI Safety Summit hosted by the UK
Over the past few months, the international landscape has been marked by a flurry of nebulous and fragmented declarations from nations and global institutions. These proclamations, seemingly an attempt by their authors to position themselves as 'leaders' rather than 'followers', reflect the escalating global endeavour to ensure a future underpinned by safe AI.
For example, having previously held a lenient stance, the White House surprised the AI community last week by issuing an Executive Order that sets out a broad set of AI rules and guidelines, nudging domestic policy towards mandating innovation in content labelling and watermarking. It also requires that companies developing new AI models above a certain scale notify the federal government and disclose test results.
Similarly, the Cyberspace Administration of China released draft regulation in April with a view to ensuring that GenAI services align not only with legal and ethical values but also with socialist principles.
Both nations sent representatives to the AI Safety Summit hosted at Bletchley Park, alongside attendees from governments, tech organisations, and international institutions.
The key outcomes were (1) agreement that an international expert body, modelled on the Intergovernmental Panel on Climate Change, will be put in place, and (2) the welcome announcement that leading technology companies (namely Meta, Google DeepMind, and OpenAI) will voluntarily offer their AI products to regulators prior to public release.
The goal is to temper the development of systems that pose a threat to humanity, while simultaneously fostering beneficial innovation. In this regard, the Summit signifies a pivotal moment in the quest for an international consensus on safe AI.
However, a unanimous agreement on the precise nature of required regulation, if any, remains elusive. Despite this, it is expected that in 2024 the practical requirements for developers and purchasers of AI products in the EU will be formalised with the enactment of the EU AI Act.
The EU AI Act
In June 2023, the European Parliament voted to adopt its negotiating position on the Artificial Intelligence Act, and EU lawmakers are now working to adapt the framework in response to the latest developments in generative AI, with a view to finalising the legislation by the end of this year. This will:
(1) Enshrine a technology-neutral definition of AI systems and
(2) Adopt a risk-based approach with specific requirements and obligations for market engagement.
The act itself will take a risk-based approach to AI systems produced or consumed within the EU market. Though it is likely to be reviewed in response to discussions at the AI Safety Summit, the current understanding of the framework is as follows:
Table 1: Summary of the risk framework for AI systems, based on the second edition of the EU Legislation in Progress briefing, Artificial Intelligence Act (europa.eu).
It is anticipated that this legislation will be enacted early in 2024. The Republic of Korea has agreed to host a virtual AI summit within the next six months, and France will host an in-person summit towards the end of 2024 to ensure momentum is maintained.
GenAI in Financial Services
The GenAI market landscape is saturated across the Consumer, Enterprise, and Prosumer segments. There is no question that there is a wealth of use cases and strong customer demand, as evidenced by impressive uptake rates for new products. However, for GenAI to sustain its growth and improve retention rates, it must demonstrate long-term value and the potential for consistent innovation.
In the financial sector, the application of AI and GenAI is far from novel, with implementations spanning a wide breadth of mature use cases. The primary value stems from how these technologies augment and accelerate existing teams and processes, rather than replacing functions outright. Without a doubt, the assortment of tools and the scale of deployment we can anticipate in the coming months and years will continue to expand, further solidifying the role of AI in this industry.
Preparing for Regulatory Criteria
At present, the market leans towards a principles-based approach to AI that provides guidance whilst fostering ongoing innovation. Future regulations—including the EU AI Act—are unlikely to deviate significantly from this trajectory, making it prudent to act now to ensure readiness for upcoming criteria.
A recent report by the Association for Financial Markets in Europe (AFME) suggests that compliance officers face two primary challenges in adopting AI in financial markets. The first is gaining a comprehensive understanding of AI usage to effectively scrutinise its implementation; the second is exploring how AI can be deployed within the Compliance function to enhance outcomes.
Investing in expert advisors and designing future-state operating models with sustainable, compliant AI systems in mind will pave the way for success as the paradigm continues to evolve.
How Delta Capita can help
As an authority in the field, Delta Capita is uniquely positioned to guide and collaborate with you in realising your artificial intelligence (AI) and generative AI (GenAI) objectives. Our team has a wealth of expertise in data and technology, complemented by an innovative ecosystem of proprietary products.
Whether you are looking to integrate AI into your existing operations, explore new opportunities with GenAI, or navigate the complexities of AI in compliance, we’re here to provide the support and solutions you need.
Feel empowered to harness the power of AI to drive your business forward. Contact us today to find out how we can help.