By Craig Adams, Managing Director, EMEA, Protecht
With ChatGPT taking the world by storm and Nvidia announcing that artificial intelligence (AI) has already helped 36 percent of financial-services execs reduce costs by 10 percent or more, the rise of generative AI (GenAI) is transforming the ways financial companies are managing their key business functions.
However, current iterations of AI technology, such as ChatGPT, are far from perfect, with numerous limitations that must not be overlooked or ignored. This article will examine the business cases for and against using AI for risk management and compliance in the financial industry and why organisations should take action but proceed cautiously at this delicate stage in this ground-breaking technology’s lifecycle.
The dawning of the AI era
The launch of ChatGPT in November 2022 seemingly marked the start of a new technological era. OpenAI’s revolutionary chatbot reached one million users in just five days and surpassed 100 million monthly users in two months—a staggering adoption rate. It wasn’t long before organisations across all sectors were exploring opportunities to replace workers with AI, while newspapers around the world were quickly filled with stories about AI doing a better job than humans at all kinds of tasks and activities.
Of course, not everyone has welcomed the arrival of AI with open arms. Numerous countries have moved swiftly to ban it, while major question marks still linger around concerns such as data privacy and job security. However, with the dam now well and truly burst, not only is AI here to stay, but it looks set to revolutionise how many organisations function at fundamental levels.
Reshaping risk and compliance management
Effective risk and compliance management is a core pillar of the modern financial industry, and getting it wrong can prove extremely costly. This was perfectly illustrated recently when the United Kingdom’s Prudential Regulation Authority (PRA) fined Credit Suisse International and Credit Suisse Securities £87 million for significant failures in risk management and governance between January 1, 2020, and March 31, 2021, in connection with the firms’ exposures to Archegos Capital Management. It was the PRA’s highest fine and the only time a PRA enforcement investigation established breaches of four PRA Fundamental Rules simultaneously.
One of the things that makes managing financial risk and compliance so difficult is how time-consuming it can be. As organisations grow, they become more complex, forming new global business partnerships and processing increasingly large amounts of data, which brings more risk and compliance laws into scope.
This is where AI presents a huge opportunity, thanks to its ability to automate mundane, everyday tasks, offer rapid assessments and help financial institutions better understand and manage the risks they face. Whether identifying gaps in policies and control frameworks or analysing thousands of pages of regulations across multiple jurisdictions in seconds, the potential is truly enormous.
Not a risk-free endeavour
That’s not to say AI doesn’t create a few risks of its own. First and foremost, risk-management and compliance functions are in the very earliest stages of AI integration, which means the current lack of understanding will almost certainly lead to teething problems and mistakes. Indeed, many risk professionals are working round the clock to understand how best to integrate AI into long-established, well-run programmes and processes retrospectively.
Furthermore, AI is far from flawless in its current iteration. For all the positive headlines generated, chatbots such as ChatGPT have also garnered numerous negative ones, particularly relating to high-profile gaffes, biased content and limited knowledge of the world beyond 2021 (at least for now).
Therefore, to make the most of AI’s vast potential without falling foul of its current limitations, financial institutions need to look closely at both the opportunities and challenges it presents, putting risk and compliance managers at the forefront of its exploration to find the right path forward to successful implementation. In fact, understanding the technology, its applications and the risks it poses should all be considered fundamental requirements before partial or full-scale deployment is even considered.
Better than humans?
As mentioned earlier, one of the biggest opportunities AI presents to financial institutions is its ability to automate and speed up important but time-consuming tasks with which humans often struggle because of their mundane nature. For instance, it can assess thousands of pages of complex global regulations before accurately recommending exactly where specific regulations apply. This capability can significantly reduce the workload of a bank’s risk and compliance team, enabling team members to spend much more time on strategically important activities while improving overall business security.
However, it’s important to note that AI-powered systems are only ever as good as the data with which they have to work. All AI systems learn by processing information made available to them; if this information happens to be inaccurate or biased in any way, it will quickly skew the reasoning of the AI system, leading to poor outcomes. In a risk-management context, such flaws can have serious consequences. For example, an AI relying on flawed data may fail to identify critical risks or to flag relevant regulatory obligations.
Another crucial concern is the potential replacement of human workers and the impacts on the wider employment market. While it’s clear that AI will increasingly be used to automate a range of functions currently carried out by human staff members, replacing people entirely isn’t without its drawbacks. Most obviously, there is inherent and irreplaceable value in human insight, judgement and decision-making, especially in areas as critical as risk management, where experience plays a massive role across the board.
Finding the right balance is key
With all these considerations in mind, how can organisations strike the right balance that allows them to reap the benefits of AI while protecting against its inherent risks? Ideally, a carefully structured approach should be taken whereby compliance and risk managers are empowered and supported to examine the implementation of processes and procedures for AI deployment, with full transparency and visibility across the organisation.
Now is the time to take action to understand the risks and opportunities and start building business cases for transformational change. Test-and-learn initiatives should be considered and deployed so that learnings are quickly gained, assessed and actioned, with the appropriate balance struck between human capital and operational efficiency.
In the longer term, it is vital to establish effective controls over the remit given to AI and its performance levels. These should include committing to manual oversight, ongoing ad hoc testing and implementing other relevant mechanisms to ensure AI operates within the organisation’s risk appetite and compliance framework. In this context, a hybrid approach, whereby AI and humans work in tandem, will most likely provide the best results.
It’s important to remember that we are at only the beginning of a very exciting journey with AI. If ongoing hurdles can be overcome, there’s no reason why AI can’t hugely impact the capabilities of risk-management and compliance functions throughout the financial industry. In particular, as regulatory environments worldwide continue to change rapidly, AI’s ability to adapt and provide insights into emerging risk requirements at speeds far beyond human capabilities will likely prove invaluable.
On the other hand, question marks continue to hang over AI and its uses, and these will likely linger for the foreseeable future. A growing chorus of voices is already calling for AI to be better controlled and/or regulated until more is known about it. How this conversation unfolds will significantly affect uptake in the months and years to come.
This aside, the AI gold rush is in full flow, and those capable of quickly implementing a well-considered and structured approach to AI are likely to benefit most from the improved risk-management capabilities it offers.