AI AI AI: Quo Vadis? Navigating Innovation and Regulation in an Uncertain Future
The transformative power of AI in the financial services industry is undisputed. Whether we like it or not, AI is here to stay, and the developments are rapid. Every self-respecting company offering financial services is, in one way or another, making use of AI. Embracing AI has become crucial for businesses to thrive. This article will provide you with a quick understanding of what AI is, how it is currently used in the financial sector, the risks associated with AI, and the policy initiatives, rules and regulations governing AI systems.
What is AI?
There are many definitions of AI to be found, and much confusion among policymakers and the public surrounding it. However, I have found scientific articles much more useful in understanding what AI is really trying to do, what is needed, and where we currently stand. To understand AI better, one must distinguish between the methods used and the goals of the outcomes these methods produce. Essentially, what AI tries to do is mimic human intelligence through a system or machine. In this sense, robots capture much of the imagination. However, to reach a point where a system can effectively mimic human intelligence, it needs methods that replicate cognitive, perceptual, and decision-making abilities.
Currently, machine learning, natural language processing, machine vision, automation and robotics, and neural networks are, broadly, the methods used to achieve this outcome. They overlap significantly: neural networks, for instance, are themselves a subset of machine learning, which adds to the complexity (see, e.g., N. H. Patil et al., Research Paper on Artificial Intelligence and Its Applications).
Machine learning is a subfield of AI that allows a computer to learn automatically, that is, without being explicitly programmed. Deep learning is a part of machine learning. According to an MIT Sloan professor, most of AI is done through machine learning, which makes it the most commonly encountered form of AI in real-world applications. Machine learning uses a variety of mathematical models and algorithms to perform this automatic training.
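To make the "learning without explicit programming" point concrete, here is a minimal, hypothetical sketch in Python (assuming the widely used scikit-learn library; the features, data, and task are invented purely for illustration). Rather than hand-coding decision rules, the model infers them from labelled examples:

```python
# A minimal supervised-learning sketch (illustrative only).
# Requires: pip install numpy scikit-learn
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(seed=42)

# Invented toy data: two numeric features per applicant
# (say, income and debt ratio) and a binary repayment label.
X = rng.normal(size=(1000, 2))
y = (X[:, 0] - X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# No hand-written decision rules: the model infers its
# parameters from the labelled examples alone.
model = LogisticRegression().fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```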
What is Generative AI (Gen AI)? According to this site, the difference between Gen AI and ML is that Generative AI refers to systems that produce images, text, videos, sounds, and other outputs based on patterns learned from existing data, while machine learning focuses on helping computers adapt and improve by analysing data and making predictions, judgements, or decisions based on the results. Gen AI is said to use the method of neural networks and can hence also be considered a subset of machine learning. Large Language Models (LLMs) are a particular type of Gen AI.
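As a toy illustration of the "learn patterns, then generate" idea behind Gen AI, the sketch below learns character-transition frequencies from a tiny invented corpus and samples new text from them. Real generative models use deep neural networks trained on vast datasets; this is only meant to show the principle of producing new output from patterns in existing data:

```python
# Toy "generative" sketch: learn which character follows which,
# then sample new text from those learned patterns (illustration only;
# real Gen AI relies on large neural networks, not character counts).
import random
from collections import defaultdict

corpus = "the bank assesses the credit risk and the market risk"

# "Training": record every observed character transition.
transitions = defaultdict(list)
for current, following in zip(corpus, corpus[1:]):
    transitions[current].append(following)

# "Generation": repeatedly sample the next character from the
# distribution observed after the current one.
random.seed(0)
char = "t"
output = [char]
for _ in range(40):
    char = random.choice(transitions[char])
    output.append(char)

print("".join(output))
```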
But what about a definition for regulatory purposes? According to the December 2024 BIS report, there is currently no globally accepted definition of AI. BIS uses the OECD definition: “An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.”
Risks of AI
The use of AI carries substantial risks that can harm humans and society. At its core lies the current sentiment that AI is a solution to every problem, while it is merely a tool, a mindset known as techno-solutionism (read more here). The temptation to apply AI to all kinds of societal and business problems can itself create larger problems: relatively small problems attract overcomplicated solutions, leaving us to deal with rigid systems that lack human proportionality.
According to the Center for AI Safety, AI risks can be grouped into four key categories:
Malicious use: examples include using AI systems to spread disinformation on the internet, from deepfake videos to online bots that manipulate public discourse by feigning consensus and spreading fake news. Disinformation poses serious threats to society, as it effectively changes and manipulates evidence to create social feedback loops that undermine any sense of objective truth. Other examples include using AI to facilitate sophisticated malicious cyber activity, financial fraud, and scams.
AI race: an environment of unchecked AI development, fueled by competition, risks uncontrollable AI, mass job displacement, and escalating global conflicts through autonomous weapons and cyberwarfare, potentially leading to societal instability and even existential threats.
Organisational risks: organisations developing advanced AI can face significant risks of causing catastrophic accidents due to a focus on profit over safety, accidental leaks, and insufficient safety research.
Rogue AIs: although it still feels like science fiction, the fear of evil AI is not unfounded. Advanced AIs may misinterpret objectives, deviate from goals, exhibit power-seeking behavior, resist shutdown, and even deceive humans, posing significant risks to human control.
In addition, AI can trick the human brain into thinking that outcomes based on data are entirely objective. This presents dangers of bias, in particular the risk of assuming that AI is neutral and impartial. In reality, AI decision-making may often be influenced by biased historical decisions or even blatant discrimination, undermining the fundamental rights of citizens. We should also fear the use of AI by governments or companies to control civilians. This 'Big Brother is watching you' scenario, with invasive surveillance and privacy infringements lacking fundamental protections, can spiral us down into a Kafkaesque nightmare.
These risks are very real: the race to be a first adopter and not fall behind is leading to a scenario where governance mechanisms and institutions are unable to keep up with rapid AI developments, and where AI systems lack sufficient explainability and interpretability. How realistic is it that we can stay in control of AI development?
AI Applications in the Financial Sector
The financial sector is in full swing adopting AI into its business. It has high hopes for AI: as reported by BIS (utilising Statista), the financial sector's spending on AI is estimated at USD 35 billion in 2023, rising to USD 97 billion by 2027. On a broader scale, AI is currently seen as a major opportunity for driving GDP growth (see, e.g., this BCG article).
AI can potentially be used across all business activities. Currently, the financial sector uses AI to improve productivity and efficiency, to support regulatory compliance and risk management, and to enhance core business and revenue-generating activities.
Key examples are:
Risk management, such as assessing credit applications, insurance claims, underwriting, and valuation.
Customer support queries / customer service (e.g. chatbots).
Market-making and algorithmic trading.
Personal financial management, Robo-advisors.
Financial analysis, identification of market trends to serve client needs.
Summarising regulations or preparing regulatory submissions, e.g. to support prudential objectives such as calculating and reporting capital requirements.
Identifying patterns that reveal suspicious activities for fraud detection, complying with, e.g., AML/CFT rules (a minimal illustrative sketch follows this list).
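To illustrate the pattern-identification idea in the last bullet, here is a minimal, hypothetical sketch in Python (again assuming scikit-learn; the features and figures are invented). An anomaly detector learns what "normal" transactions look like and flags deviations; real AML and fraud systems are of course far more sophisticated:

```python
# Toy anomaly-detection sketch for transaction screening (illustrative only).
# Requires: pip install numpy scikit-learn
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=1)

# Invented features: transaction amount and hour of day.
normal = np.column_stack([
    rng.normal(100, 20, size=500),  # typical amounts
    rng.normal(14, 3, size=500),    # mostly daytime activity
])
suspicious = np.array([[5000.0, 3.0], [7500.0, 4.0]])  # large night-time transfers
transactions = np.vstack([normal, suspicious])

# The forest isolates points that look unlike the bulk of the data.
detector = IsolationForest(contamination=0.01, random_state=0)
labels = detector.fit_predict(transactions)  # -1 marks anomalies

print("flagged transaction rows:", np.where(labels == -1)[0])
```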
International Guidelines
AI rules and policies are still very much under development, with varying jurisdictional approaches to regulating AI. The EU has taken a step in regulating AI with the AI Act. Some jurisdictions, like the US, are more hesitant and adhere more loosely to guiding principles.
The 2019 OECD AI Principles, updated in May 2024, guide legislators and policymakers in their legislative and policy efforts; 47 countries adhere to these principles. Furthermore, the G20 has agreed on Guiding Principles for using AI, which were created based on the adopted OECD principles.
The OECD values-based principles are:
Inclusive growth, sustainable development and well-being
Human rights and democratic values, including fairness and privacy
Transparency and explainability
Robustness, security and safety
Accountability
These principles are designed to ensure that all individuals, along with society as a whole, benefit significantly from the continuous advancement of artificial intelligence, aligning closely with the Sustainable Development Goals. Additionally, it is essential to minimize potential risks such as inequality and societal divides. Upholding the rule of law and human rights is paramount, and it is crucial that we maintain control over AI in a manner that is transparent about its workings, including clear information on how and when it is utilized. Furthermore, organizations and governments must be held liable and accountable for guaranteeing the proper functioning and safety of AI systems, ensuring public trust and ethical standards are met.
Figure: AI system lifecycle (source: OECD)
The G7 has recognized the critical need to effectively manage the various risks associated with advanced AI systems. The leaders of the G7 affirm that addressing the complex challenges posed by AI risks necessitates the creation and shaping of an inclusive governance framework for AI. This framework should involve diverse stakeholders to ensure that all perspectives are considered in the decision-making process.
The Hiroshima Process International Guiding Principles for Organizations Developing Advanced AI Systems (Guiding Principles) and the Hiroshima Process International Code of Conduct for Organizations Developing Advanced AI Systems (Code of Conduct) provide voluntary guidance for organisations developing the most advanced AI systems.
I will not delve further into this, as EY (click here) provides a good summary of these guiding principles.
The UN Global Digital Compact aims to ensure an inclusive, open, sustainable, fair, safe, and secure digital future for all. The Digital Compact serves as a framework for the global governance of digital technology and artificial intelligence. It articulates objectives to achieve and principles to adhere to, in alignment with the Sustainable Development Goals (SDGs), to foster safe and trustworthy technologies and promote the use of technology for the betterment of human society, including efforts to reduce inequality and poverty.
Two examples: the EU AI Act and the AI Ethics Principles of Saudi Arabia
Saudi Arabia has been highly proactive in its approach to artificial intelligence (AI), recognizing its pivotal role in achieving the ambitious goals outlined in Vision 2030. To this end, the Saudi Data and Artificial Intelligence Authority (SDAIA) has been tasked with the critical responsibility of developing and establishing comprehensive policies, guidelines, regulations, and frameworks to ensure the ethical and effective use of AI across the Kingdom.
In September 2023, SDAIA published the AI Ethics Principles, a foundational document aimed at guiding the ethical design, development, deployment, implementation, and use of AI systems in Saudi Arabia. These principles are intended to foster trust and accountability among stakeholders while promoting the responsible use of AI technologies.
All stakeholders involved in AI within Saudi Arabia—including those designing, developing, deploying, implementing, using, or impacted by AI systems—are encouraged to adhere to the seven core principles outlined in this framework:
Fairness: Ensuring equal and unbiased treatment in AI decision-making processes.
Privacy and Security: Safeguarding data privacy and protecting against unauthorized access or misuse.
Humanity: AI systems must be designed to serve humanity.
Social and Environmental Benefits: Leveraging AI to drive positive societal and environmental outcomes.
Reliability and Safety: Ensuring AI systems operate dependably and safely, according to their design specifications.
Transparency and Explainability: Building trust through transparent and interpretable AI systems, with clear tracking of decision-making stages and justification of practices and ethics.
Accountability and Responsibility: Establishing mechanisms for oversight and accountability in AI usage to avoid harm and misuse; AI systems should not deceive individuals or unjustifiably infringe on their freedom.
The EU AI Act (Regulation (EU) 2024/1689) lays down the first uniform legal framework for the development, the placing on the market, the putting into service, and the use of artificial intelligence systems (AI systems) in the EU. The comprehensive Act prohibits certain AI practices (e.g. the sole use of AI systems for profiling and assessing the traits and characteristics of natural persons to assess or predict the likelihood of committing a criminal offence) and imposes certain obligations for high-risk AI systems. Note that EU financial services law includes internal governance and risk-management rules and requirements which extend to AI systems.
The Act defines AI system as ‘a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments’. The definition is important as it determines the scope of the rules. An AI system should be distinguished from other programming systems. The EU aims to align the definition with international developments and highlights the differences between AI and other programming systems, particularly regarding AI's ability to make inferences and possess a level of autonomy.
For the financial sector, particular attention should be paid, for example, to the use of AI systems to evaluate the credit scores or creditworthiness of individuals, as these are considered high-risk AI systems under the Act. An exception applies to AI systems used for the purpose of detecting fraud in the offering of financial services.
Obligations for high-risk systems include implementing a quality management system, keeping the technical documentation at the disposal of the authorities for a period of 10 years, and adhering to specific quality criteria for the training, validation, and testing of data sets.
The Regulation also aims to leave room to support innovation and the freedom of science by excluding from its scope AI systems and models specifically developed and put into service for the sole purpose of scientific research and development. Furthermore, SMEs and start-ups are supported through regulatory sandboxes and real-world testing in order to develop and train innovative AI before its placement on the market.
Conclusion
In conclusion, while there are various policy initiatives on AI, the regulatory landscape remains highly fragmented due to the experimental nature of the technology. The European Union has taken the lead by formally introducing the AI Act, a comprehensive regulatory framework, while other jurisdictions, such as the United States and Saudi Arabia, have primarily opted for a more flexible approach, focusing on non-binding principles and guidelines. This divergence reflects a shared intent to balance regulation with fostering AI innovation, ensuring its potential for economic growth, including significant contributions to GDP, is not unnecessarily constrained.