AI Supervision in the Netherlands: Keeping Innovation Safe

On April 9th, the Dutch Central Bank (DNB) and the Dutch Authority for the Financial Markets (AFM) published a report (Dutch text only) on the impact of Artificial Intelligence (AI) on the financial sector and its supervision. The report is noteworthy because it helps the financial sector understand the supervisors' stance and perspective on AI in the Dutch financial sector.

Definition

There is no single definition of AI. The Dutch supervisors use the OECD definition, which has also been incorporated into the AI Act: "a machine-based system designed to operate with varying levels of autonomy. It may exhibit adaptiveness after deployment and, for explicit or implicit objectives, infer from the input it receives how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments."

Growing Trend

The Dutch supervisors see a growing trend of AI being embraced by the financial sector. In the financial sector, AI is mostly used for chatbots, creditworthiness assessments, algorithmic trading, fraud detection, customer service, investment strategies, risk management, and compliance.

Dutch supervised entities (banks, insurance companies, investment companies, etc.) see the potential of generative AI, but remain somewhat hesitant to adopt it fully, partly due to privacy concerns and uncertainty regarding rules and regulations. For example, many Dutch banks forbid the use of the public version of ChatGPT because of the risk that confidential information is stored on external servers.

Opportunities and Risks for the Financial Sector

The Dutch supervisors acknowledge the potential of AI for the financial sector and recognize numerous benefits. AI can lead to better customer service by enabling more personalized products and faster service. Additionally, AI has the potential to optimize sales channels and create new revenue streams. It can assist in price optimization, lower costs by optimizing processes, and automate tasks.

The main risks mentioned include incorrect data input, data privacy concerns, model bias, discriminatory or unfair biases in AI-generated outputs, and cyber risks.

Supervision on AI

The Dutch supervisors consider it important, in their supervision and regulation, to strike a balance between maintaining the existing objectives of financial supervision and not obstructing innovation in the financial sector. AI systems must not jeopardize the financial soundness and integrity of financial institutions, nor compromise customer interests, fairness, and transparency between parties in financial markets. However, innovation is considered crucial for the competitiveness of financial institutions, and thus for a healthy financial sector, provided that risks are adequately managed. Where supervision effectively manages these risks, sufficient room for innovation should be provided.

Moreover, it is desirable to establish clarity on how regulations apply and on the methods by which institutions must demonstrate that their techniques comply. In the coming years, the Dutch supervisors will enhance their understanding of AI and adjust their supervisory procedures and methodologies to new developments. They will intensify supervision of AI use in the financial sector, categorizing it into input (data governance and quality), throughput (employee competence, human oversight, traceability, and documentation), and output (transparency, explainability, target audience determination, etc.), with an overall focus on integrity, sound governance, and operational resilience.

Image source: European Parliament website
