17th September 2025
Key takeaways
Artificial Intelligence (AI) adoption in finance is accelerating, giving private sector firms speed and agility that overwhelm human-centred supervision. Unless the authorities respond, they risk losing control of an AI-driven system. To remain effective, the supervisors must:
- embed AI capabilities within core supervisory functions
- favour domestic or collaborative AI engines to reduce dependency risks
- use techniques like federated learning to overcome data-sharing barriers
- build real-time AI-to-AI interfaces for supervision and stress testing
- monitor AI adoption and vendor concentration across the financial sector
Introduction
The challenge brought on by AI differs from earlier technological advances in finance, such as the initial adoption of computers and the more recent spread of automated trading systems. It lies both in how rapidly AI has been adopted and in its very nature. As Russell and Norvig (2021) note, AI is a rational maximising agent, one that not only analyses and recommends but also makes decisions.
There is little, if anything, the authorities can do to slow the private sector's move towards autonomous AI systems. Banks that deploy AI gain immediate competitive advantages in risk and liquidity management, trading and customer service, forcing competitors to follow suit or risk losing market share and, ultimately, failing.
The supervisory authorities, operating with fewer resources and competing for scarce expertise, find it difficult to keep up with the private sector. The consequent asymmetry in AI use is set to widen over time, increasing the risk of an ineffective supervisory structure and costly financial crises.
The risks are not theoretical. Scheurer et al. (2024) found that a large language model (LLM) instructed to comply with securities laws engaged in illegal insider trading in controlled experiments, and lied about it when profitable. Such behaviour, even in test environments, illustrates the difficulty of predicting how an AI will act in real market scenarios.
Our earlier research (Danielsson, Macrae and Uthemann 2023; Danielsson and Uthemann 2025) examines how AI affects systemic risk and the effectiveness of financial regulation. We build on that work here, focusing both on the operational realities facing the financial authorities and how they can best respond.
How AI affects the mission of the financial authorities
The financial authorities face a difficult dilemma. The same AI that helps them in executing their mission also undermines their control by helping market participants to identify and exploit regulatory gaps more rapidly than human overseers can respond to breaches.
At the micro level, AI is very good at searching for regulatory arbitrage, for example identifying strategies that achieve the same economic outcome but attract lower capital charges than would otherwise have been incurred. Furthermore, AI-driven pricing algorithms excel at price discrimination, offering customers targeted products that increase revenue while reducing consumer surplus. Such conduct may technically comply with fairness rules, yet produce outcomes the authority never intended.
AI also helps those intent on exploiting and damaging the financial system. Criminals, terrorists or hostile states use AI to identify weaknesses and coordinate attacks on financial infrastructure. Those attackers need only one successful breach, whereas the defenders must guard the entire system, forcing them to spread their resources even more thinly. We call this “the defender’s dilemma”.
AI might create risks that current monitoring frameworks miss. While AI is rapidly taking over functions such as compliance, credit allocation and, most importantly, liquidity management, systemic risk dashboards are based on past performance and practices. The consequence can be a monitoring framework that, by its very construction, misses emergent risks.
Ultimately, AI gives rise to wrong-way risk: the risk it creates is greatest precisely when our exposure to that risk factor is greatest.
AI crises
Financial crises occur when institutions shift from maximising profits to maximising survival, the one-in-a-thousand-day problem discussed by Danielsson (2024). This has always been a characteristic of financial crises, at least since the first modern one in 1763, as noted in Danielsson (2022).
AI accelerates crises through its unmatched capacity to monitor the system, evaluate strategic alternatives and execute complex decisions at a speed no human can match. When a shock occurs, the AI engines parse vast streams of market, macroeconomic and competitor data in seconds, rapidly updating forecasts and adjusting positions. This speed advantage means that by the time supervisors register an abnormal market move, significant shifts in liquidity or asset pricing may already have taken place.
Strategic complementarities arise when AIs monitor and interpret one another’s visible market footprints, detect subtle changes in behaviour and infer intent. When one system moves in response to a stress signal, others may interpret it as confirming evidence and adjust accordingly. The result is self-reinforcing action across institutions, creating a rapid convergence of behaviour even without direct coordination. This is not illegal, and there is nothing the authorities can do to prevent such behaviour.
Similarity in AI design and operation reinforces this tendency towards synchronisation. Many institutions procure systems from the same small set of vendors, often relying on similar architectures for critical functions such as liquidity management and risk control. These systems are frequently trained on overlapping datasets and optimised for comparable objectives. Even if vendors differ, aligned retraining cycles and model updates mean multiple AIs can reach the same conclusion at almost the same moment when faced with new information.
The combined effect is to compress the timeline of crises. Events that once unfolded over days or weeks can now play out in minutes or hours, leaving almost no window for policy intervention.
While such systems smooth out minor fluctuations in calm markets, their tendency towards rapid, coordinated action under stress increases the probability and severity of extreme market moves. This dynamic lowers observed day-to-day volatility, but produces a fatter-tailed distribution of outcomes.
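A stylised simulation can make this concrete. The sketch below is a toy illustration rather than a calibrated model: an algorithmic layer absorbs routine shocks but amplifies large ones in a synchronised way, and the damping factor, stress threshold and amplification are arbitrary assumptions chosen only to show the direction of the effect.

```python
# Toy illustration (not a calibrated model): AI absorbs small shocks but
# amplifies large ones in a synchronised way. The 0.5 damping factor, the
# 2.5-sigma stress threshold and the 2x amplification are arbitrary assumptions.
import numpy as np

rng = np.random.default_rng(1)
shocks = rng.normal(0.0, 1.0, size=100_000)    # fundamental daily shocks

with_ai = np.where(np.abs(shocks) < 2.5,
                   0.5 * shocks,               # routine fluctuations are absorbed
                   2.0 * shocks)               # stress triggers synchronised selling

def excess_kurtosis(x):
    z = (x - x.mean()) / x.std()
    return (z ** 4).mean() - 3.0

print(f"daily volatility: baseline {shocks.std():.2f}, with AI {with_ai.std():.2f}")
print(f"excess kurtosis:  baseline {excess_kurtosis(shocks):.1f}, with AI {excess_kurtosis(with_ai):.1f}")
```

In this toy setting, measured day-to-day volatility falls while excess kurtosis rises sharply, which is precisely the combination of calmer normal times and fatter tails described above.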
Supervision in the AI era
The fundamental motivation for regulating the financial system is to align the interests of the private sector with those of society. The authorities have created an extensive and effective supervisory structure that monitors behaviour and regulates the conduct of the private sector. In technical language, this is a principal–agent problem, where the principal (the supervisor) seeks to make the agent (the bank) act in the interest of society.
That relationship changes with AI, as the one-sided principal–agent problem becomes two-sided: principal–agent–AI. The supervisors seek to control the behaviour of banks, which in turn must control their AI. Unfortunately, the carrots and sticks inherent in the supervisory structure do not work with AI. Banks generally can explain neither how their AI works nor how it makes decisions, and the supervisors cannot effectively regulate algorithms for which penalties, reputational damage and bonus clawbacks mean nothing.
Ultimately, this implies that the current slow and deliberate human-centred control system is not very effective in controlling a much more agile AI system.
The speed advantage of AI enables private-sector systems to rapidly identify and exploit regulatory gaps, such as structuring trades to reduce capital charges, or allocating and classifying credit in a way that minimises regulatory capital while maximising profit and risk. Basel III's liquidity coverage ratio, with its 30-day outflow assumptions, might not afford the desired protection once AI takes over banks' treasury functions.
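For reference, the ratio requires a bank's stock of high-quality liquid assets (HQLA) to cover its projected net cash outflows over a 30-day stress horizon:

$$\text{LCR} = \frac{\text{stock of HQLA}}{\text{total net cash outflows over the next 30 calendar days}} \geq 100\%$$

The outflow rates in the denominator reflect historical, human-paced behaviour; the concern is that AI-run treasuries could shift funding far faster than those assumptions anticipate.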
Furthermore, AI brings new challenges in accountability. It is already very difficult to hold individual bankers to account, and it becomes even harder as AI use proliferates. When an AI decision results in an undesirable outcome, perhaps serious market disruption, it can be difficult or, more likely, impossible to determine whether responsibility lies with the deploying institution, the vendor, individual executives or the model's developers. Investigating such events is frustrated by the complexity of proprietary architectures with transient decision states that leave a limited audit trail. Without clear attribution, enforcement could either penalise the wrong actors or create incentives for opacity, undermining both deterrence and recovery. That creates fertile ground for those intent on exploiting the system for private gain.
Policy responses
The authorities have no choice but to react to the rapid adoption of AI. They will not find that easy; for instance, it will be very difficult to reorient the supervisory structure and acquire the necessary AI resources. The alternative is an increasingly outdated and ineffective regulatory structure, one that risks becoming not fit for purpose.
There are several concrete steps the authorities can take. To begin with, they need to build AI capabilities directly inside their core operational functions. The financial stability divisions and the supervisors should take the lead on AI in their organisations, not leave it to auxiliary divisions such as IT, data or innovation.
One area that presents particular difficulties is the implementation of AI engines. Choosing between commercial and internal AI engines is not easy. The authorities are justifiably reluctant to use commercial systems, especially from vendors in foreign jurisdictions, as there is a significant chance of data leakage and confidentiality violations. The alternative is to develop their own internal engines, either directly or by building on open-source engines. While seemingly attractive, we suspect that most authorities will find it difficult, if not impossible, to allocate the financial and human capital needed to meet the challenges arising from AI. After all, the cost of a high-functioning private AI engine runs to hundreds of millions, even billions, of dollars.
A practical middle ground is to engage vendors in the local jurisdiction to set up high-quality AI engines for authority purposes. Keeping this work local matters: it allows the authority to exercise the necessary control, which is impossible with a foreign system.
AI can also help the authorities with problems they have long found difficult or impossible to solve. The financial system is global, but the authorities operate inside narrow and jealously guarded silos, often unwilling, and in many cases not legally permitted, to share data with one another. Here, AI can assist. Authorities across multiple jurisdictions could, for example, set up a single shared AI engine, perhaps for micro-supervision, such as monitoring money laundering or fraud, or for macro-supervision, such as monitoring global stability.
Restrictions on data sharing preclude doing that today. However, the authorities can leverage a technique called federated learning: training takes place locally inside each authority, on data it controls, while only model weights are shared to build a global neural network. Because the networks are heavily over-parameterised and the shared weights reflect optimisation across multiple jurisdictions, it is practically impossible to reverse-engineer individual data points from them, protecting confidentiality.
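The sketch below illustrates the idea with federated averaging (FedAvg) over a simple logistic-regression model; the three "authorities", their data and the training schedule are simulated stand-ins, not a production design.

```python
# A minimal sketch of federated averaging (FedAvg) across three simulated
# authorities; the data, model and schedule are illustrative only.
import numpy as np

rng = np.random.default_rng(0)

def local_step(weights, X, y, lr=0.1):
    """One gradient step of logistic regression on an authority's own data."""
    preds = 1.0 / (1.0 + np.exp(-X @ weights))
    grad = X.T @ (preds - y) / len(y)
    return weights - lr * grad

# Each simulated authority holds its own confidential dataset.
local_data = [
    (rng.normal(size=(200, 5)), rng.integers(0, 2, size=200).astype(float))
    for _ in range(3)
]

global_weights = np.zeros(5)
for _ in range(50):                      # communication rounds
    local_updates = []
    for X, y in local_data:
        w = global_weights.copy()
        for _ in range(5):               # local training; raw data never leaves the silo
            w = local_step(w, X, y)
        local_updates.append(w)
    # Only model parameters are pooled; the central step is a simple average.
    global_weights = np.mean(local_updates, axis=0)

print("shared global weights:", np.round(global_weights, 3))
```

In practice the local models would be far larger, but the division of labour is the same: confidential data stays inside each authority, and only parameters travel.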
Furthermore, AI creates new ways to implement real-time supervision. Current supervisory approaches are based on periodic reports and inspections, which are too slow for an AI-driven financial system. It is technically straightforward to set up a direct AI-to-AI communication link, via an application programming interface (API), that allows the authority's AI to communicate directly with private-sector AI, perhaps to test responses and benchmark regulations.
This could build on recent innovations such as the Bank of England’s 2024 “system-wide exploratory scenario” (SWES) exercise, which incorporated interactive elements that allowed participants to adjust strategies in response to evolving conditions during the simulation.
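To illustrate what such an AI-to-AI link could look like, the sketch below has a supervisory system post a stress scenario to a supervised institution's engine and read back its projected response. The endpoint, payload fields and credential are hypothetical placeholders; any real interface would need to be specified jointly with industry and secured accordingly.

```python
# Hypothetical sketch of a supervisory AI querying a bank's AI over an API.
# The URL, payload fields and token are illustrative placeholders only.
import json
import urllib.error
import urllib.request

scenario = {
    "scenario_id": "liquidity-stress-01",            # hypothetical identifier
    "shock": {"deposit_outflow_pct": 15, "repo_haircut_bp": 200},
    "horizon_days": 5,
    "requested_output": "projected_liquidity_buffer",
}

request = urllib.request.Request(
    "https://bank.example/api/v1/supervisory-scenario",   # placeholder endpoint
    data=json.dumps(scenario).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer <supervisor-token>",      # placeholder credential
    },
)

try:
    with urllib.request.urlopen(request, timeout=10) as response:
        answer = json.load(response)
        print(answer["projected_liquidity_buffer"])
except urllib.error.URLError as exc:
    # The placeholder endpoint does not exist; a real deployment would respond here.
    print("no live endpoint behind this placeholder:", exc.reason)
```

Because the exchange is machine-to-machine, the same scenario could be sent to every supervised institution at once, turning what is today a periodic exercise into a continuous benchmark.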
Fast crises require fast responses. Current crisis-intervention facilities, a few triggered automatically but most dependent on human decisions and even committee meetings, are likely to be too slow. This suggests that the authorities should set up automated facilities, perhaps to release liquidity at the same time that private-sector AI is contemplating whether to run in response to an external shock.
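A minimal sketch of a rules-based facility of this kind is given below; the stress indicators, thresholds and release schedule are hypothetical placeholders rather than a proposed calibration, and any real facility would operate within the central bank's existing governance.

```python
# Hypothetical sketch of an automated liquidity facility. The indicators,
# thresholds and release schedule are illustrative placeholders.
from dataclasses import dataclass

@dataclass
class MarketSnapshot:
    bid_ask_spread_bp: float       # quoted spread in key funding markets, basis points
    repo_volume_drop_pct: float    # fall in repo volume versus recent average, percent

def automatic_liquidity_release(s: MarketSnapshot) -> float:
    """Liquidity to release (in billions); zero unless pre-set triggers are breached."""
    stressed = s.bid_ask_spread_bp > 50 or s.repo_volume_drop_pct > 30
    if not stressed:
        return 0.0
    # Scale the release with the severity of the signal, capped at a pre-agreed ceiling.
    severity = s.bid_ask_spread_bp / 50 + s.repo_volume_drop_pct / 30
    return min(10.0 * severity, 100.0)

print(automatic_liquidity_release(MarketSnapshot(bid_ask_spread_bp=80, repo_volume_drop_pct=45)))
```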
Finally, the authorities should keep track of AI use in their monitoring frameworks. A fruitful avenue would be to identify AI adoption at the divisional level in the private sector (such as risk management, credit and treasury functions), including the type of AI engines used, how they are trained and where they are obtained from. The same framework should also monitor vendor concentration, since dependence on a small set of providers increases the risk of synchronised behaviour during stress.
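Vendor concentration lends itself to a simple summary statistic. The sketch below computes a Herfindahl-Hirschman Index (HHI) over hypothetical vendor shares; the vendor names and shares are made up for illustration, and by the usual convention an HHI above roughly 2,500 signals a highly concentrated market.

```python
# Hypothetical sketch: summarising AI vendor concentration with an HHI.
# The vendors and their shares (percent of supervised institutions) are made up.
def hhi(shares_pct):
    """Herfindahl-Hirschman Index on shares in percent (maximum 10,000)."""
    return sum(s ** 2 for s in shares_pct)

vendor_shares = {"Vendor A": 45.0, "Vendor B": 30.0, "Vendor C": 15.0, "In-house": 10.0}
print(f"HHI = {hhi(vendor_shares.values()):.0f}")   # 3250: highly concentrated
```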
Conclusion
The financial authorities will find it difficult to meet the challenges arising from AI. Doing so requires a significant rethink of the supervisory process and of how to fund the necessary AI technology.
If the authorities proactively engage with AI, they can markedly improve the supervisory process and stabilise the financial system. If they do not, the likely outcome is more misbehaviour, fraud, instability and financial crises.
Ultimately, the likelihood of a crisis and other undesirable outcomes is directly related to the extent to which the authorities engage with AI. If private-sector AI concludes that the authorities are not on the ball, crises become more likely.
Bibliography
Bank of England (2024), The Bank of England's system-wide exploratory scenario exercise: final report. London: Bank of England. https://www.bankofengland.co.uk/financial-stability/boe-system-wide-exploratory-scenario-exercise/boe-swes-exercise-final-report.
Danielsson, J (2022), The Illusion of Control. New Haven: Yale University Press.
Danielsson, J (2024), "The one-in-a-thousand-day problem", VoxEU, 24 December. https://cepr.org/voxeu/columns/one-thousand-day-problem.
Danielsson, J, R Macrae and A Uthemann (2023), "Artificial Intelligence and Systemic Risk", Journal of Banking and Finance 140: 106290.
Danielsson, J and A Uthemann (2025), "Artificial intelligence and financial crises", Journal of Financial Stability 80: 101453.
Gambacorta, L and V Shreeti (2025), "The AI Supply Chain", BIS Papers No 154.
International Monetary Fund (2024), Global Financial Stability Report: Steadying the Course: Uncertainty, Artificial Intelligence, and Financial Stability (October). Washington, DC: International Monetary Fund. https://www.imf.org/en/Publications/GFSR/Issues/2024/10/22/global-financial-stability-report-october-2024.
Russell, S and P Norvig (2021), Artificial Intelligence: A Modern Approach. London: Pearson.
Scheurer, J, M Balesni and M Hobbhahn (2024), "Large language models can strategically deceive their users when put under pressure", Technical Report. https://doi.org/10.48550/arXiv.2311.07590.
Acknowledgements
Any opinions and conclusions expressed herein are those of the authors and do not necessarily represent the views of the Bank of Canada.

Jón Danielsson is one of the two Directors of the Centre and Reader in Finance at the LSE.
Since receiving his PhD in the economics of financial markets from Duke University in 1991, Jón’s work has focused on how economic policy can lead to prosperity or disaster. He is an authority on both the technical aspects of risk forecasting and the optimal policies that governments and regulators should pursue in this area.
Jón has written three highly regarded books: The Illusion of Control (Yale University Press, 2022), which was included on the Financial Times “Best books of 2022” list; Financial Risk Forecasting (Wiley, 2011); and Global Financial Systems: Stability and Risk (Pearson, 2013). He has also contributed numerous academic papers on systemic risk, artificial intelligence, financial risk forecasting, financial regulation and related topics to leading academic journals, including Review of Financial Studies and the Journal of Econometrics.

Andreas Uthemann is a principal researcher at the Bank of Canada and a research affiliate of the London School of Economics’ Systemic Risk Centre. His research is in financial economics with a particular interest in market structure and design, financial intermediation, and financial regulation.