Financial institutions are deploying artificial intelligence at more than twice the rate of the regulators meant to oversee them. Research from the Cambridge Centre for Alternative Finance, published with the Bank for International Settlements and the IMF, found that just two in ten regulators report advanced AI adoption, only 24% of supervisory authorities collect data on industry AI use, and 43% have no plans to start collecting it within the next two years. For finance and accountancy leaders, this creates a window of competitive advantage that rewards those who govern AI well.

The report drew on surveys of 350 financial institutions and fintechs, more than 140 AI vendors, and 130 central banks across 151 countries. Three structural challenges sit at the heart of the regulatory lag: a critical data blind spot, growing concentration risk in AI supply chains, and the emergence of autonomous systems that existing governance frameworks were not built to manage.

The data gap is the most pressing concern. With only 24% of regulators collecting data on how financial firms use AI, supervisory dialogue is proceeding without the empirical foundation needed to assess real exposures. Firms that build robust AI risk frameworks, model documentation, and explainability standards now will be better positioned when regulatory data collection matures, as the report makes clear it must.

Concentration risk adds a second layer of vulnerability. The report found that 69% of all respondents rely on OpenAI for AI services, rising to 76% among financial industry firms. This dependence on a single provider creates exposure to pricing shocks, supply disruptions, and resilience failures. For CFOs and risk functions, AI vendor diversity and continuity planning deserve the same governance rigour as any other critical third-party dependency.

The Irish dimension is direct. The Central Bank of Ireland has been designated the national competent authority for financial AI oversight under the EU AI Act, with enforcement intensifying from August 2026. In its 2026 Regulatory and Supervisory Outlook, the Central Bank identified data, modelling, and AI risks as having increased significantly, warning that widespread third-party AI adoption calls for stronger model governance, transparency, and accountability.

The practical implications for Irish accountancy and finance functions are clear. Governance frameworks must treat AI as a risk management issue, not an innovation project. Vendor due diligence should explicitly assess AI provider concentration, continuity, and contractual accountability. Audit committees and boards should receive regular reporting on AI model risk, including exposure to autonomous systems that third-party vendors control.

The Cambridge Centre findings are best read as a positive mandate. Firms that close the governance gap ahead of regulatory requirements will not only be compliant; they will also be better managed, better protected, and more trusted by clients and counterparties watching the same data.

(The views expressed by the writer are their own and do not necessarily reflect the views or positions of BusinessRiver.)