The G7’s Cyber Expert Group has issued a collective warning that artificial intelligence could reshape both the strengths and vulnerabilities of the global financial system. In a new statement to finance ministers and central bank governors, the group urged governments, regulators, and financial institutions to stay ahead of rapid advances in generative and autonomous AI technologies that are already altering the cybersecurity landscape.
The report emphasizes that artificial intelligence is transforming the way financial firms detect fraud, manage risk, and protect data. Yet the same technologies, if exploited by hostile actors, could increase the frequency, precision, and impact of cyberattacks. The statement highlights that tools once used exclusively to strengthen defenses are now accessible to criminals, who can use them to generate realistic deepfakes, run sophisticated phishing campaigns, or develop self-adapting malware.
Officials stress that AI’s growing autonomy introduces new forms of uncertainty. Machine-learning systems trained on vast datasets can help identify anomalies or anticipate system failures, but they can also inherit or amplify hidden vulnerabilities. Poorly secured data, contaminated training sets, or weak human oversight could allow attackers to corrupt AI models, trigger system malfunctions, or exfiltrate sensitive financial information.
The G7 group’s position is not prescriptive but advisory. It calls on member states to strengthen cooperation between the public and private sectors, as well as with universities and research institutions, to better understand AI-related cyber threats. The document also urges authorities to promote “secure-by-design” principles when developing AI applications for finance and to ensure that regulatory frameworks evolve in line with technological change.
Financial institutions, the report notes, face particular exposure because of their dependence on complex data infrastructures and third-party service providers. A breach or disruption at a major AI vendor could ripple through global payment systems, credit networks, or customer-facing platforms. The G7 group therefore recommends stronger monitoring of supply-chain dependencies and clearer oversight of AI service providers that support the financial sector.
In the longer term, the experts argue that AI can serve as a powerful ally for cybersecurity—if it is governed and deployed responsibly. Machine-learning systems can improve fraud detection, identify vulnerabilities before they are exploited, and automate responses to incidents that once required hours of manual intervention. To unlock these benefits safely, financial institutions are encouraged to build internal expertise, update risk frameworks to include AI-specific threats, and train staff to recognize both the promise and the danger of AI tools.
As artificial intelligence becomes more embedded in the digital backbone of global finance, the G7 Cyber Expert Group calls for continuous dialogue among governments, regulators, and industry. Only through shared understanding, the statement concludes, can the financial system harness AI’s capabilities without compromising its integrity and resilience.