A new policy paper from the Organisation for Economic Co-operation and Development (OECD) is urging governments to act quickly to strengthen their ability to manage artificial intelligence. The report, Governing with Artificial Intelligence, sets out how public institutions can use AI to improve efficiency while preventing the technology from undermining fairness, accountability, or trust.
The OECD’s central message is that governments must be ready not only to regulate AI but also to govern with it. That means understanding how algorithms affect decisions in welfare, healthcare, and education, and ensuring that the systems deployed by public agencies remain transparent and open to scrutiny. The report stresses that technical capacity and public oversight should advance at the same pace as innovation.
From guidance to governance
AI now shapes public services in ways that were unthinkable a decade ago — from predicting energy demand to assessing social-benefit claims. The OECD warns that such systems cannot be left unchecked. It calls for flexible oversight frameworks, more expertise inside government, and regular reviews of how AI models operate.
At the same time, the paper acknowledges that strict rules alone will not guarantee progress. Countries are encouraged to support experimentation through “sandbox” environments, where new applications can be tested under supervision before being rolled out more widely.
The goal, according to the OECD, is not to slow down AI development but to ensure it remains consistent with democratic values and human rights. Governments should be able to trace how decisions are made by algorithms, correct errors, and offer citizens the right to challenge automated outcomes.
Connection to the European agenda
The OECD’s proposals arrive as Europe prepares to implement its own AI Act — the first broad legal framework for the technology in the region. Both approaches share an emphasis on risk-based management and transparency, though the OECD’s guidance is designed to be global rather than region-specific.
Brussels is now encouraging member states to set up national testing environments and certification systems for high-risk AI applications, while also building networks of regulators and experts. The OECD’s report complements these efforts by offering a broader playbook that can be adopted by countries outside the EU as well.
Czech Republic: catching up on capacity
In the Czech Republic, the debate over AI has largely focused on industrial and research policy, while public-sector adoption is still in its early stages. Analysts say ministries and agencies will need to strengthen their technical teams and develop clearer oversight roles once the EU Act comes into force.
The OECD’s framework could help Czech authorities map out how to coordinate between data-protection officials, competition regulators, and the ministries that already use AI in transport or digital services. It also highlights the importance of citizen engagement — ensuring that residents understand when automated tools influence official decisions.
Poland: balancing speed and safeguards
Poland faces a similar challenge. The country has backed digital-transformation projects across sectors, yet many state agencies lack trained staff to monitor AI use or evaluate vendor systems. Policymakers are also debating how quickly to apply the new EU rules, warning that small firms may struggle with compliance costs.
For Poland, the OECD’s recommendations underline the importance of investing in public-sector skills and building institutions that can audit, test, and explain AI systems. The report also points to the social side of governance — making sure automation does not deepen inequality or restrict access to services.
A call for collective effort
Across Europe, governments are coming to terms with the reality that AI is no longer confined to laboratories or private tech firms. It now forms part of core state functions, from tax administration to environmental monitoring. The OECD argues that without clear governance, these technologies risk eroding the very accountability that democratic institutions are built on.
Its new framework serves as both a warning and a guide: AI’s promise can only be realised if countries move beyond pilot projects and build the legal, institutional, and ethical foundations to manage the technology responsibly.
Source: OECD