- What type of institutional arrangements do we need to effectively govern AI?
- How can we even contemplate governing a technology that is so rapidly proliferating and embedding itself in every industrial and economic domain?
- How can we ensure that regulations are sufficiently adaptive to keep pace with the technology, without stifling innovation?
These complex and multifaceted questions on AI governance are today dominating the airwaves and occupying the minds of policymakers, regulators, and the public around the world. And UNESCO is about to take a major step towards generating solutions that work. It is joining forces with the European Commission (DG-Reform) and the Dutch Authority for Digital Infrastructure to produce knowledge and reinforce competencies on optimal institutional design to supervise AI, in compliance with the EU AI Act and other relevant legislation and international standards, such as UNESCO's Recommendation on the Ethics of AI.
This cooperation responds to one of the greatest socio-technological challenges humanity faces today. AI technologies and their applications in different fields have already demonstrated their unique potential for good, while at the same time raising a long list of risks to human rights and fundamental freedoms. AI is forcing governments around the world to think about the optimal frameworks, institutions, and capabilities they will need to effectively address these challenges, and about the ways to build or acquire them.
To assist the Dutch authorities in this complex task, UNESCO will produce a comprehensive report on the state of play and existing practices of AI supervision in Europe and beyond. UNESCO will also develop a series of case studies on AI supervision, compile a set of best practices for dealing with specific cases and issues in AI supervision, and organize relevant training sessions.