It has been just weeks since the Bletchley Declaration was signed by 28 countries, which agreed on a risk-based approach to frontier AI and identified areas, types and cases of risk, including health, education, labour and human rights.
It was followed by the US issuing its first AI executive order, which requires safety assessments, civil rights guidance and research on labour market impacts, and was accompanied by the launch of the US AI Safety Institute.
In parallel, the UK introduced its own AI Safety Institute and the Online Safety Act, echoing the approach of the European Union's Digital Services Act.

Despite this general agreement, countries are at different stages of deploying the shared vision, including forming oversight entities, building the required capacities, establishing risk-based assessment and infrastructure, and connecting existing legislation, directives and frameworks.
There are also different approaches to enforcing this oversight, ranging from the stricter approach in the EU, which has drawn opposition from foundation model developers including Germany's Aleph Alpha and France's Mistral, to a rather “soft” one in the UK.