
Euroviews. Can emerging AI strategies protect people with disabilities and other vulnerable groups?

Just weeks ago, the Bletchley Declaration was signed by 28 countries, which agreed on a risk-based approach to frontier AI, covering areas, types and cases of risk, including health, education, labour and human rights.

It was followed by the US issuing its first AI executive order, requiring safety assessments, civil rights guidance, and research on labour market impacts, accompanied by the launch of the AI Safety Institute.

In parallel, the UK introduced its own AI Safety Institute and the Online Safety Act, echoing the approach of the European Union and its Digital Services Act.

Despite this general agreement, countries are at different stages of deploying this vision, including forming oversight entities, building the required capacity, establishing risk-based assessment and infrastructure, and connecting existing legislation, directives and frameworks.

There are also different approaches to enforcing this oversight, ranging from the stricter approach in the EU (which has drawn opposition from foundational model developers, including Germany's Aleph Alpha and France's Mistral) to a rather "soft" one in the UK.

There are even bigger questions related to specific and high-risk areas that require more attention such as policing, justice and legal systems, health, education, and designated groups.

This is particularly important for groups such as individuals with disabilities, children, and vulnerable populations. 

For instance, it's known that many legal AI systems were trained without the participation of specific populations, leading to higher error rates for those groups. In some countries, governmental agencies have been accused of using social media data without consent to verify patients' disability status for pension programmes.


Read more on euronews.com