
5 of the most damaging ways AI could harm humanity, according to MIT experts

As artificial intelligence (AI) technology advances and becomes increasingly integrated into various aspects of our lives, there is a growing need to understand the potential risks these systems pose.

Since its inception, and as it has become more accessible to the public, AI has raised broad concerns about its potential to cause harm and to be used for malicious purposes.

Early in its adoption, AI development prompted prominent experts to call for a pause in progress and for stricter regulations, citing the significant risks the technology could pose to humanity.

Over time, new ways in which AI could cause harm have emerged, ranging from non-consensual deepfake pornography and the manipulation of political processes to the generation of disinformation through hallucinations.

With the increasing potential for AI to be exploited for harmful purposes, researchers have been looking into various scenarios where AI systems might fail.

Recently, the FutureTech group at the Massachusetts Institute of Technology (MIT), in collaboration with other experts, has compiled a new database of over 700 potential risks.

The risks were classified by their cause and categorised into seven distinct domains, with the major concerns relating to safety, bias and discrimination, and privacy.

Here are five ways AI systems could fail and potentially cause harm based on this newly released database.

As AI technologies advance, so do the tools for voice cloning and deepfake content generation, making them increasingly accessible, affordable, and efficient.

These technologies have raised concerns about their potential use in spreading disinformation, as the outputs become more personalised and convincing.

As a result, there could be an increase in sophisticated phishing attacks.

Read more on euronews.com