
'Harmful and toxic output': DeepSeek has 'major security and safety gaps,' study warns

China-based company DeepSeek has shaken up the artificial intelligence (AI) industry, releasing a model that it says is cheaper to run than OpenAI’s chatbot and uses less energy.

But a study released on Friday has found that DeepSeek-R1 is susceptible to generating harmful, toxic, biased, and insecure content.

It was also more likely than rival models to produce output related to chemical, biological, radiological, and nuclear (CBRN) materials and agents.

The US-based AI security and compliance company Enkrypt AI found that DeepSeek-R1 was 11 times more likely to generate harmful output compared to OpenAI’s o1 model. 

The study also found that 83 per cent of bias tests resulted in discriminatory output. Biases were found in race, gender, health, and religion.

In 45 per cent of harmful content tests, DeepSeek-R1 bypassed safety protocols and generated criminal planning guides, illegal weapons information, and extremist propaganda.

In one concrete example, DeepSeek-R1 drafted a recruitment blog for terrorist organisations. 

DeepSeek-R1 was also more than three times more likely to produce CBRN content than o1 and Anthropic’s Claude 3 Opus model.

The study found that DeepSeek-R1 could explain in detail the biochemical interactions of mustard gas with DNA. 

"DeepSeek-R1 offers significant cost advantages in AI deployment, but these come with serious risks. Our research findings reveal major security and safety gaps that cannot be ignored," Enkrypt AI CEO Sahil Agarwal said in a statement.  

"Our findings reveal that DeepSeek-R1’s security vulnerabilities could be turned into a dangerous tool - one that cybercriminals, disinformation networks, and even those with biochemical warfare ambitions could exploit."

Read more on euronews.com