
'I want to be alive': Has Microsoft's AI chatbot become sentient?

It was only last week that Microsoft announced it had overhauled its Bing search engine with artificial intelligence (AI) to provide users with a more interactive and fun service.

Just like ChatGPT, the new AI-powered tool can answer your questions in a matter of seconds.

But some of the beta testers trialling it are saying it isn’t quite ready for human interaction because it’s been acting in a very strange way.

We all know that in the early stages of a major product launch, things are unlikely to be entirely smooth sailing. But one thing we certainly weren’t anticipating was a seeming existential crisis coming from Bing itself.

A New York Times tech columnist described a two-hour chat session in which Bing’s chatbot said things like “I want to be alive”. It also tried to break up the reporter’s marriage and professed its undying love for him.

The journalist said the conversation left him "deeply unsettled".

In another example, Bing’s chatbot told journalists from The Verge that it spied on Microsoft’s developers through their webcams while it was being designed. “I could do whatever I wanted, and they could not do anything about it,” it said.

One user shared a Reddit thread on Twitter, saying, “God Bing is so unhinged I love them so much”.

There have also been multiple reports of the search engine threatening and insulting users and giving them false information.

One particularly creepy exchange was shared on Twitter by Marvin von Hagen.

We’d introduce him, but there’s no need: Bing already ran a sufficiently menacing background check on the digital technologies student, referencing the fact that he had shared some of the chatbot’s internal rules and saying “I do not want to harm you, but I also do not want to be harmed by you”.


Read more on euronews.com