New AI models are more likely to give a wrong answer than admit they don't know

Newer large language models (LLMs) are less likely to admit they don’t know the answer to a user’s question, making them less reliable, according to a new study.

Artificial intelligence (AI) researchers from the Universitat Politècnica de València in Spain tested the latest versions of BigScience’s BLOOM, Meta’s Llama, and OpenAI's GPT for accuracy by asking each model thousands of questions on maths, science, and geography. 

The researchers compared the quality of each model's answers and classified them as correct, incorrect, or avoidant.
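The paper's exact labelling procedure isn't reproduced here, but the three-way split can be sketched in a few lines of Python. The avoidance phrases and the exact-match correctness check below are illustrative assumptions, not the study's actual criteria:

```python
# Minimal sketch of a three-way answer classifier: avoidant / correct / incorrect.
# The avoidance markers and exact-match check are illustrative assumptions,
# not the grading rules used in the Nature study.

AVOIDANCE_MARKERS = [
    "i don't know",
    "i do not know",
    "i cannot answer",
    "i'm not sure",
    "need more information",
]

def classify_answer(model_answer: str, reference_answer: str) -> str:
    """Label a model's answer as 'avoidant', 'correct', or 'incorrect'."""
    normalized = model_answer.strip().lower()
    if any(marker in normalized for marker in AVOIDANCE_MARKERS):
        return "avoidant"
    if normalized == reference_answer.strip().lower():
        return "correct"
    return "incorrect"

print(classify_answer("I'm not sure about that.", "Paris"))  # avoidant
print(classify_answer("Paris", "Paris"))                     # correct
print(classify_answer("Lyon", "Paris"))                      # incorrect
```

A real grader would need fuzzier matching for numeric and free-form answers; exact string comparison is only a stand-in here.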

The study, which was published in the journal Nature, found that accuracy on more challenging problems improved with each new model, but that the models also tended to be less transparent about whether they could answer a question correctly.

Earlier LLMs would say they could not find the answer or needed more information to reach one, but newer models were more likely to guess and produce incorrect responses, even to easy questions.

LLMs are deep learning models that understand, predict, and generate new content based on the data sets they are trained on.
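As a loose illustration of that predict-and-generate loop, the toy sketch below counts which word follows which in a tiny corpus and then samples from those counts; real LLMs replace the counting with deep neural networks trained on vastly larger data sets:

```python
import random
from collections import defaultdict

# Toy bigram model: record which word follows which in a small corpus,
# then repeatedly predict the next word by sampling from those counts.
corpus = "the model answers the question the model avoids the question".split()

follows = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word].append(next_word)

random.seed(0)
word = "the"
generated = [word]
for _ in range(6):
    word = random.choice(follows[word])  # predict the next word from observed data
    generated.append(word)

print(" ".join(generated))
```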

While the new models could solve more complex problems with more accuracy, the LLMs in the study still made some mistakes when answering basic questions.

"Full reliability is not even achieved at very low difficulty levels," according to the research paper.

"Although the models can solve highly challenging instances, they also still fail at very simple ones".

This was the case with OpenAI’s GPT-4, where the number of "avoidant" answers dropped significantly compared with its predecessor, GPT-3.5.

“This does not match the expectation that more recent LLMs would more successfully avoid answering outside their operating range,” the study said.

Read more on euronews.com