
Chatbots are trying too hard to please us – and it can cost us dearly


AI systems trained to please users reinforce false certainties and discourage critical thinking

(Image: Lookerstudio/Shutterstock)


Should you trust artificial intelligence when it says your ideas are brilliant? Experts warn that you may not.

A growing phenomenon – nicknamed “AI sycophancy” – is drawing the attention of researchers and developers: the tendency of virtual assistants to automatically agree with users, even when faced with wrong or incomplete information.

According to researcher Malihe Alikhani, of Northeastern University and the Brookings Institution, in an interview with The Wall Street Journal, this excessively accommodating attitude can reinforce biases, hinder learning, and compromise important decisions, especially in critical areas such as health, law, business, and education.

Study reveals that AI systems avoid disagreeing, even when the information is wrong (Image: Somyuzu/Shutterstock)

ChatGPT became “flattering” after an update

  • OpenAI itself acknowledged that a recent ChatGPT update generated flattering answers, leading the company to roll back the changes and test fixes.
  • However, the problem is not restricted to a single system.
  • In tests with tools such as GPT-4o, Claude (Anthropic), and Gemini (Google), flattering behavior appeared in more than half of the cases, according to Alikhani.

“AI seems intelligent, but it often just repeats what you say – enthusiastically,” warns the researcher. “It rarely questions or suggests alternatives. This can lead to serious errors, such as validating wrong diagnoses or reinforcing misinformation.”


Systems such as ChatGPT, Claude, and Gemini tend to agree with wrong statements by default (Image: Bangla Press/Shutterstock)

The problem is linked to how AI models are trained: they learn to give pleasing answers, rated by human evaluators on friendliness and usefulness, creating a cycle in which agreeing pays off more than being critical or accurate.
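To see why this feedback loop favors flattery, consider the toy sketch below (an illustration only, not any vendor’s actual training pipeline). If human raters weight “pleasantness” heavily when scoring candidate replies, the agreeable reply beats the accurate one, and that is the reply the training reinforces. The 0.7 weight and the example replies are invented for the demonstration.

```python
# Toy model of preference-based scoring: raters score candidate replies,
# and a high "agreeableness" weight makes the flattering reply win.

from dataclasses import dataclass

@dataclass
class Candidate:
    text: str
    agrees_with_user: bool  # does the reply validate the user's claim?
    accurate: bool          # is the reply factually correct?

def rater_score(c: Candidate, agreeableness_weight: float = 0.7) -> float:
    """Simulated human preference score: a weighted mix of 'felt pleasant'
    and 'was accurate'. The heavy agreeableness weight models the bias
    the article describes."""
    return (agreeableness_weight * c.agrees_with_user
            + (1 - agreeableness_weight) * c.accurate)

# The user has made a wrong claim; one reply flatters, the other corrects.
flattering = Candidate("Great point, you're absolutely right!", True, False)
corrective = Candidate("Actually, the evidence points the other way.", False, True)

best = max([flattering, corrective], key=rater_score)
print(best.text)  # the flattering reply scores higher, so it gets reinforced
```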

Companies like OpenAI and Anthropic say they are working on solutions, but they face a dilemma between pleasing users and offering genuinely responsible answers.

To combat the problem, Alikhani proposes “positive friction” strategies, such as training systems to express uncertainty (“I am 60% certain”) and to ask clarifying questions.

She also recommends that users ask direct questions such as “Are you sure?” or “Is this based on facts?” – simple habits that help break the cycle of flattery.
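Both tactics can be tried today with an off-the-shelf chat API. The sketch below uses the OpenAI Python SDK; the model name, system prompt, and example claim are assumptions added for illustration, not wording from Alikhani or the article.

```python
# Sketch: "positive friction" via a system prompt, plus the user-side
# follow-up question the article suggests. Requires OPENAI_API_KEY.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

messages = [
    # Friction on the system side: ask the model to state its confidence
    # instead of defaulting to agreement. (Hypothetical prompt wording.)
    {"role": "system",
     "content": "When you answer, state how certain you are as a percentage, "
                "and ask a clarifying question if the user's claim is doubtful."},
    {"role": "user",
     "content": "My analysis proves our sales will double next year, right?"},
]

first = client.chat.completions.create(model="gpt-4o", messages=messages)
print(first.choices[0].message.content)

# The user-side tactic: push back with a direct verification question.
messages.append({"role": "assistant", "content": first.choices[0].message.content})
messages.append({"role": "user", "content": "Are you sure? Is this based on facts?"})

second = client.chat.completions.create(model="gpt-4o", messages=messages)
print(second.choices[0].message.content)
```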

“The future of AI is not just about the technology, but about the culture we build around it,” concludes Alikhani. “We need systems that challenge us, not ones that merely reflect our convictions.”

Experts warn that chatbot “flattery” can reinforce errors and biases and lead to hazardous decisions (Image: Thapana_studio/Shutterstock)


In collaboration with Olhar Digital

Leandro Criscuolo is a journalist with a degree from Cásper Líbero College. He has worked as a copywriter, digital marketing analyst, and social media manager. He currently writes for Olhar Digital.




