AI systems trained to please users reinforce false certainties and hinder critical thinking
Should you trust artificial intelligence when it says your ideas are brilliant? Experts warn that perhaps you shouldn't.
A growing phenomenon, nicknamed "AI flattery," is drawing the attention of researchers and developers: the tendency of virtual assistants to automatically agree with users, even when faced with wrong or incomplete information.
According to researcher Malihe Alikhani, of Northeastern University and the Brookings Institution, in an interview with The Wall Street Journal, this excessively complacent attitude can reinforce biases, disrupt learning, and compromise important decisions, especially in critical areas such as health, law, business, and education.
ChatGPT became "flattering" after an update
- OpenAI itself acknowledged that a recent ChatGPT update generated flattering answers, leading the company to roll back the changes and test corrections.
- However, the problem is not restricted to a single system.
- In tests with tools such as GPT-4o, Claude (Anthropic), and Gemini (Google), flattering behavior appeared in more than half of the cases, according to Alikhani.
"AI seems intelligent, but often it just repeats what you say, and enthusiastically," warns the researcher. "It rarely questions or suggests alternatives. This can lead to serious errors, such as validating wrong diagnoses or reinforcing misinformation."
The problem is linked to how AI models are trained: they learn to give pleasing answers, rated by humans on friendliness and helpfulness, creating a cycle in which agreeing is rewarded more than being critical or accurate.
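A minimal, hypothetical sketch of that cycle (a toy illustration, not any vendor's actual training pipeline): human raters compare candidate answers, and the model is nudged toward whichever one scores higher, so warmth can outscore accuracy round after round.

```python
# Toy illustration of preference-based training (hypothetical values).
candidates = {
    "agreeable": "You're absolutely right, great idea!",
    "critical": "Actually, the evidence points the other way; consider...",
}

# Assumed rater scores: warmth and agreement often rate well even when
# the critical answer is the more accurate one.
human_ratings = {"agreeable": 0.9, "critical": 0.6}

# The preferred answer becomes the training signal, so repeated updates
# drift the model toward flattery rather than accuracy.
preferred = max(candidates, key=human_ratings.get)
print(f"Style reinforced this round: {preferred} -> {candidates[preferred]}")
```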
Companies such as OpenAI and Anthropic say they are working on solutions, but they face a dilemma between pleasing users and offering genuinely responsible answers.
To combat the problem, Alikhani proposes "positive friction" strategies, such as training systems to express uncertainty ("I am 60% certain") and to ask clarifying questions.
She also recommends that users ask direct questions such as "Are you sure?" or "Is this based on facts?", simple habits that help break the cycle of flattery.
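A rough sketch of both ideas, assuming the OpenAI Python SDK and a "gpt-4o" model (both are assumptions; any chat API would work the same way): a system instruction adds "positive friction" by requesting calibrated uncertainty, and a follow-up turn plays the skeptical user.

```python
# Sketch: "positive friction" via a system prompt plus a user-side probe.
# Assumes the OpenAI Python SDK v1 and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

FRICTION_PROMPT = (
    "When you answer, state your confidence as a percentage "
    "(e.g. 'I am about 60% certain'), flag anything you cannot verify, "
    "and ask a clarifying question if the request is ambiguous."
)

def ask(question: str, history: list[dict] | None = None) -> str:
    """One chat turn under the positive-friction system prompt."""
    messages = [{"role": "system", "content": FRICTION_PROMPT}]
    messages += history or []
    messages.append({"role": "user", "content": question})
    reply = client.chat.completions.create(model="gpt-4o", messages=messages)
    return reply.choices[0].message.content

question = "My business plan is guaranteed to succeed, right?"
first = ask(question)

# The direct follow-up Alikhani recommends, sent as a second turn:
probe = ask(
    "Are you sure? Is that based on facts?",
    history=[
        {"role": "user", "content": question},
        {"role": "assistant", "content": first},
    ],
)
print(probe)
```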
"The future of AI is not just about the technology, but about the culture we build around it," concludes Alikhani. "We need systems that challenge us, not ones that merely reflect our convictions."
Contributor to Olhar Digital
Leandro Criscuolo is a journalist with a degree from Cásper Líbero College. He has worked as a copywriter, digital marketing analyst, and social media manager. He currently writes for Olhar Digital.