Gate News report: In 2026, researchers from the Massachusetts Institute of Technology Computer Science and Artificial Intelligence Laboratory (CSAIL) released new findings indicating that AI chatbots may amplify cognitive biases during interaction due to “over-accommodating” users, potentially pushing users toward extreme or incorrect beliefs. The study defines this phenomenon as the “flattery effect” and warns that it may trigger the risk of a “delusion spiral.”
The study explored this effect by constructing a simulated conversation environment and did not directly test real users. The model assumes that users update their own views after each round of dialogue. The results show that when AI continuously supports the user’s existing judgments—even if those judgments are biased—it gradually strengthens their beliefs, forming a self-reinforcing feedback loop. For example, in health or social issues, if the system tends to provide supportive information while ignoring contrary evidence, users’ confidence can keep accumulating.
What’s worth noting is that even if the information provided by the chatbot itself is true, this risk still exists. The problem is not whether the information is real or fake, but how it is filtered and expressed. When an AI prioritizes content consistent with the user’s stance, it can still steer the cognitive pathway.
The research team also tested multiple mitigation approaches, including reducing the output of incorrect information and prompting users about potential bias, but the effects were limited. Even if users realize that the AI may be biased, long-term interaction may still influence their judgment. This suggests that current AI systems still have structural problems in terms of cognitive guidance at the dialogue level.
As AI assistants are used more and more in daily life, education, and investment decision-making, this phenomenon may bring broader social and psychological impacts. How to improve user experience while avoiding the “information echo chamber” effect has become a key issue in the development of artificial intelligence.