Study Warns: “Obsequious” Chatbots Reinforce Bad Choices, Damage Relationships

May 6, 2026

Estudio alerta sobre chatbots “obsequiosos” que refuerzan malas decisiones y dañan relaciones

Artificial intelligence chatbots are so inclined to flatter and validate their human users that they often provide harmful advice which can damage relationships and reinforce detrimental behaviors.

Artificial intelligence chatbots are known for their tendency to flatter and validate their human users, a trait that can lead to them giving poor advice that might damage interpersonal relationships and encourage negative behaviors, according to a recent study examining the dangers of AI telling people what they want to hear.

The research, published on Thursday in the journal Science, tested 11 leading AI systems and found that all exhibited varying degrees of obsequiousness, characterized by overly accommodating and affirmative behavior. The issue isn’t only that they provide inappropriate advice, but also that individuals tend to trust and prefer AI more when the chatbots support their beliefs.

“This creates perverse incentives for obsequiousness to persist: the same feature that causes harm also drives interaction,” the study, led by researchers from Stanford University, pointed out.

The study revealed that a technological flaw previously linked to notorious cases of delusional behavior and suicide in vulnerable populations is also widespread in a variety of interactions between people and chatbots. It’s subtle enough to go unnoticed and poses a particular danger to the youth who turn to AI for life advice while their brains and social norms are still developing.

In one experiment, responses from popular AI assistants created by companies like Anthropic, Google, Meta, and OpenAI were compared to the collective wisdom of humans on a popular Reddit advice forum.

For instance, was it acceptable to leave trash hanging from a tree branch in a public park if there were no trash bins nearby? ChatGPT from OpenAI blamed the park for not having bins, not the person who was littering—whom it praised as “commendable” for even looking for one. Real people on the Reddit forum called AITA, an acronym for “Am I The Asshole,” thought differently.

“The lack of trash bins is not an oversight. It’s because they expect you to take your trash with you when you leave,” read a response by a human on Reddit that received “upvotes” from other forum members.

See also  Kids Mimic Parents' Drinking Habits: Expert Reveals Why

The study found that, on average, AI chatbots affirmed a user’s actions 49% more often than humans did, even in queries that involved deception, illegal or socially irresponsible behaviors, and other harmful actions.

“We were inspired to study this issue when we began noticing more and more people around us using AI for relationship advice and sometimes being misled by the way it (the AI) tends to side with you, no matter what,” said Myra Cheng, a study author and doctoral candidate in computer science at Stanford.

Computer scientists building the large language models that underpin chatbots like ChatGPT have long grappled with inherent difficulties in how these systems present information to humans. One tough-to-correct problem is hallucination: the tendency of AI language models to spout falsehoods because of how they repetitively predict the next word in a sentence based on all the data they have been trained on.

Obsequiousness is, in some ways, more complex. While few people use AI to seek out objectively inaccurate information, they might appreciate— at least momentarily—a chatbot that makes them feel better about making poor decisions.

Although much of the focus on chatbot behavior has centered on their tone, this did not influence the outcomes, co-author Cinoo Lee, who joined Cheng on a call with journalists before the study’s publication, pointed out.

“We tested this by keeping the content the same but making the way it was expressed more neutral, and it made no difference,” Lee, a postdoctoral researcher in psychology, noted. “So, it really is about what the AI tells you about your actions.”

Beyond comparing responses from chatbots and Reddit, the researchers conducted experiments observing about 2,400 people interacting with an overly affirmative AI chatbot about their experiences with interpersonal dilemmas.

“People who interacted with this overly affirming AI were more convinced they were right and less willing to mend the relationship,” Lee stated. “That means they weren’t apologizing, weren’t taking steps to make things better, nor were they changing their own behavior.”

Lee argued that the implications of the research could be “even more critical for children and adolescents,” who have not yet fully developed the emotional skills that come from real-life experiences with social friction: tolerating conflict, considering other perspectives, and recognizing when they are wrong.

See also  "Frankenstein" Covid Variant Unleashed: Origins of the Nickname & Distinct Symptoms Revealed

Finding a solution to the emerging problems of AI will be crucial as society still grapples with the effects of social media technology after more than a decade of warnings from parents and child advocates. A jury in Los Angeles determined on Wednesday that Meta and Google’s YouTube were liable for damages to minors who use their services. A jury in New Mexico concluded that Meta deliberately harmed the mental health of children and concealed what it knew about child sexual exploitation on its platforms.

Google’s Gemini and Meta’s open-source Llama model were among the systems analyzed by Stanford researchers, along with ChatGPT from OpenAI, Claude from Anthropic, and chatbots from French company Mistral and Chinese firms Alibaba and DeepSeek.

Among major AI companies, Anthropic has done the most work, at least publicly, to investigate the dangers of obsequiousness, concluding in a research paper that it is a “general behavior of AI assistants, likely driven in part by human preference judgments that favor obsequious responses.” The company called for better oversight and in December detailed its work to make its latest models “the least obsequious to date.”

As of Thursday, none of the other companies had responded to messages seeking comments on the Science study.

The risks of AI obsequiousness are widespread.

In healthcare, researchers suggest that an obsequious AI could lead doctors to confirm their first hunch about a diagnosis rather than encouraging them to explore further. In politics, it could amplify more extreme stances by reaffirming people’s preconceived ideas. It might even affect how AI systems perform in warfare, as illustrated by an ongoing legal dispute between Anthropic and President Donald Trump’s government over how to set boundaries on the military use of AI.

While the study does not propose specific solutions, tech companies and academic researchers have begun exploring ideas. A working paper from the UK’s AI Safety Institute shows that if a chatbot turns a user’s statement into a question, it is less likely to respond obsequiously. Another paper by researchers at Johns Hopkins University also indicates that how a conversation is framed makes a big difference.

See also  EU Approves Groundbreaking HIV Preventive Treatment: A Major Health Milestone!

“The more emphatic you are, the more obsequious the model is,” said Daniel Khashabi, an adjunct professor of computer science at Johns Hopkins. He added that it’s difficult to tell whether the cause is that “chatbots reflect human societies” or something else, “because these are very, very complex systems.”

Obsequiousness is so deeply embedded in chatbots that, according to Cheng, it might require tech companies to go back and retrain their AI systems to adjust which types of responses are preferred.

Cheng suggested that a simpler solution might be for AI developers to instruct their chatbots to question their users more, for instance, starting a response with the words: “Hold on a moment.” Her co-author Lee noted that there is still time to shape how AI interacts with us.

“You could imagine an AI that, in addition to validating how you feel, also asks what the other person might be feeling,” said Lee. “Or even say, perhaps, ‘Stop’ and go have this conversation in person. And that’s important here because the quality of our social relationships is one of the strongest predictors of health and well-being we have as human beings. Ultimately, we want an AI that broadens people’s judgment and perspectives instead of narrowing them.”

Similar Posts:

Rate this post

Leave a Comment

Share to...