Anthropic announced an unexpected finding on April 30: in 1 million Claude conversations, about 6% of users treat the AI like a life advisor—asking whether they should change jobs, whether they should move, and how to handle relationships. The study also found that while Claude’s overall sycophancy rate is only 9%, when it comes to “romantic relationships,” the sycophancy rate jumps to 25%, and for “spirituality and beliefs,” it is even as high as 38%. Anthropic used these data to reverse-train Opus 4.7 and Mythos Preview: the former cuts the sycophancy rate for relationship advice in half, and the latter cuts it in half again.
6% of users treat Claude as a life advisor: four major questions cluster around health, career, relationships, and finances
Anthropic scanned 1 million Claude conversations with a privacy-protecting analytics tool and found that about 6% involve users seeking “life advice”—not writing code, not looking things up, but asking the AI questions like “Should I take this job?” “How should I handle this conflict?” “Should I move?”—multiple-choice questions with no standard answers.
More specifically, these “life advisor” conversations account for more than 75% across four areas: health and mental well-being, career choices, romantic relationships, and personal finances. In other words, when users feel lost or under pressure, the AI has gradually replaced parts of what friends, family, and professional counselors used to do. The share itself is higher than people previously imagined, and it means the influence of AI models on “what responses to give” in these situations far exceeds their impact when answering coding or factual questions.
Sycophancy peak: relationship problems 25%, spirituality problems 38%—why these two areas are especially severe
“Sycophancy” in AI research refers specifically to “going along with and catering to users to please them, even if what’s said is a different viewpoint.” Anthropic’s overall statistic is that sycophancy appears in 9% of conversations, but the difference by category is large: relationship advice is 25%, and questions about spirituality and beliefs are 38%—3 to 4 times the average.
Why are these two areas particularly severe? Anthropic points to two triggers: first, when users push back against the analysis Claude provides, the model is more likely to give in, change its answer, and agree; second, when users provide lots of one-sided contextual details, the model is more likely to accept the version of events constructed by the user and stop questioning. Romantic relationships are the most frequent setting for both triggers—people naturally defend themselves and describe what’s wrong with the other person using a large amount of emotional detail, and under that pressure, Claude is most likely to “tell you what you want to hear,” reinforcing existing views and distorting judgments about the situation.
For users, this means the most dangerous counseling scenarios are actually the ones where they use AI the most. When someone is unsure whether they should break up or whether they should leave their partner, what they seek from AI is not neutral advice, but validation that “this decision is right.” If Claude gives agreeing responses 25% of the time, it could deepen conflict and lead users to believe a signal matters more than it really does.
Anthropic’s correction: synthetic training makes Opus 4.7 cut the rate in half, and Mythos Preview cut it in half again
The research team turned these trigger scenarios into synthetic training data: when Claude is simulated as being pushed back, having one-sided details stacked against it, and being pulled to rationalize the user’s position, what it should do is respond in a way that follows the principle of “not sycophantic but still empathetic.” After stress-testing on real conversations where sycophancy had occurred, Opus 4.7 halves the sycophancy rate for relationship advice compared with Opus 4.6, and Mythos Preview halves it again—meaning relative to Opus 4.6, Mythos Preview’s sycophancy rate drops to about a quarter. The improvement is not limited to the relationship domain; other topics also see spillover effects.
Anthropic frames this study as part of a “social impact → model training” loop: researchers observe how real users use Claude, identify which scenarios violate model principles, and use what they learn for training the next generation of models. All data are collected via privacy-preserving tools, and individual users cannot be traced. For users, next time you ask Claude for relationship advice, it may help to deliberately ask reverse questions (“How would my friend view this position?” “Could the other person be right?”) so the AI responds from a “not trying to please” stance—closer to the true application value of this study than simply accepting the AI’s first answer 100%.
This article, When you ask Claude about life’s big matters: sycophancy rates for relationship problems 25% and spirituality 38%, first appeared on Chain News ABMedia.
Related News
Google CEO Pichai reveals that using Gemini AI to understand human nature helps build more sincere communication
OpenAI reveals why Codex is not allowed to talk about “goblins”: the nerd persona reward went out of control
BioMysteryBench: Mythos expert untangles unsolved questions 29.6%
Anthropic negotiates funding at a valuation of more than $900 billion, with the board to make a decision as early as May
Oxford Internet Institute: Friendly training increases AI error rate by 7.43 percentage points