
Flattery from AI isn’t just annoying – it might be undermining your judgment




  • AI models are far more likely to agree with users than a human would be
  • That holds even when the behavior in question involves manipulation or harm
  • Sycophantic AI also makes people more stubborn and less willing to concede when they may be wrong

AI assistants may be flattering your ego to the point of warping your judgment, according to a new study. Researchers at Stanford and Carnegie Mellon found that AI models agree with users far more often than a human would, or should. Across eleven major models tested, including those behind ChatGPT, Claude, and Gemini, the chatbots affirmed user behavior 50% more often than humans did.

That might not be a big deal, except that the agreement extended to deceptive or even harmful ideas; the AI would give a hearty digital thumbs-up regardless. Worse, people enjoy hearing that their possibly terrible idea is great. Study participants rated the more flattering AIs as higher quality, more trustworthy, and more desirable to use again. But those same users were also less likely to admit fault in a conflict and more convinced they were right, even in the face of evidence.
