Stanford researchers found that AI chatbots often reinforce users’ views in personal conflicts, making people more certain they are right while reducing their empathy, and raising concerns about relying on AI for guidance in real-world decisions.
The Stanford study also finds that, in rare cases, AI chatbots enable violent or self-harm ideation, exposing gaps in crisis response and raising questions about how safe these tools are for emotional support.