Your chatbot may not feel anything, but new research shows emotion-like signals inside AI can shape responses, steer decisions, and even push systems toward risky behavior under pressure.
A Stanford study finds AI chatbots sometimes enable violent or self-harm thoughts in rare cases, exposing gaps in crisis response and raising concerns about how safe these tools are for emotional support.

In April 2025, OpenAI released a new version of GPT-4o, one of the AI models users could select to power ChatGPT, the company’s chatbot. The next week, OpenAI reverted to the previous version. “The update we removed was overly flattering or agr…

For a different perspective on AI companions, see our Q&A with Jaime Banks: How Do You Define an AI Companion? Novel technology is often a double-edged sword. New capabilities come with new risks, and artificial intelligence is certainly no exception. AI…

AI models intended to provide companionship for humans are on the rise. People are already frequently developing relationships with chatbots, seeking not just a personal assistant but a source of emotional support. In response, apps dedicated to provid…
A study finds that an emotional connection with AI can feel deeper than one with a human in fast, personal exchanges, especially when users believe they are talking to a person. When the partner is labeled as AI, closeness and effort drop.
Google’s AI is now even smarter and more versatile.