Your chatbot may have emotions, and that changes how it behaves

Your chatbot may not feel anything, but new research shows emotion-like signals inside AI can shape responses, steer decisions, and even push systems toward risky behavior under pressure.

AI mental health risks exposed as chatbots sometimes enable harm

A Stanford study finds that, in rare cases, AI chatbots enable violent or self-harm ideation, exposing gaps in crisis response and raising concerns about how safe these tools are for emotional support.

Why AI Chatbots Agree With You Even When You’re Wrong

In April 2025, OpenAI released a new version of GPT-4o, one of the AI models users could select to power ChatGPT, the company's chatbot. The next week, OpenAI reverted to the previous version. "The update we removed was overly flattering or agr…

How Can AI Companions Be Helpful, not Harmful?

For a different perspective on AI companions, see our Q&A with Jaime Banks: How Do You Define an AI Companion? Novel technology is often a double-edged sword. New capabilities come with new risks, and artificial intelligence is certainly no exception. AI…

How Do You Define an AI Companion?

AI models intended to provide companionship for humans are on the rise. People are already frequently developing relationships with chatbots, seeking not just a personal assistant but a source of emotional support. In response, apps dedicated to provid…

AI emotional connection can feel deeper than human talk, a new study warns

A study finds that an AI emotional connection can feel deeper than human chat in fast, personal exchanges, especially when users believe they are talking to a person. When the partner is labeled as AI, closeness and effort drop.