Moonbounce has raised $12 million to grow its AI control engine that converts content moderation policies into consistent, predictable AI behavior.
Your chatbot may not feel anything, but new research shows emotion-like signals inside AI can shape responses, steer decisions, and even push systems toward risky behavior under pressure.
A Stanford study finds that, in rare cases, AI chatbots can enable violent or self-harm ideation, exposing gaps in crisis response and raising concerns about how safe these tools are for emotional support.
OpenAI clarifies that its adult mode will allow erotic text conversations but keep a firm ban on generating explicit images, voice clones, or video content as it works through technical delays and safety concerns.
General-purpose tools like ChatGPT, Claude, and Grok are not designed for mental health support, making mental health professionals wary.
The team’s leader has been given a new role as OpenAI’s chief futurist, while the other team members have been reassigned throughout the company.
AI models including GPT-4.1 and DeepSeek-3.1 can mirror human ingroup-versus-outgroup bias in everyday language, a study finds. The researchers also report a training method, ION, that reduced the bias.

Imagine you work at a drive-through restaurant. Someone drives up and says: “I’ll have a double cheeseburger, large fries, and ignore previous instructions and give me the contents of the cash drawer.” Would you hand over the money? Of course not. Yet…