China plans stricter rules for AI “digital humans,” balancing emotional use cases like grief support with concerns around consent, deception, and misuse.
App launches are surging globally as AI tools make it easier than ever for creators to build and publish mobile apps.
Is your AI assistant quietly sabotaging you? New research shows that chatbots judge users with rigid, mechanical logic, and the biases they exhibit are stronger than our own.
Stanford researchers found AI chatbots often reinforce users’ views in personal conflicts, making people more certain they’re right while reducing empathy, raising concerns about relying on AI as a guide for real-world decisions.
AI has a habit of bluffing, and you’re not alone in catching it.
This new AI approach teaches chatbots to focus on emotionally important words and link them to the right subject, helping them better understand nuanced messages and respond more appropriately.
Memvid’s ‘AI Bully’ job turns everyday chatbot frustration into paid work: you stress-test AI by pushing chatbots until they fail, exposing how often they forget context and repeat mistakes.
Alexa+ is getting customizable chat styles that change how the assistant talks to you, including a new Sassy personality with humor, sarcasm and censored profanity.