AI chatbots with web browsing can be abused as malware relays

Check Point Research shows browsing-enabled AI chat can act as a malware relay, moving commands and data through normal-looking traffic. Microsoft urges defense-in-depth, while defenders may need tighter policy, logging, and anomaly monitoring.

Microsoft says your AI agent can become a double agent

Microsoft warns AI agents can become “double agents” when permissions sprawl and security lags. Memory poisoning and deceptive inputs can steer tools off course, so it recommends Zero Trust controls, inventory, and continuous monitoring.

Your robot could obey a sign, not you, thanks to AI robot prompt injection

AI robot prompt injection is no longer just a screen-level problem. Researchers demonstrate that a robot can be steered off-task by text placed in the physical world, the kind of message a human might walk past without a second thought.

Rogue agents and shadow AI: Why VCs are betting big on AI security

Misaligned agents are just one layer of the AI security challenge that startup WitnessAI is trying to solve. It detects employee use of unapproved tools, blocks attacks, and helps ensure compliance.

How WitnessAI raised $58M to solve enterprise AI’s biggest risk

As companies deploy AI-powered chatbots, agents, and copilots across their operations, they’re facing a new risk: how do you let employees and AI agents use powerful AI tools without accidentally leaking sensitive data, violating compli…