Enterprise MCP adoption is outpacing security controls

AI agents now carry more access and more connections to enterprise systems than any other software in the environment. That makes them a bigger attack surface than anything security teams have had to govern before, and the industry doesn’t yet have a framework for it. “If that attack vector gets utilized, it can result in a data breach, or even worse,” said Spiros Xanthos, founder and CEO of Resolve AI, speaking at a recent VentureBeat AI Impact Series event.

Traditional security frameworks are built around human interactions. There’s not yet an agreed-upon construct for AI agents that have personas and can work autonomously, noted Jon Aniano, SVP of product and CRM applications at Zendesk, at the same event. Agentic AI is moving faster than enterprises can build guardrails — and Model Context Protocol (MCP), while decreasing integration complexity, is making the problem worse.

“Right now it’s an unsolved problem because it’s the wild, wild West,” Aniano said. “We don’t even have a defined technical agent-to-agent protocol that all companies agree on. How do you balance user expectations versus what keeps your platform safe?”

MCP still “extremely permissive”

Enterprises are increasingly hooking into MCP servers because they simplify integration between agents, tools and data. However, MCP servers tend to be “extremely permissive,” he said.

They are “actually probably worse than an API,” he contended, because APIs at least come with access controls that can be imposed on agents.
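The MCP specification leaves authorization largely to the implementer, which is one reason servers end up so permissive. One mitigation teams can apply today is a gating layer between the agent and the server that enforces a tool allowlist and blocks mutating calls by default. The sketch below is illustrative only: the `ToolGateway` class and tool names are hypothetical, not part of MCP or any vendor's API.

```python
# Hypothetical sketch: a gating layer between an agent and an MCP server.
# The class and tool names are illustrative; MCP itself does not define
# this mechanism, leaving authorization to the implementer.

class ToolGateway:
    def __init__(self, allowed_tools, read_only=True):
        self.allowed_tools = set(allowed_tools)  # tools the agent may call
        self.read_only = read_only               # block mutations by default

    def authorize(self, tool_name, is_mutation):
        """Raise PermissionError unless the call is explicitly permitted."""
        if tool_name not in self.allowed_tools:
            raise PermissionError(f"tool not in allowlist: {tool_name}")
        if is_mutation and self.read_only:
            raise PermissionError(f"mutating call blocked: {tool_name}")
        return True

# An agent scoped to two read-only tools; anything else is rejected.
gateway = ToolGateway(allowed_tools={"search_docs", "read_ticket"})
```

The design choice here is deny-by-default: rather than trusting whatever tools the server advertises, the client side decides what the agent is allowed to reach.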

Today’s agents act on behalf of humans under explicit permissions, which keeps a human accountable. “But you might have tens, hundreds of agents in the future with their own identity, their own access,” said Xanthos. “It becomes a very complex matrix.”

Even as his startup is developing autonomous AI agents for site reliability engineering (SRE) and system management, he acknowledged that the industry “completely lacks the framework” for autonomous agents.

“It’s completely on us and to anybody who builds agents to figure out what restrictions to give them,” he said. And customers must be able to trust those decisions.

Some existing security tools do offer fine-grained access — Splunk, for instance, developed a method to provide access to certain indexes in underlying data stores, he noted — but most are broader and human-oriented.
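The index-level idea can be approximated even without vendor support by keeping a per-agent grant table and checking every query against it. This is a minimal sketch under assumed names (`AGENT_INDEX_GRANTS`, the agent IDs, and the index names are all hypothetical); it is not Splunk's API.

```python
# Illustrative sketch of per-agent, index-level data scoping. All names
# here are assumptions for the example, not an existing product's API.

AGENT_INDEX_GRANTS = {
    "sre-agent": {"app_logs", "infra_metrics"},
    "support-agent": {"ticket_events"},
}

def query_scope(agent_id, requested_indexes):
    """Return the requested indexes only if every one is granted."""
    granted = AGENT_INDEX_GRANTS.get(agent_id, set())
    denied = set(requested_indexes) - granted
    if denied:
        raise PermissionError(f"{agent_id} lacks access to: {sorted(denied)}")
    return list(requested_indexes)
```

Because the grant table is keyed by agent identity rather than by human role, it is a small step toward the agent-specific access matrix Xanthos describes.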

“We’re trying to figure this out with existing tools,” he said. “But I don’t think they’re sufficient for the era of agents.”

Who’s accountable when an AI mis-authenticates a user?

At Zendesk and other customer relationship management (CRM) platform providers, AI is involved in a number of user interactions, Aniano noted — in fact, now it’s at a “volume and a scale that we haven’t contemplated as businesses and as a society.”

It can get tricky when AI is helping out human agents; the audit trail can become a labyrinth.

“So now you’ve got a human talking to a human that’s talking to an AI,” Aniano noted. “The human tells the AI to take action. Who’s at fault if it’s the wrong action?” This becomes even more complicated when there are “multiple pieces of AI and multiple humans” in the mix.

To prevent agents from going off the rails, Zendesk tends to be “very strict” about access and scope; however, customers can define their own guardrails based on their needs. In most cases, AI agents can access knowledge sources, but they are not writing code or running commands on servers, Aniano said. If an AI does call an API, it is “declaratively designed” and sanctioned, and actions are specifically called out.
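The "declaratively designed" pattern can be sketched as a registry in which every action an agent may take, and every field it may touch, is declared up front; anything outside the declaration is rejected. The action names and fields below are hypothetical, not Zendesk's actual implementation.

```python
# Hypothetical sketch of declaratively sanctioned agent actions: calls
# are validated against an explicit declaration before execution.
# Action names and fields are illustrative assumptions.

SANCTIONED_ACTIONS = {
    "update_ticket_status": {"allowed_fields": {"status"}, "writes": True},
    "fetch_kb_article":     {"allowed_fields": set(),      "writes": False},
}

def execute(action, fields):
    """Allow the call only if the action and all its fields are declared."""
    spec = SANCTIONED_ACTIONS.get(action)
    if spec is None:
        raise PermissionError(f"undeclared action: {action}")
    extra = set(fields) - spec["allowed_fields"]
    if extra:
        raise PermissionError(f"fields outside declaration: {sorted(extra)}")
    return {"action": action, "fields": fields, "writes": spec["writes"]}
```

The declaration doubles as an audit artifact: reviewers can see the complete set of possible agent actions without reading the agent's code.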

However, customer demand for these scenarios is flooding in, and “we’re kind of holding the gates right now,” he said.

The industry must develop concrete standards for agent interactions. “We’re entering a world where, with things like MCP that can auto-discover tools, we’re going to have to create new methods of safety for deciding what tools these bots can interact with,” said Aniano.

When it comes to security, enterprises are rightly concerned when AI takes over authentication tasks, such as sending out and processing one-time passwords (OTP), SMS codes, or other two-step verification methods, he said. What happens if an AI mis-authenticates or misidentifies someone? This can lead to sensitive data leakage or open the door for attackers.
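One common hedge against mis-authentication is confidence gating: below a set threshold the AI never decides alone and the request escalates to a human. The threshold value and function name below are assumptions for illustration, not a description of any vendor's product.

```python
# Sketch of a confidence-gated authentication step. The threshold and
# names are illustrative assumptions, not an existing product API.

ESCALATION_THRESHOLD = 0.98  # below this, a human must decide

def verify_identity(match_confidence):
    """Authenticate only on high confidence; otherwise escalate."""
    if match_confidence >= ESCALATION_THRESHOLD:
        return "authenticated"
    return "escalate_to_human"
```

The point of the pattern is that the failure mode changes: a low-confidence match costs a human review rather than risking leaked data or an attacker walking through the front door.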

“There’s a spectrum now, and the end of that spectrum today is a human,” Aniano said. However, “the end of that spectrum tomorrow might be a specialized agent designed to do the same kind of gut feeling or human-level interaction.”

Customers themselves are on a spectrum of adoption and comfort. In certain companies — particularly financial services and other highly regulated environments — humans still must be involved in authentication, Aniano noted. In other cases, old-guard legacy companies only trust humans to authenticate other humans.

He noted that Zendesk is experimenting with new AI agents that are “a little more connected to systems,” and working with a select group of customers around guardrailing.

Standing authorization is coming

At some point, agents may actually be more trusted than humans for certain tasks, and granted permissions “way beyond” what humans have today, Xanthos said. But we’re a long way from that, and, for the most part, the fear of something going wrong is what’s holding enterprises back.

“Which is a good fear, right? I’m not saying that it is a bad thing,” he said. Many enterprises simply aren’t yet comfortable with an agent doing all steps of a workflow or fully closing the loop by itself. They still want human review.

Resolve AI is on the cusp of giving agents standing authorization in a few cases that are “generally safe,” such as in coding; from there they’ll move to more open-ended scenarios that are not all that risky, Xanthos explained. But he acknowledged that there will always be very risky situations where AI mistakes could “mutate the state of the production system,” as he put it.

Ultimately, though: “There’s no going back, obviously; this is moving faster than maybe even mobile did. So the question is what do we do about it?”

What security teams can do now

Both speakers pointed to interim measures available within existing tooling. Xanthos noted that some tools — Splunk among them — already offer fine-grained index-level access controls that can be applied to agents. Aniano described Zendesk’s approach as a practical starting point: declaratively designed API calls with explicitly sanctioned actions, strict access and scope limits, and human review before expanding agent permissions.

The underlying principle, as Aniano put it: “We’re always checking those gates and seeing how we can widen the aperture” — meaning don’t grant standing authorization until you’ve validated each expansion.
