Salesforce on Wednesday unveiled the most ambitious architectural transformation in its 27-year history, introducing “Headless 360” — a sweeping initiative that exposes every capability in its platform as an API, MCP tool, or CLI command so AI agents can operate the entire system without ever opening a browser.
The announcement, made at the company’s annual TDX developer conference in San Francisco, ships more than 100 new tools and skills immediately available to developers. It marks a decisive response to the existential question hanging over enterprise software: In a world where AI agents can reason, plan, and execute, does a company still need a CRM with a graphical interface?
Salesforce’s answer: No — and that’s exactly the point.
“We made a decision two and a half years ago: Rebuild Salesforce for agents,” the company said in its announcement. “Instead of burying capabilities behind a UI, expose them so the entire platform will be programmable and accessible from anywhere.”
The timing is anything but coincidental. Salesforce finds itself navigating one of the most turbulent periods in enterprise software history — a sector-wide sell-off that has pushed the iShares Expanded Tech-Software Sector ETF down roughly 28% from its September peak. The fear driving the decline: that AI, particularly large language models from Anthropic, OpenAI, and others, could render traditional SaaS business models obsolete.
Jayesh Govindarjan, EVP of Salesforce and one of the key architects behind the Headless 360 initiative, described the announcement as rooted not in marketing theory but in hard-won lessons from deploying agents with thousands of enterprise customers.
“The problem that emerged is the lifecycle of building an agentic system for every one of our customers on any stack, whether it’s ours or somebody else’s,” Govindarjan told VentureBeat in an exclusive interview. “The challenge that they face is very much the software development challenge. How do I build an agent? That’s only step one.”
Salesforce Headless 360 rests on three pillars that collectively represent the company’s attempt to redefine what an enterprise platform looks like in the agentic era.
The first pillar — build any way you want — delivers more than 60 new MCP (Model Context Protocol) tools and 30-plus preconfigured coding skills that give external coding agents like Claude Code, Cursor, Codex, and Windsurf complete, live access to a customer’s entire Salesforce org, including data, workflows, and business logic. Developers no longer need to work inside Salesforce’s own IDE. They can direct AI coding agents from any terminal to build, deploy, and manage Salesforce applications.
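The article does not publish the actual tool list, but the interaction pattern it describes, an external coding agent invoking named platform capabilities through MCP-style tools instead of driving a UI, can be sketched with a toy registry. All tool names, payloads, and return values below are hypothetical illustrations, not Salesforce's real MCP surface.

```python
# Toy sketch of the MCP-style tool pattern: an agent calls named
# capabilities rather than clicking through a UI. Tool names and
# payloads are invented for illustration.
from typing import Any, Callable, Dict

class ToolRegistry:
    """Maps tool names to handlers, the way an MCP server exposes tools."""
    def __init__(self) -> None:
        self._tools: Dict[str, Callable[..., Any]] = {}

    def tool(self, name: str):
        def register(fn: Callable[..., Any]) -> Callable[..., Any]:
            self._tools[name] = fn
            return fn
        return register

    def call(self, name: str, **kwargs: Any) -> Any:
        if name not in self._tools:
            raise KeyError(f"unknown tool: {name}")
        return self._tools[name](**kwargs)

registry = ToolRegistry()

@registry.tool("query_records")          # hypothetical tool name
def query_records(soql: str) -> list:
    # A real server would run the SOQL against the live org; stubbed here.
    return [{"Id": "001xx0000000001", "Name": "Acme"}]

@registry.tool("deploy_metadata")        # hypothetical tool name
def deploy_metadata(component: str) -> dict:
    return {"component": component, "status": "Succeeded"}

# An agent plans a sequence of tool calls instead of opening a browser:
accounts = registry.call("query_records", soql="SELECT Id, Name FROM Account")
result = registry.call("deploy_metadata", component="PartnerPortal")
print(accounts[0]["Name"], result["status"])  # Acme Succeeded
```

The point of the pattern is that any client speaking the protocol, whether Claude Code, Cursor, or a plain script, can drive the same capabilities.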
Agentforce Vibes 2.0, the company’s own native development environment, now includes what it calls an “open agent harness” supporting both the Anthropic agent SDK and the OpenAI agents SDK. As demonstrated during the keynote, developers can choose between Claude Code and OpenAI agents depending on the task, with the harness dynamically adjusting available capabilities based on the selected agent. The environment also adds multi-model support, including Claude Sonnet and GPT-5, along with full org awareness from the start.
A significant technical addition is native React support on the Salesforce platform. During the keynote demo, presenters built a fully functional partner service application using React — not Salesforce’s own Lightning framework — that connected to org metadata via GraphQL while inheriting all platform security primitives. This opens up dramatically more expressive front-end possibilities for developers who want complete control over the visual layer.
The second pillar — deploy on any surface — centers on the new Agentforce Experience Layer, which separates what an agent does from how it appears, rendering rich interactive components natively across Slack, mobile apps, Microsoft Teams, ChatGPT, Claude, Gemini, and any client supporting MCP apps. During the keynote, presenters defined an experience once and deployed it across six different surfaces without writing surface-specific code. The philosophical shift is significant: rather than pulling customers into a Salesforce UI, enterprises push branded, interactive agent experiences into whatever workspace their customers already inhabit.
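The "define once, deploy anywhere" separation can be sketched in miniature: the agent emits an abstract component, and per-surface renderers decide how it appears. The surface names and markup choices below are illustrative assumptions, not the Agentforce Experience Layer's actual API.

```python
# Sketch of surface-independent rendering: one abstract component,
# multiple per-surface renderers. Surfaces and formats are illustrative.
from dataclasses import dataclass

@dataclass
class Card:
    title: str
    body: str

def render(card: Card, surface: str) -> str:
    if surface == "slack":
        return f"*{card.title}*\n{card.body}"       # Slack mrkdwn bold
    if surface == "teams":
        return f"**{card.title}**\n\n{card.body}"   # Markdown bold
    if surface == "plain":
        return f"{card.title}: {card.body}"
    raise ValueError(f"unsupported surface: {surface}")

card = Card(title="Order #1042", body="Shipped, arriving Friday.")
for surface in ("slack", "teams", "plain"):
    print(render(card, surface))
```

The experience (the `Card`) is defined once; adding a seventh surface means adding a renderer, not rewriting the agent.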
The third pillar — build agents you can trust at scale — introduces an entirely new suite of lifecycle management tools spanning testing, evaluation, experimentation, observation, and orchestration. Agent Script, the company’s new domain-specific language for defining agent behavior deterministically, is now generally available and open-sourced. A new Testing Center surfaces logic gaps and policy violations before deployment. Custom Scoring Evals let enterprises define what “good” looks like for their specific use case. And a new A/B Testing API enables running multiple agent versions against real traffic simultaneously.
Perhaps the most technically significant — and candid — portion of VentureBeat’s interview with Govindarjan addressed the fundamental engineering tension at the heart of enterprise AI: agents are probabilistic systems, but enterprises demand deterministic outcomes.
Govindarjan explained that early Agentforce customers, after getting agents into production through “sheer hard work,” discovered a painful reality. “They were afraid to make changes to these agents, because the whole system was brittle,” he said. “You make one change and you don’t know whether it’s going to work 100% of the time. All the testing you did needs to be redone.”
This brittleness problem drove the creation of Agent Script, which Govindarjan described as a programming language that “brings together the determinism that’s in programming languages with the inherent flexibility in probabilistic systems that LLMs provide.” The language functions as a single flat file — versionable, auditable — that defines a state machine governing how an agent behaves. Within that machine, enterprises specify which steps must follow explicit business logic and which can reason freely using LLM capabilities.
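Agent Script's actual syntax is not reproduced in the article, so the following is only a conceptual sketch, in plain Python, of the hybrid it describes: a versionable state machine in which some transitions follow explicit business rules and others defer to an LLM, stubbed here with a canned function.

```python
# Conceptual sketch of a state machine mixing deterministic steps with
# LLM-delegated steps. This is NOT Agent Script syntax; the LLM call is
# a deterministic stub standing in for a probabilistic model.
from typing import Callable, Dict

def llm_reason(prompt: str) -> str:
    # Stand-in for a real (probabilistic) LLM call.
    return "refund" if "damaged" in prompt else "escalate"

def start(ctx: dict) -> str:
    ctx["verified"] = ctx["order_id"].startswith("ORD-")  # hard rule
    return "triage" if ctx["verified"] else "reject"

def triage(ctx: dict) -> str:
    ctx["decision"] = llm_reason(ctx["complaint"])        # free reasoning
    return "done"

def reject(ctx: dict) -> str:
    ctx["decision"] = "invalid order id"
    return "done"

STATES: Dict[str, Callable[[dict], str]] = {
    "start": start, "triage": triage, "reject": reject,
}

def run(ctx: dict) -> dict:
    state = "start"
    while state != "done":
        state = STATES[state](ctx)
    return ctx

out = run({"order_id": "ORD-7", "complaint": "item arrived damaged"})
print(out["decision"])  # refund
```

Because the state graph lives in one flat, versionable definition, a change to one step can be diffed and re-tested in isolation, which is exactly the brittleness problem described above.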
Salesforce open-sourced Agent Script this week, and Govindarjan noted that Claude Code can already generate it natively because of its clean documentation. The approach stands in sharp contrast to the “vibe coding” movement gaining traction elsewhere in the industry. As the Wall Street Journal recently reported, some companies are now attempting to vibe-code entire CRM replacements — a trend Salesforce’s Headless 360 directly addresses by making its own platform the most agent-friendly substrate available.
Govindarjan described the tooling as a product of Salesforce’s own internal practice. “We needed these tools to make our customers successful. Then our FDEs needed them. We hardened them, and then we gave them to our customers,” he told VentureBeat. In other words, Salesforce productized its own pain.
Govindarjan drew a revealing distinction between two fundamentally different agentic architectures emerging in the enterprise — one for customer-facing interactions and one he linked to what he called the “Ralph Wiggum loop.”
Customer-facing agents — those deployed to interact with end customers for sales or service — demand tight deterministic control. “Before customers are willing to put these agents in front of their customers, they want to make sure that it follows a certain paradigm — a certain brand set of rules,” Govindarjan told VentureBeat. Agent Script encodes these as a static graph — a defined funnel of steps with LLM reasoning embedded within each step.
The “Ralph Wiggum loop,” by contrast, represents the opposite end of the spectrum: a dynamic graph that unrolls at runtime, where the agent autonomously decides its next step based on what it learned in the previous step, killing dead-end paths and spawning new ones until the task is complete. This architecture, Govindarjan said, manifests primarily in employee-facing scenarios — developers using coding agents, salespeople running deep research loops, marketers generating campaign materials — where an expert human reviews the output before it ships.
“Ralph Wiggum loops are great for employee-facing because employees are, in essence, experts at something,” Govindarjan explained. “Developers are experts at development, salespeople are experts at sales.”
The critical technical insight: both architectures run on the same underlying platform and the same graph engine. “This is a dynamic graph. This is a static graph,” he said. “It’s all a graph underneath.” That unified runtime — spanning the spectrum from tightly controlled customer interactions to free-form autonomous loops — may be Salesforce’s most important technical bet, sparing enterprises from maintaining separate platforms for different agent modalities.
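The article does not describe the engine's internals, but the "it's all a graph underneath" claim can be illustrated with one traversal loop serving both styles: a static graph fixes its edges up front, while a dynamic ("Ralph Wiggum") graph chooses its next node at runtime. Node names and the chooser heuristic are invented for illustration.

```python
# One traversal loop serving both graph styles. The static graph has
# fixed edges (customer-facing funnel); the dynamic graph picks its
# next node at runtime (autonomous employee-facing loop).
from typing import Callable, List, Optional

def traverse(start: str,
             next_node: Callable[[str, List[str]], Optional[str]],
             max_steps: int = 10) -> List[str]:
    path: List[str] = []
    node: Optional[str] = start
    for _ in range(max_steps):
        if node is None:
            break
        path.append(node)
        node = next_node(node, path)
    return path

# Static graph: edges are a fixed lookup.
STATIC_EDGES = {"greet": "qualify", "qualify": "resolve", "resolve": None}
static_path = traverse("greet", lambda n, _: STATIC_EDGES[n])

# Dynamic graph: the next node depends on what has happened so far;
# this toy heuristic stands in for the agent's own reasoning.
def choose(node: str, path: List[str]) -> Optional[str]:
    if node == "research":
        return "draft" if len(path) >= 2 else "research"
    return None if node == "draft" else "research"

dynamic_path = traverse("plan", choose)
print(static_path)   # ['greet', 'qualify', 'resolve']
print(dynamic_path)  # ['plan', 'research', 'draft']
```

Both runs share `traverse`; only the policy supplying the next node differs, which is the unified-runtime bet in miniature.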
Salesforce’s embrace of openness at TDX was striking. The platform now integrates with OpenAI, Anthropic, Google Gemini, Meta’s LLaMA, and Mistral AI models. The open agent harness supports third-party agent SDKs. MCP tools work from any coding environment. And the new AgentExchange marketplace unifies 10,000 Salesforce apps, 2,600-plus Slack apps, and 1,000-plus Agentforce agents, tools, and MCP servers from partners including Google, Docusign, and Notion, backed by a new $50 million AgentExchange Builders Initiative.
Yet Govindarjan offered a surprisingly candid assessment of MCP itself — the protocol Anthropic created that has become a de facto standard for agent-tool communication.
“To be very honest, not at all sure” that MCP will remain the standard, he told VentureBeat. “When MCP first came along as a protocol, a lot of us engineers felt that it was a wrapper on top of a really well-written CLI — which now it is. A lot of people are saying that maybe CLI is just as good, if not better.”
His approach: pragmatic flexibility. “We’re not wedded to one or the other. We just use the best, and often we will offer all three. We offer an API, we offer a CLI, we offer an MCP.” This hedging explains the “Headless 360” naming itself — rather than betting on a single protocol, Salesforce exposes every capability across all three access patterns, insulating itself against protocol shifts.
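The "offer all three" stance can be sketched around a single core function: the API is the function itself, the CLI is a thin wrapper with argument parsing, and the MCP exposure is a tool descriptor pointing at the same handler. Every name here is a hypothetical stand-in, not a real Salesforce endpoint.

```python
# One capability ("look up an account") exposed three ways. All names
# and data are invented for illustration.
import argparse
import json

# 1. API: the capability itself.
def get_account(account_id: str) -> dict:
    return {"id": account_id, "name": "Acme"}   # stubbed lookup

# 2. CLI: a thin wrapper with argument parsing.
def cli(argv: list) -> str:
    parser = argparse.ArgumentParser(prog="get-account")
    parser.add_argument("account_id")
    args = parser.parse_args(argv)
    return json.dumps(get_account(args.account_id))

# 3. MCP-style tool descriptor pointing at the same function.
TOOL = {
    "name": "get_account",
    "description": "Look up an account by id",
    "inputSchema": {"type": "object",
                    "properties": {"account_id": {"type": "string"}},
                    "required": ["account_id"]},
    "handler": get_account,
}

print(get_account("001"))
print(cli(["001"]))
print(TOOL["handler"]("001"))
```

Because all three surfaces delegate to one function, a protocol shift costs a wrapper, not a rewrite, which is the insulation the "Headless 360" naming implies.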
Engine, the B2B travel management company featured prominently in the keynote demos, offered a real-world proof point for the open ecosystem approach. The company built its customer service agent, Ava, in 12 days using Agentforce and now handles 50% of customer cases autonomously. Engine runs five agents across customer-facing and employee-facing functions, with Data 360 at the heart of its infrastructure and Slack as its primary workspace. “CSAT goes up, costs to deliver go down. Customers are happier. We’re getting them answers faster. What’s the trade off? There’s no trade off,” an Engine executive said during the keynote.
Underpinning all of it is a shift in how Salesforce gets paid. The company is moving from per-seat licensing to consumption-based pricing for Agentforce — a transition Govindarjan described as “a business model change and innovation for us.” It’s a tacit acknowledgment that when agents, not humans, are doing the work, charging per user no longer makes sense.
Govindarjan framed the company’s evolution in architectural terms. Salesforce has organized its platform around four layers: a system of context (Data 360), a system of work (Customer 360 apps), a system of agency (Agentforce), and a system of engagement (Slack and other surfaces). Headless 360 opens every layer via programmable endpoints.
“What you saw today, what we’re doing now, is we’re opening up every single layer, right, with MCP tools, so we can go build the agentic experiences that are needed,” Govindarjan told VentureBeat. “I think you’re seeing a company transforming itself.”
Whether that transformation succeeds will depend on execution across thousands of customer deployments, the staying power of MCP and related protocols, and the fundamental question of whether incumbent enterprise platforms can move fast enough to remain relevant when AI agents can increasingly build new systems from scratch. The software sector’s bear market, the financial pressures bearing down on the entire industry, and the breathtaking pace of LLM improvement all conspire to make this one of the highest-stakes bets in enterprise technology.
But there is an irony embedded in Salesforce’s predicament that Headless 360 makes explicit. The very AI capabilities that threaten to displace traditional software are the same capabilities that Salesforce now harnesses to rebuild itself. Every coding agent that could theoretically replace a CRM is now, through Headless 360, a coding agent that builds on top of one. The company is not arguing that agents won’t change the game. It’s arguing that decades of accumulated enterprise data, workflows, trust layers, and institutional logic give it something no coding agent can generate from a blank prompt.
As Benioff declared on CNBC’s Mad Money in March: “The software industry is still alive, well and growing.” Headless 360 is his company’s most forceful attempt to prove him right — by tearing down the walls of the very platform that made Salesforce famous and inviting every agent in the world to walk through the front door.
Parker Harris, Salesforce’s co-founder, captured the bet most succinctly in a question he posed last month: “Why should you ever log into Salesforce again?”
If Headless 360 works as designed, the answer is: You shouldn’t have to. And that, Salesforce is wagering, is precisely what will keep you paying for it.
The journey from a laboratory hypothesis to a pharmacy shelf is one of the most grueling marathons in modern industry, typically spanning 10 to 15 years and billions of dollars in investment.
Progress is often stymied not just by the inherent mysteries of biology, but by “fragmented and difficult to scale” workflows that force researchers to pivot manually between experimental equipment, software, and databases.
But OpenAI is releasing a new specialized model, GPT-Rosalind, designed to make this process faster, easier, and, ideally, more productive. Named after the pioneering chemist Rosalind Franklin, whose work was vital to the discovery of DNA’s structure and whose contributions were long overshadowed by those of her colleagues James Watson and Francis Crick, this new frontier reasoning model is purpose-built to act as a specialized intelligence layer for life sciences research.
By shifting AI’s role from a general-purpose assistant to a domain-specific “reasoning” partner, OpenAI is signaling a long-term commitment to biological and chemical discovery.
GPT-Rosalind isn’t just about faster text generation; it is designed to synthesize evidence, generate biological hypotheses, and plan experiments—tasks that have traditionally required years of expert human synthesis.
At its core, GPT-Rosalind is the first in a new series of models optimized for scientific workflows. While previous iterations of GPT excelled at general language tasks, this model is fine-tuned for deeper understanding across genomics, protein engineering, and chemistry.
To validate its capabilities, OpenAI tested the model against several industry benchmarks. On BixBench, a metric for real-world bioinformatics and data analysis, GPT-Rosalind achieved leading performance among models with published scores.
In more granular testing via LABBench2, the model outperformed GPT-5.4 on six out of eleven tasks, with the most significant gains appearing in CloningQA—a task requiring the end-to-end design of reagents for molecular cloning protocols.
The model’s most striking performance signal came from a partnership with Dyno Therapeutics. In an evaluation using unpublished, “uncontaminated” RNA sequences, GPT-Rosalind was tasked with sequence-to-function prediction and generation.
When evaluated directly in the Codex environment, the model’s submissions ranked above the 95th percentile of human experts on prediction tasks and reached the 84th percentile for sequence generation.
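Percentile claims like these are easy to misread; a percentile rank simply measures the share of the human-expert scores the model's submission exceeds. A minimal computation, using invented scores rather than Dyno Therapeutics data:

```python
# Minimal percentile-rank computation: the percentage of reference
# scores a submission strictly exceeds. Scores are invented.
def percentile_rank(score: float, reference: list) -> float:
    below = sum(1 for r in reference if r < score)
    return 100.0 * below / len(reference)

expert_scores = [0.41, 0.55, 0.58, 0.61, 0.63, 0.66, 0.70, 0.72, 0.75, 0.90]
model_score = 0.80

print(percentile_rank(model_score, expert_scores))  # 90.0
```

So "above the 95th percentile" means the model's submissions scored higher than at least 95% of the human-expert submissions in the reference pool.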
This level of expertise suggests the model can serve as a high-level collaborator capable of identifying “expert-relevant patterns” that generalist models often overlook.
OpenAI is not just releasing a model; it is launching an ecosystem designed to integrate with the tools scientists already use. Central to this is a new Life Sciences research plugin for Codex, available on GitHub.
Scientific research is famously siloed. A single project might require a researcher to consult a protein structure database, search through 20 years of clinical literature, and then use a separate tool for sequence manipulation. The new plugin acts as an “orchestration layer,” providing a unified starting point for these multi-step questions.
Skill Set: The package includes modular skills for biochemistry, human genetics, functional genomics, and clinical evidence.
Connectivity: It connects models to over 50 public multi-omics databases and literature sources.
Efficiency: This approach targets “long-horizon, tool-heavy scientific workflows,” allowing researchers to automate repeatable tasks like protein structure lookups and sequence searches.
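The "orchestration layer" idea behind the plugin can be sketched as a dispatcher that decomposes a multi-step question and routes each step to a modular skill, where each skill would wrap a real database or tool. Skill names and return values here are invented, not the plugin's actual interface.

```python
# Sketch of an orchestration layer routing steps to modular skills.
# Each skill would wrap a real database or tool; stubbed here.
from typing import Callable, Dict, List

SKILLS: Dict[str, Callable[[str], str]] = {}

def skill(name: str):
    def register(fn):
        SKILLS[name] = fn
        return fn
    return register

@skill("structure_lookup")        # would query a protein structure DB
def structure_lookup(query: str) -> str:
    return f"structure record for {query}"

@skill("literature_search")       # would search clinical literature
def literature_search(query: str) -> str:
    return f"3 papers mentioning {query}"

def orchestrate(plan: List[str], query: str) -> List[str]:
    # A real agent would generate `plan` itself; here it is given.
    return [SKILLS[step](query) for step in plan]

results = orchestrate(["structure_lookup", "literature_search"], "TP53")
print(results)
```

The value is the unified entry point: the researcher asks one question, and the layer fans it out across databases and tools that previously had to be consulted one by one.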
Given the potential power of a model capable of redesigning biological structures, OpenAI is eschewing a broad “open-source” or general public release in favor of a Trusted Access program.
The model is launching as a research preview specifically for qualified Enterprise customers in the United States. This restricted deployment is built on three core principles: beneficial use, strong governance, and controlled access.
Organizations requesting access must undergo a qualification and safety review to ensure they are conducting legitimate research with a clear public benefit.
Unlike general-use models, GPT-Rosalind was developed with heightened enterprise-grade security controls. For the end-user, this means:
Restricted Access: Usage is limited to approved users within secure, well-managed environments.
Governance: Participating organizations must maintain strict misuse-prevention controls and agree to specific life sciences research preview terms.
Cost: During the preview phase, the model will not consume existing credits or tokens, allowing researchers to experiment without immediate budgetary constraints (subject to abuse guardrails).
The announcement garnered significant buy-in from OpenAI partners across the pharmaceutical and technology sectors.
Sean Bruich, SVP of AI and Data at Amgen, noted that the collaboration allows the company to apply advanced tools in ways that could “accelerate how we deliver medicines to patients.” The impact is also being felt in the specialized tech infrastructure that supports labs:
NVIDIA: Kimberly Powell, VP of Healthcare and Life Sciences, described the convergence of domain reasoning and accelerated computing as a way to “compress years of traditional R&D into immediate, actionable scientific insights”.
Moderna: CEO Stéphane Bancel highlighted the model’s ability to “reason across complex biological evidence” to help teams translate insights into experimental workflows.
The Allen Institute: CTO Andy Hickl emphasized that GPT-Rosalind stands out for making manual steps—like finding and aligning data—more “consistent and repeatable in an agentic workflow”.
This builds on tangible results OpenAI has already seen in the field, such as its collaboration with Ginkgo Bioworks, where AI models helped achieve a 40% reduction in protein production costs.
OpenAI’s mission with GPT-Rosalind is to narrow the gap between a “promising scientific idea” and the actual “evidence, experiments, and decisions” required for medical progress.
By partnering with institutions like Los Alamos National Laboratory to explore AI-guided catalyst design and biological structure modification, the company is positioning GPT-Rosalind as more than a tool—it is meant to be a “capable partner in discovery”.
As the life sciences field becomes increasingly data-dense, the move toward specialized “reasoning” models like Rosalind may become the standard for navigating the “vast search spaces” of biology and chemistry.
Confirming it has reached 3 million weekly developers, OpenAI is massively updating its Codex developer environment via its Mac and Windows desktop apps today to bring it closer to the “Super App” the company has confirmed it is pursuing.
Before today, Codex was primarily an environment for using OpenAI’s underlying language models to write, edit, debug and ship software as directed by the user.
Now, Codex can access the other apps on your computer, surface relevant information from them when asked or proactively, and take actions in those applications as directed. For Mac users, it can even do so in the background while you continue using your computer manually.
Andrew Ambrosino, an OpenAI technical staffer on the Codex team, described the change plainly in an embargoed press briefing I attended virtually yesterday: “Codex can actually click on apps, launch apps, and type into apps. This works with any apps on your machine.”
Codex on desktop is further getting its own built-in web browser, allowing users to preview their front-end development, and a directly integrated pipeline to OpenAI’s powerful AI image generation model gpt-image-1.5, allowing users to generate imagery for their projects — everything from websites to presentations to full playable PC games with hundreds of assets — all in the same style.
As Thibault “Tibo” Sottiaux, Head of Codex at OpenAI, said during the briefing: “It’s not just about the growth. It is putting a very capable agent in the hands of builders, and now we’re seeing that we’re able to expand and do a lot more work entirely across your computer.”
Asked why OpenAI was pursuing all this in Codex, not its more recognizable flagship app, ChatGPT, Sottiaux told VentureBeat: “Codex is our most powerful agent. It already worked on your computer, and so we’re expanding the capabilities there. It felt very natural. We will make it make sense at some point.”
The update comes as rival Anthropic has previously courted similar use cases with the launch of its Claude Cowork and redesigned Claude Code desktop app views, all available within the Claude desktop app for Mac and Windows. But Claude does not allow for simultaneous background app cursor usage from the desktop app across all of a user’s apps like Codex does.
The most significant technological leap in this release is “Computer Use,” limited for now to macOS users.
This feature allows Codex to break out of the traditional chatbot container to “see, click, and type” across all applications on a machine.
Crucially, this happens in the background. “It can use apps on your computer in the background, as opposed to taking over your entire computer,” explained Caffrey Lynch of OpenAI’s developer product communications.
This enables “multi-agent” workflows where Codex might be testing a frontend change or triaging a JIRA ticket while the developer continues working in a different application.
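The concurrency shape being described, agent tasks progressing off the main thread while the user keeps working, can be sketched with standard threading. This only illustrates the pattern, not how Codex actually implements background computer use; task names and timings are invented.

```python
# Sketch of the background-agent pattern: worker threads do "agent
# work" while the main thread stays free. Tasks are simulated sleeps.
import queue
import threading
import time

results: "queue.Queue[str]" = queue.Queue()

def agent_task(name: str, seconds: float) -> None:
    time.sleep(seconds)                 # stands in for real agent work
    results.put(f"{name}: done")

workers = [
    threading.Thread(target=agent_task, args=("test-frontend", 0.05)),
    threading.Thread(target=agent_task, args=("triage-ticket", 0.02)),
]
for w in workers:
    w.start()

user_work = "user keeps typing in another app"  # main thread is free

for w in workers:
    w.join()

finished = sorted(results.get() for _ in workers)
print(user_work)
print(finished)  # ['test-frontend: done', 'triage-ticket: done']
```

The key property is that neither worker "takes over" the main thread, mirroring the distinction Lynch draws between background app use and seizing the whole computer.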
For Windows users, the core Codex desktop app remains available and supported, including the ability to pull information from other apps and surface it in Codex, though it lacks the cursor-level background interaction available on Mac at launch.
Beyond operating the OS, OpenAI is doubling down on the “Software Development Lifecycle” (SDLC). The Codex app now functions more like a unified workspace, supporting everything from GitHub PR reviews to managing remote infrastructure.
“The simplest way to think about this release is teaching Codex and the app to work across a much larger surface area,” said Andrew Ambrosino, lead of Codex app development. This surface area now includes:
Integrated Browser: An in-app browser allows developers to iterate on frontend designs by commenting directly on DOM elements, providing precise instructions for the agent to follow.
Visual Primitives: By integrating gpt-image-1.5, Codex can now generate and iterate on images for mockups and game assets directly within the development workflow.
Expanded Sidebar: The app now includes rich previews for non-code files such as PDFs, spreadsheets, and slide decks, alongside a summary pane to track agent plans and sources.
Terminal & SSH: The update adds support for multiple terminal tabs and an alpha feature for connecting to remote devboxes via SSH.
To connect these disparate tasks, OpenAI is releasing more than 90 new plugins. These connectors—including CircleCI, GitLab, and Microsoft Suite—allow the agent to gather context and take action across the entire toolchain a developer uses daily.
In a demo video shown during the briefing, OpenAI presented a user typing into the Codex prompt field, “Can you check Slack, Gmail, Google Calendar, and Notion and tell me what needs my attention?” The example showed how Codex can now scan across multiple apps, gather information from all of them in a single prompt, and surface what matters most to the user.
“You can @ mention them if you want Codex to use a specific app, or if not, Codex can discover which apps to use,” Ambrosino said.
One of the more subtle but powerful shifts is the introduction of persistent agency. Through “Heartbeat Automations,” Codex can now schedule future work for itself and “wake up” to continue long-term tasks.
This allows teams to set up agents that monitor Slack channels or Notion docs and proactively update documentation or land PRs.
This is supported by a new “Memory” feature, currently in preview. Memory allows Codex to remember personal preferences, previous corrections, and gathered information, reducing the need for extensive custom instructions in every new session.
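The combination of heartbeat scheduling and memory can be sketched as an agent that records preferences, schedules future wake-ups, and resumes with its remembered context. The class design, timings, and keys below are illustrative assumptions, not Codex internals.

```python
# Sketch of "heartbeat" scheduling plus a simple memory store. Times
# are abstract integers; names and keys are invented.
import heapq
from typing import List, Tuple

class Agent:
    def __init__(self) -> None:
        self.memory: dict = {}                       # persists across sessions
        self.schedule: List[Tuple[int, str]] = []    # (wake_time, task) heap

    def remember(self, key: str, value: str) -> None:
        self.memory[key] = value

    def schedule_task(self, wake_time: int, task: str) -> None:
        heapq.heappush(self.schedule, (wake_time, task))

    def heartbeat(self, now: int) -> List[str]:
        """Run every task whose wake time has arrived."""
        ran = []
        while self.schedule and self.schedule[0][0] <= now:
            _, task = heapq.heappop(self.schedule)
            ran.append(f"{task} (knows: {self.memory.get('style')})")
        return ran

agent = Agent()
agent.remember("style", "concise PR descriptions")
agent.schedule_task(10, "update docs")
agent.schedule_task(20, "check Slack channel")

print(agent.heartbeat(now=5))    # [] -- nothing due yet
print(agent.heartbeat(now=15))   # ['update docs (knows: concise PR descriptions)']
```

Each wake-up carries the remembered preferences along, which is what lets a scheduled task pick up where a previous session left off without re-stating instructions.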
“As you use Codex, Codex also becomes better at being proactive,” noted Sottiaux.
This proactivity manifests in a “daily brief” style feature where the app suggests how to start the day by identifying open Google Doc comments or relevant Slack context.
It’s similar in spirit and practice to the new “Routines” feature launched by Anthropic for its Claude Code product earlier this week.
OpenAI has recently transitioned toward a more flexible pricing model for teams, including a $100 plan and pay-as-you-go options to accommodate the increased usage of autonomous agents. For individual users, these updates are rolling out today to those signed in to the Codex desktop app with ChatGPT.
While the Codex desktop app is available on both macOS and Windows, the rollout of specific features is tiered:
Background Computer Use: macOS only at launch.
Personalization (Memory/Suggestions): Coming soon for Enterprise, Edu, EU, and UK users.
Core Software Development Life Cycle Updates: Available to all desktop app users starting today.
When asked if these features represent the foundation of an AI “Super App,” Sottiaux confirmed the strategy: “We’re building the Super App in the open and evolving it out of the Codex app”.
The goal is to address the reality that developers spend a majority of their time on coordination and context-gathering rather than writing code.
By bringing Codex closer to the operating system and the broader ecosystem of developer tools, OpenAI is positioning it as the central nervous system for modern software development.
“Our mission is to ensure that AGI benefits all of humanity,” the company stated in its official announcement. “That means narrowing the gap between what people can imagine and what they can actually build”.