Gong, the revenue intelligence company that has spent a decade turning recorded sales calls into data, today launched what it calls Mission Andromeda — its most ambitious platform release to date, bundling a new AI-powered coaching product, a sales-focused chatbot, unified account management tools, and open interoperability with rival AI systems through the Model Context Protocol.
The release arrives at a pivotal moment. The revenue technology market is consolidating at a pace that would have been unthinkable two years ago, and Gong — still a private company with roughly $300 million in annual recurring revenue — finds itself at the center of a category that Gartner only formally defined three months ago. Mission Andromeda is Gong’s answer to a basic question facing every enterprise AI vendor in 2026: Can you move beyond surfacing insights and actually change how people work?
“The whole show, Andromeda, is basically a collection of very significant capabilities that take us a huge step forward,” Eilon Reshef, Gong’s co-founder and chief product officer, told VentureBeat in an interview ahead of the launch. He described it as an effort to make revenue teams “more productive as individuals” and to give leaders “better decisions” — positioning the release not as a feature dump, but as an operating system upgrade.
Mission Andromeda contains four main components, each targeting a different layer of the sales workflow.
The headliner is Gong Enable, a brand-new product with its own pricing tier — Reshef described it as “in the tens of dollars per seat per month” — that attacks what the company sees as a gaping hole in most sales organizations: the disconnect between training and performance. Highspot and Seismic announced their intent to merge in February 2026, creating a combined enablement giant, and Gong is now moving directly onto their turf.
Gong Enable has three pieces. The first, AI Call Reviewer, analyzes completed customer calls and grades reps based on their organization’s own methodology. When asked whether this operates in real time, Reshef was direct: “For that particular agent, it’s post-call, because obviously you want to grade the whole call as a whole — maybe you didn’t do anything in minute one, minute 30.” The second piece, AI Trainer, lets reps practice high-stakes conversations — pricing objections, renewal risk scenarios — against AI-generated simulations built from the company’s own winning call patterns. The third, Initiative Tracking, links coaching programs to revenue metrics so leaders can see whether new behaviors actually show up in live deals.
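Reshef's point about why the Call Reviewer runs post-call rather than live can be made concrete with a toy sketch: a criterion may only be satisfied somewhere across the whole transcript, so the grade is undefined until the call ends. The methodology, criteria, and keyword matching below are illustrative stand-ins for Gong's actual AI analysis, not its implementation.

```python
# Illustrative sketch of post-call grading against an org-defined
# methodology. Each criterion passes if its cue appears anywhere in
# the full transcript -- which is exactly why, as Reshef notes, the
# call must be scored "as a whole" rather than minute by minute.
def grade_call(transcript: list[str], methodology: dict[str, str]) -> dict[str, bool]:
    """Score a completed call against each criterion's cue phrase."""
    full_text = " ".join(transcript).lower()
    return {criterion: cue in full_text for criterion, cue in methodology.items()}

# Hypothetical methodology: cue phrases a rep is coached to hit.
example_methodology = {
    "identified_pain": "challenge",
    "named_champion": "sponsor",
    "discussed_metrics": "roi",
}
```

A rep who surfaces pain in minute 30 still passes that criterion, something a minute-one snapshot would miss.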
Beyond Enable, the launch includes Gong Assistant, a conversational AI chatbot purpose-built for revenue teams that lets users ask questions about customer calls inside the platform. The release also introduces Account Console and Account Boards, which unify customer activity, risk signals, and next steps into a single view for sales and post-sales teams. And rounding out the package is built-in support for the Model Context Protocol, the open standard originally developed by Anthropic, enabling Gong to exchange data with AI systems from Microsoft, Salesforce, HubSpot, and others.
In a market where every company wants to claim proprietary AI supremacy, Reshef described a notably pragmatic approach to the models powering the new features. Gong uses both internal models and foundation models from external providers, he said, noting that “four out of the five leading AI companies, LLM, are basically Gong customers.”
The company picks models task by task. “Based on the product or task at hand, we pick the right model,” he said. “We would sometimes swap in and out a model if we feel it’s best for our customers and they get more and more power.” Reshef drew a clear line between what needs a large language model and what does not: “Our revenue prediction models are not using LLMs, but kind of the core interaction chatbots — of course, you’re going to use the foundation model.”
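The task-by-task model selection Reshef describes amounts to a routing table: deterministic workloads stay on in-house models, conversational ones go to whichever foundation model currently fits, and swapping a model is a configuration change rather than a rewrite. The task names and model identifiers below are invented for illustration; Gong's actual routing is not public.

```python
# Hypothetical sketch of per-task model routing. All names here are
# illustrative placeholders, not Gong's real models or tasks.
from dataclasses import dataclass

@dataclass(frozen=True)
class Route:
    model: str    # which model serves this task
    is_llm: bool  # whether the task needs a foundation LLM at all

# Revenue prediction stays off LLMs entirely, per Reshef; core
# chat interactions route to an external foundation model.
ROUTES = {
    "revenue_prediction": Route("in-house-forecaster", is_llm=False),
    "call_summary":       Route("frontier-llm-a", is_llm=True),
    "assistant_chat":     Route("frontier-llm-b", is_llm=True),
}

def pick_model(task: str) -> Route:
    """Return the configured route for a task, failing loudly otherwise."""
    try:
        return ROUTES[task]
    except KeyError:
        raise ValueError(f"no route configured for task {task!r}")

# Swapping a model in or out is a one-line config change:
ROUTES["assistant_chat"] = Route("frontier-llm-c", is_llm=True)
```

The design choice is that the routing table, not the application code, encodes which provider is "best for our customers" at any given moment.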
This approach contrasts with competitors that have hitched their wagon to a single AI provider. It also reflects a philosophical choice: Gong’s real moat, Reshef suggested, is not the models themselves but the data underneath — what the company calls the Revenue Graph, its proprietary layer that captures phone calls, Zoom meetings, emails, text messages, WhatsApp conversations, and more, stitching them together into a connected intelligence layer.
Storing and analyzing every customer conversation a sales team has raises obvious questions about privacy and data governance. Reshef was eager to address them head-on.
“We’ve been around the block for a long while — a little bit over a decade — with AI first,” he said. “Over the years, we’ve developed exactly those capabilities that are the most boring pieces of AI, which is: how do you collect the right data? How do you manage it? How do you manage permissions about it, retention policies, right to be forgotten?”
On the sensitive question of whether Gong trains its AI on customer data across accounts, Reshef drew a firm boundary. Training, he explained, happens per customer: “The majority of the training happens based on each customer’s data.” He pointed to large accounts like Cisco, which he said has 20,000 Gong users — enough data to train the AI Trainer from within their own environment. “AI Trainer can go mine what’s working in their environment. It might not work in their competitor’s environment — maybe their benefits are different, their objections are different.”
Cross-customer training, he said, happens “only in very, very rare cases, very safe based — like transcription. But we don’t do it for business-specific processes.”
Gong’s support for Model Context Protocol is perhaps the most strategically significant piece of the launch. The company now offers built-in client and server support for MCP, enabling organizations to connect Gong with other AI systems while maintaining clear controls over data access, usage, and provenance. Gong first announced MCP support in October 2025 at its Celebrate conference, where it revealed initial integrations with Microsoft Dynamics 365, Microsoft 365 Copilot, Salesforce Agentforce, and HubSpot CRM. Today’s launch builds on that foundation.
But Reshef did not sugarcoat MCP’s limitations. “MCP is very immature when it comes to security,” he told VentureBeat. The protocol lets enterprise AI systems share data and context, but trust remains the enterprise’s responsibility. He explained a two-sided model: Gong can pull data from partners like Zendesk through certified integrations, and simultaneously makes its own MCP server available so that tools like Microsoft Copilot can query Gong’s data. “It’s up to the company which connections they actually feel are secure enough,” he said. “The safest ones are the ones that we’ve kind of like certified in a way. But MCP is an open protocol. They can connect it to their own systems. We have no control over this.”
That candor matters. As MCP adoption accelerates across the enterprise software stack, security teams are scrambling to understand what happens when agentic AI systems start talking to each other without humans in the loop. Gong appears to be betting that transparency about the protocol’s immaturity will build more trust than marketing bravado.
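The trust model Reshef describes, in which MCP itself provides no vetting and the enterprise decides which connections are "secure enough," can be sketched as a simple gatekeeper. The server names, the certified set, and the allowlist below are hypothetical, and this is a conceptual sketch of the policy decision rather than any real MCP SDK.

```python
# Hedged sketch of the MCP trust decision: the protocol is open, so
# safety comes from the org's own policy layer. Names are illustrative.
CERTIFIED_SERVERS = {"zendesk-mcp", "copilot-mcp"}  # vendor-certified integrations
COMPANY_ALLOWLIST = {"zendesk-mcp", "internal-crm-mcp"}  # what this org trusts

def may_connect(server: str) -> tuple[bool, str]:
    """Decide whether to open an MCP connection, and say why."""
    if server in CERTIFIED_SERVERS and server in COMPANY_ALLOWLIST:
        return True, "certified and allowlisted"
    if server in COMPANY_ALLOWLIST:
        # Open protocol: the org may trust its own uncertified servers,
        # but the risk sits entirely on its side.
        return True, "allowlisted at the org's own risk"
    return False, "not allowlisted; MCP itself provides no safety net"
```

The key property is the middle branch: nothing in the protocol stops an organization from wiring up an uncertified server, which is precisely the immaturity Reshef flags.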
When asked for hard numbers, Reshef offered a mix of platform-wide results and measured candor about the newest features. Existing Gong customers report roughly a 50 percent reduction in sales rep ramp time and 10 to 15 percent improvements in win rates, he said.
But on Gong Enable specifically, he acknowledged the product is still brand new. “The trainer has been in the market for literally, you know, days, a week,” he said. “I would probably lie to you if I said, ‘Hey, we’re already seeing people crushing it after taking three or four courses.'” For the earlier version of Enable that includes the AI Call Reviewer, however, he said customers are “definitely seeing a very high kind of skill improvement” and are attributing increases in win rates and quota attainment to those gains — though he conceded that “it’s always hard to do 100 percent attribution.”
Morningstar, one of Gong’s early adopters, offered a pre-launch endorsement. Rae Cheney, Director of Sales Enablement Technology at Morningstar, said in a statement that Gong Enable helped the firm “spend less time on status updates and more time on the work that actually moves deals.”
One of the more interesting threads in Reshef’s remarks concerned his view of AI autonomy — or rather, its limits. He pushed back on what he called a “common misperception about AI” — that it operates completely autonomously.
“There has to be a person in the middle, which I call operator,” he said. “It could be RevOps. It could be enablement. In the case of training, it could be analysts. Sometimes it could be even business leaders.” Those operators, he argued, are responsible for a “repeatable process of AI doing something, measuring the AI” and adjusting over time.
This philosophy extends to the AI Call Reviewer’s feedback. Gong does not dictate what the system trains on — enablement leaders choose. “We don’t decide what they want to train on. We let them choose,” Reshef said. “You iterate, you optimize, you see how it goes, and there has to be somebody in the organization who’s responsible for making sure this aligns with the business needs.”
That stance puts Gong at odds with the more aggressive “autonomous agent” rhetoric emerging from some competitors, and it may resonate with enterprise buyers who remain cautious about letting AI run unsupervised in revenue-critical workflows.
Mission Andromeda does not exist in a vacuum. The revenue AI landscape has been reshaped by a remarkable wave of consolidation over the past six months.
In a category-defining move, Clari and Salesloft merged in December 2025 to form what they called a “Revenue AI powerhouse,” combining roughly $450 million in ARR under new CEO Steve Cox. Just two weeks ago, Highspot and Seismic signed a definitive agreement to merge, creating a combined entity worth more than $6 billion focused on AI-powered sales enablement — the very same territory Gong is now invading with Enable.
Meanwhile, Gong was named a Leader in the inaugural 2025 Gartner Magic Quadrant for Revenue Action Orchestration, published in December. The company placed highest among the 12 vendors evaluated on both the “Ability to Execute” and “Completeness of Vision” axes and ranked first in all four evaluated use cases in Gartner’s companion Critical Capabilities report.
In his interview, Reshef did not name competitors directly, but he drew a clear contrast. “We’ve built a product from the ground up. It’s all organic,” he said. “All of the other players in the field have sort of stitched together tools. And obviously you can’t just get it to be a coherent product if you just stitch together tools. Some of them even have multiple logins.” That is a thinly veiled shot at the merged Clari-Salesloft entity, which Forrester has described as presenting a “bifurcated approach” — Salesloft serving frontline users while Clari supports management insights.
Reshef also pointed to growth as a competitive weapon. “We’re growing at the top deck side in terms of SaaS companies,” he said, adding that Gong hired roughly 200 R&D employees this year and plans to hire another 200. “It’s kind of a flywheel where we can invest more in R&D, we make the product better, we get more capabilities, more flexibility, more enterprise customers.”
Any discussion of Gong’s trajectory inevitably raises the question of a public offering. When asked directly, Reshef declined to comment: “I wouldn’t comment on IPO at this stage. No.”
The company has been on a clear growth arc. Gong has raised approximately $584 million to date, with its last official funding round valuing it at $7.25 billion — the culmination of a series of rapid jumps from $750 million in 2019 to $2.2 billion in 2020. The company reached an annual sales run rate of approximately $300 million in January 2025, driven largely by the adoption of AI, according to Calcalist.
But that valuation has since slipped. As Calcalist reported in November 2025, Gong is conducting a secondary round for company employees and investors at a valuation of roughly $4.5 billion — well below its 2021 peak. The offering is being conducted through Nasdaq’s private market platform and was in advanced stages at the time of the report. It is not yet clear whether the company has repriced employee options, some of which were issued at significantly higher valuations than the current secondary round. Gong told Calcalist that it “regularly receives inquiries from potential investors” but as a private company does not “engage in speculation.”
The structured quarterly launch cadence that Mission Andromeda inaugurates — complete with galactic naming conventions and coordinated product narratives — certainly resembles the kind of predictable, story-driven approach that public market investors reward. Reshef framed it differently: “We felt like having quarterly launches with a name, a mission, and a story around it makes it easier to work… It’s a good way to educate the market on a regular basis.”
Reshef’s most revealing comment came when he laid out the company’s long-term thesis: Gong aims to increase productivity for revenue professionals by 50 percent. “We’re not there yet,” he admitted. “I think we’re like at 20 to 30 — whatever, hard to measure.”
He broke the productivity gain into two categories. The first is making high-complexity human tasks — like conducting a live Zoom sales call — better, through coaching, training, and review. “I think there’s going to be a long while, if ever, that Zoom conversations are going to get replaced by bots,” he said. The second is automating the manual drudgery: call preparation, post-meeting summaries, follow-up emails, account research briefs.
The distinction matters because it frames Gong’s ambition not as replacing salespeople but as making them dramatically more effective — a message calibrated to appeal to the thousands of revenue leaders who control Gong’s buying decisions. Whether that thesis holds will depend on whether Gong Enable, the AI Trainer, and the rest of Mission Andromeda can deliver measurable gains in a market that has been burned before by tools that promise insight but struggle to change behavior.
Gong currently serves more than 5,000 companies worldwide. The Clari-Salesloft merger has produced a rival with deeper combined resources. The Highspot-Seismic combination is assembling a sales enablement colossus. And a new Gartner category means every enterprise buyer now has a framework for comparison shopping. The next twelve months will test whether Mission Andromeda is the release that cements Gong’s position at the center of the revenue AI category — or the last big swing before the consolidated giants close in.
“Our mission is to be at the forefront,” Reshef said. “If everybody else is doing 20 percent, we’re going to do 50. If everybody is going to do 50, we’re going to do 80.”
In the revenue AI wars, that kind of confidence is easy to project. Delivering on it, with brand-new products still days old and a market being remade around you in real time, is something else entirely.
Anthropic opened its virtual “Briefing: Enterprise Agents” event on Tuesday with a provocation. Kate Jensen, the company’s head of Americas, told viewers that the hype around enterprise AI agents in 2025 “turned out to be mostly premature,” with many pilots failing to reach production. “It wasn’t a failure of effort, it was a failure of approach, and it’s something we heard directly from our customers,” Jensen said.
The implicit promise: Anthropic has figured out the right approach, and it starts with the playbook that made Claude Code one of the most consequential developer tools of the past year. “In 2025 Claude transformed how developers work, and in 2026 it will do the same for knowledge work,” Jensen said. “The magic behind Claude Code is simple. When you can delegate hard challenges, you can focus on the work that actually matters. Cowork brings that same power to knowledge workers.”
That framing is central to understanding what Anthropic announced on Tuesday. The company rolled out a sweeping set of enterprise capabilities for Claude Cowork, the AI productivity platform it first released in research preview in January. Scott White, head of product for Claude Enterprise, described the ambition plainly during the keynote: “Cowork makes it possible for Claude to deliver polished, near final work. It goes beyond drafts and suggestions — actual completed projects and deliverables.”
The product updates are dense but consequential. Enterprise administrators can now build private plugin marketplaces tailored to their organizations, connecting to private GitHub repositories as plugin sources and controlling which plugins employees can access. Anthropic introduced new prebuilt plugin templates spanning HR, design, engineering, operations, financial analysis, investment banking, equity research, private equity, and wealth management. The company also shipped new MCP connectors for Google Drive, Google Calendar, Gmail, DocuSign, Apollo, Clay, Outreach, SimilarWeb, MSCI, LegalZoom, FactSet, WordPress, and Harvey — dramatically extending Claude’s reach into the software ecosystem that enterprises already use. And Claude can now pass context seamlessly between Cowork, Excel, and PowerPoint, including across multiple files, without requiring users to restart when switching applications.
White emphasized that the system is designed to feel native to each organization rather than generic. “We’ve heard loud and clear from enterprises — you want Claude to work the way that your company works, not just Claude for legal, but Cowork for legal at your company,” he said. “That’s exactly what today’s launches deliver.”
To ground the product announcements in measurable outcomes, Anthropic showcased three enterprise deployments that illustrate both the scale and the variety of impact the company claims Claude can deliver.
At Spotify, engineers had long struggled with code migrations — the slow, manual work of updating and modernizing code across thousands of services. Jensen explained that after integrating Claude directly into the system Spotify’s engineers use daily, “any engineer can kick off a large-scale migration just by describing what they need in plain English.” The company reports up to a 90% reduction in engineering time, over 650 AI-generated code changes shipped per month, and roughly half of all Spotify updates now flowing through the system.
At Novo Nordisk, the pharmaceutical giant built an AI-powered platform called NovoScribe with Claude as its intelligence layer, targeting the grueling process of producing regulatory documentation for new medicines. Staff writers had previously averaged just over two reports per year. After deploying Claude, Jensen said, “documentation creation went from 10 plus weeks to 10 minutes. That’s a 95% reduction in resources for verification checks. Medicines are reaching patients faster.” Jensen also noted that Novo Nordisk used Claude Code to build the platform itself, enabling contributions from non-engineers — their digitalization strategy director, who holds a PhD in molecular biology rather than engineering, now prototypes features using natural language. “A team of 11 is operating like a team many times its size,” Jensen said.
Salesforce, meanwhile, uses Claude models to help power AI in Slack, reporting a 96% satisfaction rate for tools like its Slack bot and saving customers an estimated 97 minutes per week through summarization and recap features. The partnership reflects Anthropic’s broader ecosystem strategy: Jensen described the companies featured at the event as “Claude partners and domain experts with the data and trusted relationships that make Claude work in the real world.”
Perhaps the most illuminating segment of the event was a panel discussion featuring executives from Thomson Reuters, the New York Stock Exchange, and Epic, who provided candid assessments of AI’s enterprise reality that went well beyond the polished case studies.
Sridhar Masam, CTO of the New York Stock Exchange, described his organization as “rewiring our engineering process” with Claude Code and building internal AI agents using the Claude Agent SDK that can take instructions from a Jira ticket all the way to a committed piece of code. But he also identified fundamental shifts in how leaders must think. “The accountability is shifting,” he said. “Traditionally, we are so used to building deterministic platforms. You write code requirements and build. And now, with AI being probabilistic, the accountability doesn’t end when the project goes live, but on a daily basis, monitoring the behavior and outcomes.” He described a new paradigm beyond “buy versus build” — what he called “assembly,” the practice of combining multiple models, multiple vendors, platforms, data, and internal capabilities into solutions. And he noted that highly regulated industries must shift “from risk avoidance to risk calibration,” because simply avoiding AI is no longer a competitive option.
Steve Haske from Thomson Reuters, whose Co-Counsel product has reached a million users, was frank about the gap between what the technology can do and what organizations are ready for. “The tools are in many senses ahead of the change management,” he said. “A general counsel’s office, a law firm, a tax and accounting firm, an audit firm, need to rewire the processes to be able to take advantage of the benefits that the tools provide. And I think it’s 18 months away before that sort of change management catches up with the standard of the tool.” He also stressed an “ironclad guarantee” to Co-Counsel customers that “their input will not be part of our AI output,” and urged enterprise leaders to be “feverish” about protecting institutional intellectual property.
Seth Hain from Epic — the healthcare technology company behind MyChart — offered a finding that may foreshadow where enterprise AI adoption is truly heading. “Over half of our use of Claude Code is by non-developer roles across the company,” Hain said, describing how support and implementation staff had adopted the tool in ways the company never anticipated. Hain also described a deliberate trust-building strategy: Epic’s first AI capability was a medical record summarization that included links to the underlying source material, giving clinicians the ability to verify and build confidence before the company introduced more autonomous agent capabilities.
Tuesday’s announcements cannot be understood in isolation. They are essentially the culmination of a year in which Anthropic transformed itself from a research-focused AI lab into a company with genuine enterprise distribution and developer ecosystem gravity.
The trajectory began with Claude Code, which Jensen noted had taken coding use cases “from assisting on tiny tasks to AI writing 90 or sometimes even 100% of the code, with enterprises shipping in weeks what once took many quarters.” But the deeper structural shift was the adoption of MCP — the Model Context Protocol — which has become the connective tissue allowing Claude to reach into and act upon data across an organization’s entire technology stack. Where previous AI tools were constrained to the information users manually fed them, MCP-connected Claude can pull context from Slack threads, Google Drive documents, CRM records, and financial systems simultaneously. This is what makes the plugin architecture announced Tuesday fundamentally different from earlier chatbot-style enterprise AI: it turns Claude into a reasoning layer that sits across an organization’s existing infrastructure rather than alongside it.
The implications for the broader AI industry are profound. Anthropic is effectively building a platform play — private plugin marketplaces, portable file-based plugins, and an expanding library of MCP connectors — that echoes the ecosystem strategies of earlier platform giants like Salesforce and Microsoft. The difference is velocity: Anthropic is compressing into months the kind of ecosystem development that previously took years. The company’s willingness to ship sector-specific plugin templates for investment banking, equity research, and wealth management alongside general-purpose tools signals that it sees no bright line between platform and application, between enabling partners and competing with them.
This strategic ambiguity is precisely what has spooked Wall Street. IBM shares suffered their worst single-day loss since October 2000 — falling 13.2% — on Monday after Anthropic published a blog post about using Claude Code to modernize COBOL, the decades-old programming language that runs on IBM’s mainframe systems. Enterprise software stocks had already been under heavy pressure since the initial Cowork announcement on January 30, with companies like ServiceNow, Salesforce, Snowflake, Intuit, and Thomson Reuters all experiencing steep declines. Cybersecurity companies tumbled after the company unveiled Claude Code Security on February 20.
Yet Tuesday’s event triggered a partial reversal that revealed something important about how markets are processing AI disruption. Companies named as Anthropic partners and integration targets — Salesforce, DocuSign, LegalZoom, Thomson Reuters, FactSet — all rallied, some sharply. Thomson Reuters surged more than 11%. The market appears to be drawing a new distinction: companies integrated into Anthropic’s ecosystem may benefit, while those standing outside it face existential risk.
Peter McCrory, Anthropic’s head of economics, presented data from the Anthropic Economic Index that offered a sober counterweight to the event’s product optimism. Using privacy-preserving methods to analyze how people and businesses use Claude, McCrory’s team has tracked AI’s diffusion across more than 150 countries and every US state.
The headline finding is striking: a year ago, roughly a third of all US jobs had at least a quarter of their associated tasks appearing in Claude usage data. That figure has now risen to approximately one in every two jobs. “The scope of impact is broadening out throughout the economy as the tools and as the technology becomes more capable,” McCrory said. He characterized AI as a “general purpose technology” in the economic sense — meaning virtually no facet of the economy will be unaffected.
McCrory drew a critical distinction between automation, where Claude simply executes a task, and augmentation, where it collaborates with a human on more complex work. When businesses embed Claude through the API, he noted, “we see overwhelmingly Claude is being embedded in automated ways” — a pattern consistent with how transformative technologies have historically diffused through the economy.
On the question of job displacement, McCrory was measured but direct. He noted that “roles that typically require more years of schooling have the largest productivity or efficiency gains,” suggesting a dynamic economists call skill-biased technical change. He expressed concern about “jobs that are pure implementation” — citing data entry workers and technical writers as examples where Claude is already being used for tasks central to those occupations. But he emphasized that no evidence of widespread labor displacement has materialized yet, and pointed to forthcoming research that would introduce methodology for monitoring whether highly exposed workers are beginning to experience it.
His advice to enterprise leaders cut to the heart of the organizational challenge. “It might not just be about fundamental capabilities of the model,” McCrory said. “Do you have the right sort of data ecosystem, data infrastructure to provide the right information at the right time?” If the knowledge Claude needs to execute a sophisticated task exists only in a coworker’s head, he argued, “that’s not a technical problem, per se. That’s an organizational problem.”
Jensen described a concept Anthropic calls “the thinking divide” — the growing gap between organizations that embed AI across employees, processes, and products simultaneously, and those that treat it as a point solution. The companies on the right side of that divide, she argued, will compound their advantage over time. Those on the wrong side “will find themselves falling further and further behind.”
Whether Anthropic ultimately functions as the rising tide that lifts the enterprise software ecosystem or the wave that swamps it remains genuinely uncertain. The same event that triggered a rally in shares of Anthropic’s named partners has also accelerated a broader reckoning for legacy software companies that cannot yet articulate how they fit into an AI-native world. McCrory, the economist, counseled humility. “Capabilities are moving very, very quickly,” he said. “It might represent an innovation in the method of innovation. So it’s not just making us better at the things that we do — it’s helping us discover new ways to do things.”
Thomson Reuters’ Haske perhaps put it most practically. “As leaders, we all have to get personally involved and personally invested in using the tools,” he said. “We’ve got to move fast. This environment is changing quickly. We cannot afford to get left behind.”
A Fortune 10 CIO recently told Jensen that enterprises would need to fit a decade of innovation into the next few years. The CIO smiled and said: “We’re going to do it in one with you.” Whether that confidence proves prescient or premature, one thing is clear from Tuesday’s event — the window for figuring it out is closing faster than most boardrooms realize.
Web search has already been disrupted by AI: Google now routinely places AI Overviews (summaries of search results) at the top of its results pages, Bing integrated OpenAI’s GPT models early on, and Perplexity continues to build out its own AI-driven search platform and browser.
Nimble announced the launch of its Agentic Search Platform, a system designed to transform the public web into trusted, decision-grade data for AI systems and business workflows.
The launch is supported by $47 million in Series B financing led by Norwest, with participation from Databricks Ventures and others, bringing the company’s total funding to $75 million.
The initiative addresses a fundamental bottleneck in the current AI era: while large language models (LLMs) are becoming more sophisticated, they often reason over incomplete or unverifiable external information. Nimble’s platform aims to eliminate this “guesswork gap” by providing a governed data layer that searches, navigates, and validates live internet data in real time.
In an exclusive interview with VentureBeat, Nimble co-founder and CEO Uri Knorovich reflected on the early skepticism regarding his vision of a machine-centric internet.
“Whenever we started this company, and the first time I went to investors, I told them the web is built for humans, but machines are going to be the first citizens of the web,” Knorovich recalled. He noted that while initial reactions labeled him as “too visionary,” the current reality of AI adoption has validated his thesis.
The core of Nimble’s solution is a proprietary distributed architecture that orchestrates specialized agents to perform tasks traditionally handled by human researchers or brittle web scrapers. According to the company’s infrastructure documentation, the process is broken down into five distinct layers:
Headless browser and browsing agents: These layers manage the initial interaction with a target domain, navigating complex site structures as a human would.
Parsing agents: These agents interpret the page content, identifying relevant data elements across various formats.
Data processing agents: This layer aggregates, filters, and cleans noisy internet data to produce specific, structured answers.
Validation agents: The final step involves verifying the results to ensure accuracy and completeness before delivery.
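The five layers above can be sketched as a simple pipeline. Everything in this sketch — function names, return shapes, the sample data — is illustrative only; Nimble's actual architecture is proprietary, and this merely mirrors the layering the company describes.

```python
# Illustrative sketch of a layered agentic search pipeline.
# All names and interfaces here are hypothetical, not Nimble's code.

def browse(url: str) -> str:
    """Headless browser / browsing agents: fetch and render a page."""
    return f"<html>rendered content of {url}</html>"

def parse(html: str) -> list[dict]:
    """Parsing agents: identify relevant data elements in the page."""
    return [{"field": "price", "value": "42.00", "source": html[:20]}]

def process(records: list[dict]) -> list[dict]:
    """Data processing agents: aggregate, filter, and clean noisy records."""
    return [r for r in records if r.get("value")]

def validate(records: list[dict]) -> list[dict]:
    """Validation agents: verify results before delivery."""
    return [r | {"validated": True} for r in records]

def agentic_search(url: str) -> list[dict]:
    """Run one URL through all layers, producing auditable records."""
    return validate(process(parse(browse(url))))

result = agentic_search("https://example.com/listing")
```

The point of the layering is that each stage emits structured records the next stage can check, which is what makes the final output auditable rather than a free-text summary.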
Unlike standard search engines designed for consumer link-clicking, this architecture uses multimodal and reasoning capabilities from frontier models—including those from OpenAI, Anthropic, and Meta—to control real browsers. This allows Nimble to navigate dynamic layouts and cross-check results, producing auditable data outputs rather than simple text summaries.
Knorovich points out that the scale of AI interaction with the web is fundamentally different from human behavior. “We, as humans, search for maybe three or five options before we make decisions… but every day, Nimble performs more than 3.2 million interactions on the web,” he explained. This sheer volume of monthly searches represents a programmatic shift that requires a new type of infrastructure.
The bottleneck for enterprises today, according to Knorovich, isn’t the intelligence of the models, but the quality of the data they can access. “Agents are the headlines, and accurate and reliable web search is the bottleneck,” he stated.
Knorovich explicitly differentiates Nimble from general-purpose tools like Google or consumer AI search assistants.
While Google has built a search experience for consumers that is optimized for speed and finding a local restaurant, enterprises require high-scale, high-accuracy results to make multi-million dollar decisions.
“General-purpose web search tools are great for general answers, such as who is the wife of Lionel Messi,” Knorovich remarked during the interview. “But enterprises need deep, granular data, and they need the ability to control the search filters, to control the regulation, to control what is a trusted source.” Unlike consumer AI modes that may summarize a Reddit post or high-level news, Nimble provides “street-level” information that can be stored directly in an enterprise system of record.
The Agentic Search Platform is delivered through two primary interfaces designed for enterprise scalability:
Web search agents: A no-code AI workflow builder that enables business teams to describe the data they need and receive structured data streams without writing a line of code.
Web tools SDK: A suite of APIs for builders to search, extract, and crawl the web directly from their code. This includes specialized tools like the /crawl API for mapping entire domains and the /map API for creating domain trees.
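To make the SDK surface concrete, here is a sketch of what request and response handling for the two endpoints named above might look like. The payload fields and the domain-tree shape are assumptions for illustration, not Nimble's documented schema.

```python
# Hypothetical request/response shapes for the /crawl and /map
# endpoints; field names are assumptions, not Nimble's real schema.
import json

def build_crawl_request(domain: str, max_pages: int = 100) -> dict:
    """Assemble a payload for a hypothetical POST /crawl call."""
    return {"url": f"https://{domain}", "limit": max_pages, "format": "structured"}

def parse_map_response(raw: str) -> list[str]:
    """Flatten a hypothetical /map domain-tree response into a URL list."""
    tree = json.loads(raw)
    urls = [tree["url"]]
    for child in tree.get("children", []):
        urls.extend(parse_map_response(json.dumps(child)))
    return urls

req = build_crawl_request("example.com", max_pages=50)
sample = ('{"url": "https://example.com", "children": '
          '[{"url": "https://example.com/docs", "children": []}]}')
print(parse_map_response(sample))  # → ['https://example.com', 'https://example.com/docs']
```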
The platform is built to deliver data with greater than 99% accuracy — meaning fewer than 1% of the data points in each returned search result are inaccurate or hallucinated — at a latency of 1–2 milliseconds per request.
It integrates natively with major data environments, allowing users to stream clean data directly into Databricks, Snowflake, S3, or Microsoft Fabric.
During the interview, Knorovich emphasized that Nimble is designed to be model-agnostic, working seamlessly with state-of-the-art models from OpenAI, Anthropic, and Google’s Gemini. This flexibility allows companies to use Nimble alongside their existing tech stack, whether they are running models in the cloud or on-premise for high-security environments like healthcare or banking.
Knorovich provided several real-world examples of how this “street-level” data impacts professional workflows. For instance, a real estate broker looking to expand into a new territory doesn’t need a high-level summary from a general-purpose AI.
“If you want to know what’s happening in commercial real estate in Atlanta… you’re not looking for search that’s optimized for the millisecond,” Knorovich explained. “You’re looking for street-level, neighborhood-level information… data that you can actually see in a table or download to Excel.”
Another use case involves major financial institutions utilizing Nimble for “know your customer” (KYC) processes. By deploying an autonomous search agent, banks can cross-reference multiple public reports, criminal records, and address verifications to build a complete profile of a client before they even enter the building. The goal, Knorovich noted, is to provide the “external truth” that exists outside an organization’s internal firewalls.
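The cross-referencing pattern described for KYC can be reduced to a small check: query several independent public sources for the same attribute and only trust the profile when they agree. This is a minimal sketch of that pattern under invented data, not a production KYC implementation.

```python
# Minimal sketch of KYC-style cross-referencing: flag a profile
# field as trusted only when independent public sources agree.
# Source names and values are invented for illustration.

def cross_reference(sources: dict[str, str]) -> dict:
    """Compare one attribute (e.g. an address) across sources."""
    values = set(sources.values())
    consistent = len(values) == 1
    return {
        "consistent": consistent,
        "sources_checked": sorted(sources),
        "value": values.pop() if consistent else None,
    }

profile = cross_reference({
    "public_registry": "12 Main St, Atlanta",
    "court_records": "12 Main St, Atlanta",
    "press_mentions": "12 Main St, Atlanta",
})
```

A disagreement between sources would leave `value` empty, which is exactly the case a bank would escalate to a human reviewer.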
Nimble differentiates itself from legacy scraping tools through a rigorous focus on governance and trust. The platform is “compliant by design,” holding certifications for SOC 2 Type II, GDPR, CCPA, and HIPAA.
Pricing is structured to support both experimental startups and high-scale enterprise operations, aligned with the volume and depth of data retrieved.
“Pricing should be aligned with the value that the user is getting… therefore, we are pricing by the amount of searches that you’re running,” Knorovich said.
Search and answer APIs: Standard search inputs cost $1 per 1,000, while the “Answer” function—which provides reasoning based on search results—costs $4 per 1,000.
Managed services: For larger organizations, managed tiers start at $2,000 per month (Startup) and scale to $15,000 per month (Professional) for unlimited agents and priority support.
Proxy access: A network of over 1 million residential proxies is available starting at $7.50 per GB.
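Using the list prices above, a back-of-the-envelope monthly cost for a pay-as-you-go workload is straightforward to estimate (the example volumes are hypothetical):

```python
# Cost model from the published list prices: $1 per 1,000 search
# calls, $4 per 1,000 "Answer" calls, $7.50 per GB of proxy traffic.

def monthly_cost(searches: int, answers: int, proxy_gb: float) -> float:
    """Estimate monthly spend in USD for a pay-as-you-go workload."""
    return searches / 1000 * 1.00 + answers / 1000 * 4.00 + proxy_gb * 7.50

# e.g. 500k searches, 100k answers, 20 GB of proxy traffic:
print(monthly_cost(500_000, 100_000, 20))  # → 1050.0
```

At that volume, the $2,000/month Startup managed tier would already be in the same range, which is presumably where the managed plans are meant to take over.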
The transition to agentic search has already been operationalized by several Fortune 500 companies and AI-native startups:
Julie Averill, former CIO at Lululemon, said that by putting control in the hands of an agent, pricing intelligence that once took weeks to review can now be acted on in minutes.
Itamar Fridman, CEO and Co-founder of Qodo, noted that the platform’s scalability was “crucial in developing more robust and reliable AI systems” by feeding LLMs with high-quality data.
Dennis Irorere, Data Engineer at TripAdvisor, highlighted that the platform simplifies the extraction of structured data from complex sources, which he described as “transformative” for his role.
Grips Intelligence reported scaling to over 45,000 e-commerce sites using Nimble’s Web API to deliver real-time pricing and product data.
Alta utilizes the platform to power millions of AI-driven go-to-market workflows daily, reporting 3–4× deeper context and >99% reliability.
The $47 million Series B funding announced alongside the platform will be used to accelerate research in multi-agent web search and further develop the governed data layer.
The round saw participation from a wide ecosystem of investors, including Target Global, Square Peg, Hetz Ventures, Slow Ventures, R-Squared Ventures, J-Ventures, and InvestInData.
Andrew Ferguson, VP of Databricks Ventures, noted that Nimble complements their Data Intelligence Platform by providing a “real-time web data layer” that extends workflows beyond internal sources. This strategic investment signals a shift in the industry toward prioritizing “external truth” to ground mission-critical AI applications.
For Knorovich, the future of the web belongs to programmatic interaction. “Programmatic web search is where we are building towards,” he concluded. By moving away from legacy data vendors and brittle scrapers, Nimble aims to provide the real-time structure needed for AI to act with confidence in the real world.