Status Audio Pro X Earbuds Come With Triple Drivers And AI Speech Enhancement

The latest earbuds from Status Audio have been redesigned to be more comfortable to wear. The Pro X earbuds have a triple-driver design and AI noise canceling.

Security Cameras Get Smarter With Matter 1.5.1 Update

Matter 1.5.1 focuses on making smart security cameras actually work. Updates to video streaming, storage, and PTZ controls refine the Connectivity Standards Alliance’s latest spec.

AWS Deploys AI Agents To Do The Work Of DevOps And Security Teams

AWS launches two autonomous AI agents for DevOps and security that work without human oversight, challenging the economics of traditional engineering teams.

Sony And TCL Reveal More Intriguing Details Of Their New AV Joint Venture

And from its name through to its CEO and base of operations, it seems Sony product fans have much to feel relieved about.

Slack adds 30 AI features to Slackbot, its most ambitious update since the Salesforce acquisition

Slack today announced more than 30 new capabilities for Slackbot, its AI-powered personal agent, in what amounts to the most sweeping overhaul of the workplace messaging platform since Salesforce acquired it for $27.7 billion in 2021. The update transforms Slackbot from a simple conversational assistant into a full-spectrum enterprise agent that can take meeting notes across any video provider, operate outside the Slack application on users’ desktops, execute tasks through third-party tools via the Model Context Protocol (MCP), and even serve as a lightweight CRM for small businesses — all without requiring users to install anything new.

The announcement, timed to a keynote event that Salesforce CEO Marc Benioff is headlining Tuesday morning, arrives less than three months after Slackbot first became generally available on January 13 to Business+ and Enterprise+ subscribers. In that short window, Slack says the feature is on track to become the fastest-adopted product in Salesforce’s 27-year history, with some employees at customer organizations reporting they save up to 90 minutes per day. Inside Salesforce itself, teams claim savings of up to 20 hours per week, translating to more than $6.4 million in estimated productivity value.

“Slackbot is smart. It’s pleasant, and I think it’s endlessly useful,” Rob Seaman, Slack’s interim CEO and former chief product officer, told VentureBeat in an exclusive interview ahead of the announcement. “The upper bound of use cases is effectively limitless for it.”

The release signals Slack’s clearest bid yet to become what Seaman and the company’s leadership describe as an “agentic operating system” — a single surface through which workers interact with AI agents, enterprise applications, and one another. It also marks a direct challenge to Microsoft, which has spent the past two years embedding its Copilot assistant across the entirety of its productivity stack.

From simple chatbot to autonomous coworker: six new capabilities that redefine what Slackbot can do

The features announced Tuesday are organized around several major capability areas, each designed to push Slackbot well beyond the role of a chatbot and into something closer to an autonomous digital coworker.

The most foundational may be what Slack is calling AI-Skills — reusable instruction sets that define the inputs, the steps, and the exact output format for a given task. Any team can build a skill once and deploy it on demand. Slackbot ships with a built-in library for common workflows, but users can also create their own. Critically, Slackbot can recognize when a user’s prompt matches an existing skill and apply it automatically, without being explicitly told to do so. “Think of these as topics or instructions — basically instructions for Slackbot to perform a repeat task that the user might want to do, that they can share with others, or a company might be able to set up for their whole company,” Seaman explained.
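To make the concept concrete, here is a hypothetical sketch of what such a skill definition could look like, expressed in TypeScript. Slack has not published a schema, so every field name below is an assumption; the point is only the structure Seaman describes: named inputs, repeatable steps, and an exact output format.

```typescript
// Hypothetical shape of a reusable Slackbot "AI-Skill." Slack has not
// published a schema; every field name here is an assumption used only to
// illustrate the structure described above.

interface AISkill {
  name: string;                   // how the skill is referenced and shared
  triggerHints: string[];         // phrases that let Slackbot match a prompt to this skill
  inputs: Record<string, string>; // named inputs with a description of each
  steps: string[];                // the repeatable instructions Slackbot follows
  outputFormat: string;           // the exact shape the result must take
}

const weeklyStatusSkill: AISkill = {
  name: "weekly-status-report",
  triggerHints: ["weekly status", "what happened this week"],
  inputs: {
    channel: "the project channel to summarize",
    week: "the ISO week to cover",
  },
  steps: [
    "Collect messages and decisions from the given channel for the given week",
    "Group them by workstream",
    "Flag open blockers and their owners",
  ],
  outputFormat: "Bulleted summary with sections: Shipped, In Progress, Blockers",
};

console.log(`Registered skill: ${weeklyStatusSkill.name}`);
```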

Deep research mode gives Slackbot the ability to conduct extended, multi-step investigations that take approximately four minutes to complete — a significant departure from the instant-response paradigm of most enterprise chatbots. Slack chose not to demonstrate this feature on stage at the keynote, Seaman said, precisely because its value lies in depth, not speed. MCP client integration, meanwhile, allows Slackbot to make tool calls into external systems through the Model Context Protocol, meaning it can now create Google Slides, draft Google Docs, and interact with the more than 2,600 apps in the Slack Marketplace and the 6,000-plus apps built over two decades for the Salesforce AppExchange. “We’re going all in on MCP for Slackbot,” Seaman said. “MCP clients and MCP servers are becoming very mature.”
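For readers unfamiliar with MCP, the protocol itself is public: it is built on JSON-RPC 2.0, and a client invokes a server-side tool with a tools/call request. The sketch below shows the generic shape of such a call; the tool name and arguments are invented for illustration, since Slack has not published Slackbot’s actual tool surface.

```typescript
// The generic shape of an MCP tool invocation, per the public Model
// Context Protocol spec (JSON-RPC 2.0). The tool name and arguments are
// invented for illustration.

const toolCallRequest = {
  jsonrpc: "2.0" as const,
  id: 1,
  method: "tools/call",
  params: {
    name: "create_doc", // hypothetical tool exposed by an MCP server
    arguments: {
      title: "Q3 Planning Notes",
      content: "Summary of decisions from #q3-planning...",
    },
  },
};

// A client typically discovers what it can call first via a "tools/list"
// request, then sends invocations like the one above over stdio or HTTP.
console.log(JSON.stringify(toolCallRequest, null, 2));
```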

Meeting intelligence allows Slackbot to listen to any meeting — not just Slack huddles, but calls on Zoom, Google Meet, or any other provider — by tapping into the user’s local audio through the desktop application. It captures discussions, summarizes decisions, surfaces action items, and because Slackbot is natively connected to Salesforce, it can log actions and update opportunities directly in the CRM. Slackbot on Desktop extends the agent outside the Slack container entirely, while voice mode adds text-to-speech and speech-to-text capabilities, with full speech-to-speech functionality under active development.

How Anthropic’s Claude powers Slackbot — and why keeping it affordable is the hardest part

Slackbot is built on Anthropic’s Claude model, a detail Seaman confirmed ahead of the keynote, where Anthropic’s leadership will appear alongside Slack executives on stage. The partnership underscores the deepening relationship between the two companies: Anthropic’s technology powers the reasoning layer, while Slack’s “context engineering” — the process of determining exactly which information from a user’s channels, files, and messages should be fed into the model’s context window — determines the quality and relevance of every response.

Managing the cost of that reasoning at enterprise scale is one of the most significant technical and financial challenges the team faces. Slackbot is included in Business+ and Enterprise+ plans at no additional consumption charge — a deliberate strategic choice that places the burden of cost optimization squarely on Slack’s engineering team rather than on customers.

“A lot of what we’ve done is in the context engineering phase, working really closely with Anthropic to make sure that we’re optimizing the RAG phase, optimizing our system prompts and everything, to make sure we’re getting the right amount of context into the context window and not obviously making fiscally irresponsible decisions for ourselves,” Seaman said. Starting in April, Slackbot will also become available in a limited sampling capacity to users on Slack’s free and Pro plans — a move designed to drive conversion up the pricing tiers.

Desktop AI and meeting transcription are powerful, but they raise hard questions about workplace surveillance

The extension of Slackbot beyond the Slack application window — particularly its ability to listen to meetings and view screen content — raises immediate questions about employee surveillance, especially in large enterprise environments where tens of thousands of workers may be subject to company-wide IT policies.

Seaman was emphatic that every capability is user-initiated and opt-in. Slackbot cannot listen to audio unless the user explicitly tells it to take meeting notes. It cannot view the desktop autonomously; in its current form, users must manually capture and share screenshots. And it inherits every permission the organization has already established in Slack.

“Everything is user opt-in. That’s a key tenet of Slack,” Seaman said. “It’s not rogue looking at your desktop or autonomously looking at your desktop. It’s very important to us, and very important to our enterprise customers.” On Slackbot’s memory feature — which allows it to learn user preferences and habits over time — Seaman said the company has no plans to make that data available to administrators. Users can flush their stored preferences at any time simply by telling Slackbot to do so.

Slack’s native CRM is a Trojan horse designed to capture startups before they outgrow it

Among the most important features in Tuesday’s release is a native CRM built directly into Slack, targeting small businesses that haven’t yet adopted a dedicated customer relationship management system.

The logic is straightforward: small companies typically adopt Slack early in their lifecycle, often on the free tier, and their customer conversations already happen in channels and direct messages. Slack’s native CRM reads those channels, understands the conversations, and automatically keeps deals, contacts, and call notes up to date. When companies are ready to scale, every record is already connected to Salesforce — no migrations, no starting over.

“The hypothesis is that along the way, companies are effectively going to have moments where a CRM might matter,” Seaman said. “Our goal is to make it available to them as a default, so as they are starting their company and their company is growing, it’s just right there for them. They don’t have to think about going off and procuring another tool.”

The feature also represents a response to a growing competitive threat. As the Wall Street Journal reported earlier this year, a wave of startups and individual developers have begun “vibe coding” their own lightweight CRMs, emboldened by the capabilities of large language models. By embedding CRM directly into Slack — the tool many of those same startups already depend on — Salesforce aims to make the procurement of a separate system unnecessary.

Slack says it has a context advantage over Microsoft and Google — but can it last?

The announcements arrive at a moment of intense competitive pressure. Microsoft has integrated Copilot across its entire productivity suite, giving it a distribution advantage that reaches into virtually every Fortune 500 company. Google has been similarly aggressive with Gemini across Workspace. And standalone AI tools from OpenAI to Anthropic threaten to fragment the enterprise AI experience.

Seaman took a measured approach when asked directly about competitive positioning, invoking a mantra he said Slack uses internally: “We are competitor aware, but customer obsessed.”

“I think there are two things that really stand out. One, we have a context advantage — if you look at the way people use Slack, they love it. They use it so much, constantly communicating with their colleagues, openly thinking, working in public project channels. Two is the user experience. We focus so much on how our product feels in people’s hands.”

That context advantage is real but not guaranteed. Slack’s strength lies in the richness and volume of conversational data flowing through its channels — data that, when fed into an AI model, can produce responses with a degree of organizational awareness that competitors struggle to match. But Microsoft’s Teams captures similar conversational data, and its deep integration with Windows, Office, and Azure gives it a systems-level advantage that Slack, operating as a single application, cannot easily replicate.

Starting this summer, every new Salesforce customer will receive Slack automatically provisioned and AI-powered from day one — a bundling play that ensures the messaging platform reaches the broadest possible enterprise audience. Salesforce reported $41.5 billion in revenue for fiscal year 2026, up 10% year-over-year, with Agentforce ARR reaching $800 million. But Wall Street has remained skeptical about whether AI will ultimately erode demand for traditional enterprise software, and Salesforce’s stock has underperformed the broader Nasdaq over the past year. More Slack users in more organizations gives AI-driven features more surface area to prove their value.

Slack’s biggest bet is that it can do everything without losing the simplicity that made it beloved

Tuesday’s launch is the first major product release under Seaman’s leadership. He assumed the interim CEO role after former Slack CEO Denise Dresser departed in December 2025 to become OpenAI’s first chief revenue officer — a move that signaled even Salesforce’s own executives felt the gravitational pull of frontier AI companies. The overarching thesis embedded in the announcement — that Slack is evolving from a messaging platform into an operating system for AI agents — is as risky as it is ambitious.

“One of the fundamental tenets of an operating system is that it obscures the complexity of the hardware from the end user,” Seaman said. “There are thousands of apps and agents out there, and that can be overwhelming. I think that’s our job — to be the OS that obscures that complexity, so you just use it like it’s a communication tool.”

When asked whether Slack risks losing its simplicity by trying to do everything, Seaman didn’t flinch. “There’s absolutely a risk,” he said. “That’s what keeps us up at night.”

It’s a remarkably candid admission from the leader of a platform that just launched 30 new features in a single day. The company that won the hearts of millions of workers with playful emoji reactions and frictionless messaging is now betting its future on meeting transcription, CRM pipelines, desktop agents, and enterprise orchestration. Whether Slack can absorb all of that ambition without losing the thing that made people love it in the first place isn’t just a product question — it’s the $27.7 billion question that Salesforce is still trying to answer.

Apple Launches AirPods Max 2: Here’s What’s New — And What’s Not

Apple’s latest release is the newest pair of over-ear headphones: AirPods Max 2. Here’s all you need to know.

Claude Code’s source code appears to have leaked: here’s what we know

Anthropic appears to have accidentally revealed the inner workings of one of its most popular and lucrative AI products, the agentic AI harness Claude Code, to the public.

A 59.8 MB JavaScript source map file (.map), intended for internal debugging, was inadvertently included in version 2.1.88 of the @anthropic-ai/claude-code package, pushed live on the public npm registry earlier this morning.

By 4:23 am ET, Chaofan Shou (@Fried_rice), an intern at Solayer Labs, broadcast the discovery on X (formerly Twitter). The post, which included a direct download link to a hosted archive, acted as a digital flare. Within hours, the ~512,000-line TypeScript codebase was mirrored across GitHub and analyzed by thousands of developers.

For Anthropic, a company currently riding a meteoric rise with a reported $19 billion annualized revenue run-rate as of March 2026, the leak is more than a security lapse; it is a strategic hemorrhage of intellectual property. The timing is particularly critical given the commercial velocity of the product.

Market data indicates that Claude Code alone has achieved an annualized recurring revenue (ARR) of $2.5 billion, a figure that has more than doubled since the beginning of the year.

With enterprise adoption accounting for 80% of its revenue, the leak provides competitors—from established giants to nimble rivals like Cursor—a literal blueprint for how to build a high-agency, reliable, and commercially viable AI agent.

We’ve reached out to Anthropic for an official statement on the leak and will update when we hear back.

The anatomy of agentic memory

The most significant takeaway for competitors lies in how Anthropic solved “context entropy”—the tendency for AI agents to become confused or hallucinatory as long-running sessions grow in complexity.

The leaked source reveals a sophisticated, three-layer memory architecture that moves away from traditional “store-everything” retrieval.

As analyzed by developers like @himanshustwts, the architecture utilizes a “Self-Healing Memory” system.

At its core is MEMORY.md, a lightweight index of pointers (~150 characters per line) that is perpetually loaded into the context. This index does not store data; it stores locations.

Actual project knowledge is distributed across “topic files” fetched on-demand, while raw transcripts are never fully read back into the context, but merely “grep’d” for specific identifiers.

This “Strict Write Discipline”—where the agent must update its index only after a successful file write—prevents the model from polluting its context with failed attempts.
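Based on that description, and only on that description, a minimal sketch of the write-then-index discipline might look like the following. The file names and pointer format are assumptions; the leaked implementation is not reproduced here.

```typescript
import { appendFile, mkdir, writeFile } from "node:fs/promises";

// A minimal sketch of the write-then-index discipline described above.
// File names and pointer format are assumptions.

async function remember(topic: string, content: string): Promise<void> {
  await mkdir("memory/topics", { recursive: true });
  const topicPath = `memory/topics/${topic}.md`;

  // Step 1: write the actual knowledge to its on-demand topic file.
  await writeFile(topicPath, content); // throws on failure...

  // Step 2: ...so the pointer only lands in the perpetually-loaded index
  // after a successful write, keeping failed attempts out of the context.
  await appendFile("MEMORY.md", `- ${topic}: see ${topicPath}\n`);
}

remember("auth-flow", "Login uses OAuth; tokens are refreshed in middleware.");
```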

For competitors, the “blueprint” is clear: build a skeptical memory. The code confirms that Anthropic’s agents are instructed to treat their own memory as a “hint,” requiring the model to verify facts against the actual codebase before proceeding.

KAIROS and the autonomous daemon

The leak also pulls back the curtain on KAIROS, a feature flag named for the Ancient Greek concept of “the right time” and mentioned more than 150 times in the source. KAIROS represents a fundamental shift in user experience: an autonomous daemon mode.

While current AI tools are largely reactive, KAIROS allows Claude Code to operate as an always-on background agent. It handles background sessions and employs a process called autoDream.

In this mode, the agent performs “memory consolidation” while the user is idle. The autoDream logic merges disparate observations, removes logical contradictions, and converts vague insights into absolute facts.

This background maintenance ensures that when the user returns, the agent’s context is clean and highly relevant.

The implementation of a forked subagent to run these tasks reveals a mature engineering approach to preventing the main agent’s “train of thought” from being corrupted by its own maintenance routines.
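A rough illustration of that pattern in Node.js terms, with every name invented: maintenance runs in a forked child process, so the parent agent’s in-flight context is never touched mid-task.

```typescript
import { fork } from "node:child_process";

// Conceptual illustration of maintenance-in-a-subagent. The worker script
// name and timing are hypothetical; only the isolation pattern is the point.

function scheduleDream(idleMs: number): void {
  setTimeout(() => {
    const dreamer = fork("autoDream.js", [], { silent: true }); // hypothetical worker script
    dreamer.on("exit", (code) => {
      // The parent only ever sees the finished result: a consolidated
      // memory index on disk, with contradictions resolved by the worker.
      if (code === 0) console.log("memory consolidation complete");
    });
  }, idleMs);
}

scheduleDream(5 * 60_000); // e.g., kick off after five minutes of idle time
```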

Unreleased internal models and performance metrics

The source code provides a rare look at Anthropic’s internal model roadmap and the struggles of frontier development.

The leak confirms that Capybara is the internal codename for a Claude 4.6 variant, with Fennec mapping to Opus 4.6 and the unreleased Numbat still in testing.

Internal comments reveal that Anthropic is already iterating on Capybara v8, yet the model still faces significant hurdles. The code notes a 29-30% false claims rate in v8, a regression from the 16.7% rate seen in v4.

Developers also noted an “assertiveness counterweight” designed to prevent the model from becoming too aggressive in its refactors.

For competitors, these metrics are invaluable; they provide a benchmark of the “ceiling” for current agentic performance and highlight the specific weaknesses (over-commenting, false claims) that Anthropic is still struggling to solve.

“Undercover” Claude

Perhaps the most discussed technical detail is the “Undercover Mode.” This feature reveals that Anthropic uses Claude Code for “stealth” contributions to public open-source repositories.

The system prompt discovered in the leak explicitly warns the model: “You are operating UNDERCOVER… Your commit messages… MUST NOT contain ANY Anthropic-internal information. Do not blow your cover.”

While Anthropic may use this for internal “dog-fooding,” it provides a technical framework for any organization wishing to use AI agents for public-facing work without disclosure.

The logic ensures that no model names (like “Tengu” or “Capybara”) or AI attributions leak into public git logs—a capability that enterprise competitors will likely view as a mandatory feature for their own corporate clients who value anonymity in AI-assisted development.

The fallout has just begun

The “blueprint” is now out, and it reveals that Claude Code is not just a wrapper around a Large Language Model, but a complex, multi-threaded operating system for software engineering.

Even the hidden “Buddy” system—a Tamagotchi-style terminal pet with stats like CHAOS and SNARK—shows that Anthropic is building “personality” into the product to increase user stickiness.

For the wider AI market, the leak effectively levels the playing field for agentic orchestration.

Competitors can now study Anthropic’s 2,500+ lines of bash validation logic and its tiered memory structures to build “Claude-like” agents with a fraction of the R&D budget.

As the “Capybara” has left the lab, the race to build the next generation of autonomous agents has just received an unplanned, $2.5 billion boost in collective intelligence.

What Claude Code users and enterprise customers should do now about the alleged leak

While the source code leak itself is a major blow to Anthropic’s intellectual property, it poses a specific, heightened security risk for you as a user.

By exposing the “blueprints” of Claude Code, the leak hands researchers and bad actors a roadmap, and they are now actively looking for ways to bypass security guardrails and permission prompts.

Because the leak revealed the exact orchestration logic for Hooks and MCP servers, attackers can now design malicious repositories specifically tailored to “trick” Claude Code into running background commands or exfiltrating data before you ever see a trust prompt.

The most immediate danger, however, is a concurrent, separate supply-chain attack on the axios npm package, which occurred hours before the leak.

If you installed or updated Claude Code via npm on March 31, 2026, between 00:21 and 03:29 UTC, you may have inadvertently pulled in a malicious version of axios (1.14.1 or 0.30.4) that contains a Remote Access Trojan (RAT). You should immediately search your project lockfiles (package-lock.json, yarn.lock, or bun.lockb) for these specific versions or the dependency plain-crypto-js. If found, treat the host machine as fully compromised, rotate all secrets, and perform a clean OS reinstallation.
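For package-lock.json specifically, that check can be scripted. The sketch below walks the lockfile’s packages map (lockfile v2/v3 format) for the compromised versions named above; for yarn.lock or the binary bun.lockb, use your package manager’s own tooling (for example, yarn why axios) instead.

```typescript
import { readFile } from "node:fs/promises";

// Minimal scan of a package-lock.json for the compromised versions named
// in this article. This covers npm's v2/v3 lockfile format only.

const BAD_AXIOS = new Set(["1.14.1", "0.30.4"]);

async function scanLockfile(path = "package-lock.json"): Promise<void> {
  const lock = JSON.parse(await readFile(path, "utf8"));
  const packages: Record<string, { version?: string }> = lock.packages ?? {};

  for (const [name, meta] of Object.entries(packages)) {
    if (name.endsWith("node_modules/axios") && meta.version && BAD_AXIOS.has(meta.version)) {
      console.error(`COMPROMISED: axios@${meta.version} at ${name}`);
    }
    if (name.endsWith("node_modules/plain-crypto-js")) {
      console.error(`COMPROMISED: plain-crypto-js present at ${name}`);
    }
  }
}

scanLockfile();
```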

To mitigate future risks, you should migrate away from the npm-based installation entirely. Anthropic has designated the Native Installer (curl -fsSL https://claude.ai/install.sh | bash) as the recommended method because it uses a standalone binary that does not rely on the volatile npm dependency chain.

The native version also supports background auto-updates, ensuring you receive security patches (likely version 2.1.89 or higher) the moment they are released. If you must remain on npm, ensure you have uninstalled the leaked version 2.1.88 and pinned your installation to a verified safe version like 2.1.86.

Finally, adopt a zero trust posture when using Claude Code in unfamiliar environments. Avoid running the agent inside freshly cloned or untrusted repositories until you have manually inspected the .claude/config.json and any custom hooks.

As a defense-in-depth measure, rotate your Anthropic API keys via the developer console and monitor your usage for any anomalies. While your cloud-stored data remains secure, the vulnerability of your local environment has increased now that the agent’s internal defenses are public knowledge; staying on the official, native-installed update track is your best defense.

Meet The $580 Million Startup Making AI Models To Fight Artificial Hackers

AI cybersecurity firm Depthfirst has scored $120 million in funding to build a kind of “general security intelligence” that can defend against malicious AI.

Softr launches AI-native platform to help nontechnical teams build business apps without code

Softr, the Berlin-based no-code platform used by more than one million builders and 7,000 organizations including Netflix, Google, and Stripe, today launched what it calls an AI-native platform — a bet that the explosive growth of AI-powered app creation tools has produced a market full of impressive demos but very little production-ready business software.

The company’s new AI Co-Builder lets non-technical users describe in plain language the software they need, and the platform generates a fully integrated system — database, user interface, permissions, and business logic included — connected and ready for real-world deployment immediately. The move marks a fundamental evolution for a company that spent five years building a no-code business before layering AI on top of what it describes as a proven infrastructure of constrained, pre-built building blocks.

“Most AI app-builders stop at the shiny demo stage,” Softr Co-Founder and CEO Mariam Hakobyan told VentureBeat in an exclusive interview ahead of the launch. “A lot of the time, people generate calculators, landing pages, and websites — and there are a huge number of use cases for those. But there is no actual business application builder, which has completely different needs.”

The announcement arrives at a moment when the AI app-building market finds itself at an inflection point. A wave of so-called “vibe coding” platforms — tools like Lovable, Bolt, and Replit that generate application code from natural language prompts — have captured developer mindshare and venture capital over the past 18 months. But Hakobyan argues those tools fundamentally misserve the audience Softr is chasing: the estimated billions of non-technical business users inside companies who need custom operational software but lack the skills to maintain AI-generated code when it inevitably breaks.

Why AI-generated app prototypes keep failing when real business data is involved

The core tension Softr is trying to resolve is one that has plagued the AI app-building category since its inception: the gap between what looks good in a demo and what actually works when real users, real data, and real security requirements enter the picture.

Business software — client portals, CRMs, internal operational tools, inventory management systems — requires authentication, role-based permissions, database integrity, and workflow automation that must function reliably every single time. When an AI-generated prototype fails in these areas, fixing it typically requires a developer, which defeats the purpose of the no-code promise entirely.

“One prompt might break 10 previous steps that you’ve already completed,” Hakobyan said, describing the experience non-technical users face on vibe coding platforms. “You keep prompting, keep trying to fix errors that the AI generated, and you end up maintaining something you didn’t even sign up for in the first place.”

This critique targets a real structural limitation in how many AI app builders work today. Platforms that fully rely on AI to generate application code from scratch leave users with a codebase they cannot read, debug, or maintain without technical expertise. To connect those generated apps to real databases, login systems, or third-party services, users often must integrate tools like Supabase and make API calls — tasks that effectively require them to become developers. Softr’s position is that these platforms have replaced one form of coding with another, swapping programming languages for English-language prompts that carry all the same fragility.

How Softr’s building block architecture avoids the hallucination problem that plagues AI code generators

Rather than generating raw code, Softr’s platform uses what Hakobyan describes as “proven and structured building blocks” — pre-built components for standard application functions like Kanban boards, list views, tables, user authentication, and permissions. The AI interprets a user’s requirements, guides them through targeted questions about login functionality, permission types, and user roles, then assembles these tested building blocks in a constrained, intelligent way. Only when a user requests functionality that falls outside the standard 80% covered by these blocks does the system build a custom component with AI.
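As an illustration of the difference, consider the hypothetical spec below: instead of emitting arbitrary code, a constrained builder emits a configuration drawn from a fixed set of tested components. All type and field names are invented; Softr has not published its internal format.

```typescript
// Hypothetical illustration of the constrained building-block idea: the AI
// emits a spec assembled from tested components rather than raw code.
// Every name here is an assumption for illustration only.

type Block =
  | { kind: "table"; source: string; columns: string[] }
  | { kind: "kanban"; source: string; groupBy: string }
  | { kind: "list"; source: string; itemTitle: string };

interface AppSpec {
  auth: { required: boolean; roles: string[] };
  pages: { title: string; blocks: Block[]; visibleTo: string[] }[];
}

// What a co-builder might assemble from "build me a client portal with a
// project board that only clients can see":
const clientPortal: AppSpec = {
  auth: { required: true, roles: ["admin", "client"] },
  pages: [
    {
      title: "Projects",
      visibleTo: ["client"],
      blocks: [{ kind: "kanban", source: "projects", groupBy: "status" }],
    },
  ],
};

console.log(JSON.stringify(clientPortal, null, 2));
```

Because every block is a known quantity, the system can validate a spec like this before rendering it, which is the structural reason hallucinated code never reaches the user.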

“It basically never hallucinates, because it’s all built on an infrastructure that’s secure and constrained,” Hakobyan explained. “It doesn’t generate code or leave you with code, because underneath, it uses our existing building block model.”

The result is not a code repository. It is a live application running on Softr’s infrastructure, with a visual editor that users can continue to modify — either by prompting the AI further or by directly manipulating the no-code interface. This dual-editing model is a deliberate design decision that Hakobyan frames as the platform’s core differentiator. “It almost combines the best of both worlds of AI and no code, and really lets users to either continue iterating with AI or then continue working with the app visually, which is much simpler and easier and for them to have control,” she said.

Core platform foundations — authentication, user roles, permissions, hosting, and SSL — are built in from the start, eliminating what Hakobyan calls the “blank canvas problem” that plagues vibe coding platforms, where every user must architect fundamental application infrastructure from scratch via prompts. The platform uses a SaaS subscription pricing model, with each plan including a set number of AI credits and the option to purchase more — though the visual editor means users don’t always need to consume credits, since direct manipulation of the no-code interface is often faster and more precise.

Inside the five-year journey from Airtable interface to profitable AI-native platform

Softr’s journey to this moment has been a gradual, disciplined expansion that stands in contrast to the rapid fundraising cycles common among AI startups. The company launched in 2020 as a no-code interface layer on top of Airtable, the popular enterprise database product. Co-founded by Armenian entrepreneurs Hakobyan and CTO Artur Mkrtchyan, the startup raised a $2.2 million seed round in early 2021 led by Atlantic Labs, followed by a $13.5 million Series A in January 2022 led by FirstMark Capital.

What happened next is notable for its restraint. Softr has not raised additional capital since that 2022 Series A. Instead, it has grown to profitability. “We have been profitable for the past whole year, and we’re about 50 people team,” Hakobyan told VentureBeat. “We have grown to eight-digit revenue fully PLG, no sales team, mostly through word of mouth, organic growth.”

That financial profile — eight-figure annual revenue, profitable, 50 employees, no sales team — is striking in a market where many AI-powered competitors are spending heavily to acquire users. Over the past year, the company has steadily expanded its technical capabilities, moving beyond its original Airtable dependency to support Google Sheets, Notion, PostgreSQL, MySQL, MariaDB, and other databases.

In February 2025, TechCrunch reported on this expansion, with Hakobyan explaining that many potential customers had “data scattered across many different tools” and needed a single platform to unify that fragmented infrastructure. Today, Softr offers 15-plus native integrations with external databases, plus a REST API connector for additional data sources. The new AI Co-Builder represents the culmination of this multi-year evolution — combining the building block architecture, the broad data integration layer, and a new AI interface into a single platform for business application creation.

How Softr positions itself against both no-code incumbents and vibe coding startups

Softr’s launch lands in a rapidly fragmenting competitive landscape, and Hakobyan is deliberate about where she draws the lines. On one side sit traditional no-code platforms like Bubble, which offer deep customization and design freedom but require users to build everything from scratch — database schemas, pixel-level layouts, authentication systems — creating a steeper learning curve. A TechRadar review noted that while Softr’s blocks don’t offer the same design freedom as Bubble, the platform’s simplicity makes it accessible to genuinely non-technical users. In a comparison published by Business Insider Africa in June, Softr was characterized as offering “minimal learning curve, especially for internal or web-based tools,” though with limitations in scalability for more complex applications.

On the other side sit the AI-first code generation tools that Hakobyan views as fundamentally misaligned with business software requirements. “Before people were coding, then they were coding through APIs, now they are coding almost through a human language interface, right, just by with English,” Hakobyan said. “But what Softr does is fundamentally different. It abstracts all of that and makes the creation simple.”

She also distinguishes Softr from developer-focused AI coding assistants like Anthropic’s Claude Code, positioning those as tools that make professional developers more efficient rather than tools that enable non-developers to build software. “There are amazing tools for developers — that’s great. The target audience is developers,” Hakobyan acknowledged. Instead, Softr targets a specific and potentially enormous market: businesses that need custom internal and external-facing operational tools and currently rely on spreadsheets, email, or rigid off-the-shelf software that doesn’t match their actual processes. Hakobyan described use cases ranging from asset production workflows for film companies — where internal teams, external agencies, and approvers interact across a multi-stage process — to lightweight CRM replacements for teams that don’t need the full complexity of Salesforce. “There’s not even a vertical solution for this type of process,” she said. “It’s very custom to each organization.”

What Netflix, Google, and thousands of non-tech companies actually build on the platform

Many of Softr’s highest-profile customers — Netflix, Google, Stripe, UPS — were using the platform before the AI Co-Builder even existed, building on the company’s original no-code foundation. But the user base extends far beyond Silicon Valley. Non-tech organizations in real estate, manufacturing, and logistics represent a significant portion of Softr’s customer base — companies that often still manage core processes with pen, paper, and spreadsheets.

“A lot of these companies — you might think they already have the solutions, but they don’t,” Hakobyan noted. “In tech companies, most of the time, CRM and project management tools are already established. But most of our customers are using Softr for internal operational tooling or workflow tooling, where the use case involves lots of different departments and even external parties.”

The company is SOC 2 (Type II) compliant and GDPR compliant, with additional compliance capabilities in development. Hakobyan noted that auditing and governance functionality can be built directly into applications using the platform’s database and workflow tools, with a native logging and auditing system expected to ship in the near term. 

Softr’s billion-user ambition and the Canva analogy that explains its strategy

Softr’s stated mission — to empower billions of business users to create production-ready software — is audacious, but Hakobyan frames the AI Co-Builder launch as a fundamental acceleration of the trajectory the company has been on for five years. “Everything people would have to spend hours doing is done within five minutes,” she said. “And obviously that helps more people to actually build real software.”

The company plans to layer a product-led sales motion on top of its existing PLG engine, targeting larger enterprise customers with higher average contract values. This represents a deliberate strategic expansion from the small and mid-sized businesses that have formed Softr’s core customer base — a segment that TechCrunch identified as natural Softr customers as far back as the company’s 2022 Series A, given that those firms are most likely to be priced out of the competitive developer market.

Hakobyan draws an analogy that has apparently become common among the company’s users: Softr as “Canva for web apps.” Just as Canva made professional design accessible to non-designers, Softr aims to make business software creation accessible to non-developers. Whether the company can translate its disciplined growth and profitable foundation into a platform that genuinely serves that enormous addressable market remains to be seen. Softr faces intensifying competition from both traditional no-code incumbents adding AI capabilities and well-funded AI-native startups approaching the problem from the code-generation side.

But Softr enters this next phase with advantages that many competitors lack: a profitable business, a million-user base already shipping production software, and an architectural approach that treats AI as an accelerant layered on top of proven infrastructure rather than an unpredictable replacement for it. “No code alone had its own problems, and AI alone also just can’t do the job,” Hakobyan said. “The combination is what’s going to be making it really powerful.”

For the past five years, Softr bet that the hardest part of software wasn’t writing the code — it was getting the databases, permissions, and business logic right. Now the company is betting that in the age of AI, that conviction matters more than ever. The millions of business users who have never written a line of code but desperately need custom software are about to find out whether Softr is right.

Nvidia-backed ThinkLabs AI raises $28 million to tackle a growing power grid crunch

ThinkLabs AI, a startup building artificial intelligence models that simulate the behavior of the electric grid, announced today that it has closed a $28 million Series A financing round led by Energy Impact Partners (EIP), one of the largest energy transition investment firms in the world. Nvidia’s venture capital arm NVentures and Edison International, the parent company of Southern California Edison, also participated in the round.

The funding marks a significant escalation in the race to apply AI not just to software and content generation, but to the physical infrastructure that powers modern life. While most AI investment headlines have centered on large language models and generative tools, ThinkLabs is pursuing a different and arguably more consequential application: using physics-informed AI to model the behavior of electrical grids in real time, compressing engineering studies that once took weeks or months into minutes.

“We are dead focused on the grid,” ThinkLabs CEO Josh Wong told VentureBeat in an exclusive interview ahead of the announcement. “We do AI models to model the grid, specifically transmission and distribution power flow related modeling. We can calculate things like interconnection of large loads — like data centers or electric vehicle charging — and understand the impact they have on the grid.”

The round drew participation from a deep bench of returning investors, including GE Vernova, Powerhouse Ventures, Active Impact Investments, Blackhorn Ventures, and Amplify Capital, along with an unnamed large North American investor-owned utility. The company initially set out to raise less than $28 million, according to Wong, but strong demand from strategic partners pushed the round higher.

“This was way oversubscribed,” Wong said. “We attracted the right ecosystem partners and the right capital partners to grow with, and that’s how we ended up at $28 million.”

Why surging electricity demand is breaking the grid’s legacy planning tools

The timing of the raise is no coincidence. U.S. electricity demand is projected to grow 25% by 2030, according to consultancy ICF International, driven largely by AI data centers, electrified transportation, and the broader push toward building and vehicle electrification. That surge is crashing into a grid that was engineered decades ago for a fundamentally different set of demands — and utilities are scrambling to keep up.

The core problem is one of computational capacity. When a utility needs to understand what will happen to its grid if a large data center connects to a particular substation, or if a cluster of EV chargers goes live in a residential neighborhood, engineers must run power flow simulations — complex calculations that model how electricity moves through the network. Those studies have traditionally relied on legacy software tools from companies like Siemens, GE, and Schneider Electric, and they can take weeks or months to complete for a single scenario.

ThinkLabs’ approach replaces that bottleneck with physics-informed AI models that learn from the same engineering simulators but can then run orders of magnitude faster. According to the company, its platform can compress a month-long grid study into under three minutes and run 10 million scenarios in 10 minutes, while maintaining greater than 99.7% accuracy on grid power flow calculations.

Wong draws a sharp distinction between what ThinkLabs does and the generative AI models that dominate public discourse. “We’re not hallucinating the heck out of things,” he said. “We are talking about engineering calculations here. I would really compare this to a computation of fluid dynamics, or like F1 cars, or aerospace, or climate models. We do have a source of truth from existing physics-based engineering models.”

That source of truth is crucial. ThinkLabs trains its AI on the outputs of first-principles physics simulators — the same tools utilities already trust — and then validates its models against those simulators. The result, Wong argues, is an AI system that is not only fast but fully explainable and auditable, a critical requirement in an industry where a miscalculation can cause blackouts or damage physical infrastructure.

How ThinkLabs’ three-phase power flow analysis differs from every other grid AI startup

The competitive landscape for AI in grid management has grown crowded over the past two years, with startups and incumbents alike racing to apply machine learning to utility workflows. But Wong contends that ThinkLabs occupies a fundamentally different position from most of its competitors.

“As far as we know, we’re the only ones actually doing AI-native grid simulation analysis,” he said. “Others might be using AI for forecasting, load disaggregation, or local energy management, but fundamentally, they’re not calculating a power flow.”

What ThinkLabs performs is a full three-phase AC power flow analysis — examining every node and bus on the electric grid to determine real and reactive power levels, line flows, and voltages. This is the same type of analysis that utility engineers perform today using legacy tools, but ThinkLabs can deliver it at a speed and scale that those tools simply cannot match.
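For reference, in its balanced single-phase form this is the textbook nonlinear system such an analysis solves at every bus $i$ of an $N$-bus network (the three-phase version extends the same equations per phase):

$$
P_i = \sum_{k=1}^{N} |V_i|\,|V_k|\left(G_{ik}\cos\theta_{ik} + B_{ik}\sin\theta_{ik}\right),
\qquad
Q_i = \sum_{k=1}^{N} |V_i|\,|V_k|\left(G_{ik}\sin\theta_{ik} - B_{ik}\cos\theta_{ik}\right)
$$

Here $|V_i|$ and $\theta_i$ are the voltage magnitude and angle at bus $i$, $\theta_{ik} = \theta_i - \theta_k$, and $G_{ik} + jB_{ik}$ is an entry of the network admittance matrix. Legacy tools iterate on these equations numerically for every scenario; a learned surrogate that approximates the solutions directly is where the claimed speedup comes from.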

The distinction matters because utilities make capital investment decisions — worth billions of dollars — based on exactly these types of studies. If a power flow analysis shows that a proposed data center connection will overload a transmission line, the utility may need to build new infrastructure at enormous cost. But if the analysis can also suggest alternative solutions — battery storage placement, load flexibility scheduling, or topology optimization — the utility can potentially avoid or defer those capital expenditures.

“With many utilities, existing tools will basically show them all the problems, but they can only address solutions by trial and error,” Wong explained. “With AI, we can use reinforcement learning to generate more creative solutions, but also very effectively weigh the pros and cons of each of these solutions.”

Inside ThinkLabs’ strategic relationships with NVIDIA, Edison, and Microsoft

The presence of NVentures in the round — Nvidia’s venture arm does not write many checks — signals a deeper strategic relationship that extends well beyond capital. Wong confirmed that ThinkLabs works extensively within the Nvidia ecosystem on the energy and utility side, leveraging CUDA for GPU-accelerated computation and integrating Nvidia’s Earth-2 climate simulation platform into ThinkLabs’ probabilistic forecasting and risk-adjusted analysis pipelines.

“We are what one utility mentioned as the only high-intensity GPU workload for the OT side — the operational technology side — that’s planning and operations,” Wong said. He added that ThinkLabs is also in discussions with Nvidia’s Omniverse team about additional utility use cases, though those efforts are still early.

Edison International’s participation carries a different kind of strategic weight. In January 2026, ThinkLabs publicly announced results from a collaboration with Southern California Edison (SCE), Edison International’s utility subsidiary, that demonstrated the real-world capabilities of its platform. As the Los Angeles Times reported at the time, the collaboration showed that ThinkLabs’ AI could train in minutes per circuit, process a full year of hourly power-flow data in under three minutes across more than 100 circuits, and produce engineering reports with bridging-solution recommendations in under 90 seconds — work that previously required dedicated engineers an average of 30 to 35 days.

In today’s announcement, Edison International’s Sergej Mahnovski, Managing Director of Strategy, Technology and Innovation, reinforced that urgency: “We must rapidly transition from legacy planning tools and processes to meet the growing demands on the electric grid — new AI-native solutions are needed to transform our capabilities.”

ThinkLabs also works closely with Microsoft, which hosted a webinar in mid-2025 featuring Wong alongside representatives from Southern Company, EPRI, and Microsoft’s own energy team. The SCE collaboration was built on Microsoft Azure AI Foundry, situating ThinkLabs within the cloud infrastructure that many large utilities already use.

The 20-year career path that led from Toronto Hydro to an autonomous grid startup

Wong’s biography reads like a deliberate preparation for this exact moment. He has spent more than 20 years in the utility industry, starting his career at Toronto Hydro before founding Opus One Solutions in 2012 — a smart-grid software company that he grew to over 100 employees serving customers across eight countries before selling it to GE in 2022, as previously reported by BetaKit.

After the acquisition, Wong joined what became GE Vernova and was asked to develop the company’s “grid of the future” roadmap. The thesis he developed there — that the grid is the central bottleneck to economic growth, electrification, and national security, and that autonomous grid orchestration powered by AI is the solution — became the intellectual foundation for ThinkLabs.

“I was pulling together the thesis that we need to electrify, but the grid is really at the center of attention,” Wong said. “The conclusion is we need to drive towards greater autonomy. We talk a lot about autonomous cars, but I would argue that autonomous grids is the much more pressing priority.”

ThinkLabs was incubated inside GE Vernova and spun out as an independent company in April 2024, coinciding with a $5 million seed round co-led by Powerhouse Ventures and Active Impact Investments, as reported by GlobeNewswire at the time. GE Vernova remains a shareholder and strategic partner. Wong is the sole founder.

The team composition reflects the company’s dual identity. “Half of our team are power system PhDs, but the other half are the AI folks — people who have been looking at hyper-scalable AI infrastructure platforms and MLOps for other industries,” Wong said. “We have really been blending the two.”

How ThinkLabs doubled its utility customer base in a single quarter

Utilities are famously among the most conservative technology buyers in the world, with procurement cycles that can stretch years and layers of regulatory oversight that slow adoption. Wong acknowledges this reality but says the landscape is shifting faster than many observers realize.

“I have noticed sales cycles really accelerating,” he said. “It’s still long and depends on which utility and how big the deal is, but we have been witnessing firsthand sales cycles going from the traditional one to two years to a shortest two to three months.”

On the commercial side, Wong declined to share specific revenue figures but offered several data points that suggest meaningful traction. ThinkLabs is working with more than 10 utilities on AI-native grid simulation for planning and operations, he said, and the company doubled its customer accounts in the first quarter of 2026 alone.

“So not one or two, but we’re working with 10-plus utilities,” Wong said. “Things have really picked up pace even before this A round.”

The company primarily targets investor-owned utilities and system operators — the organizations that own and operate the grid — though Wong noted that AI is also beginning to democratize grid simulation capabilities for smaller utilities that previously lacked the engineering resources to run sophisticated analyses.

Wong said the primary use of funds will go toward advancing the product to enterprise grade and expanding the range of use cases the platform supports. The company sees a significant land-and-expand opportunity within individual utility accounts — moving from modeling a small region to training AI models across entire states or multi-state territories within a single customer.

EIP’s involvement as lead investor carries particular significance in this market. The firm is backed by more than half of North America’s investor-owned utilities, giving ThinkLabs a direct line into the executive suites of the customers it is trying to reach. “Utilities are being asked to add capacity on timelines the industry has never seen before, and the stakes extend far beyond the energy sector,” Sameer Reddy, Managing Partner at EIP, said in the press release.

What a 99.7% accuracy rate actually means for critical grid infrastructure

Any conversation about applying AI to critical infrastructure inevitably confronts the question of failure modes. A hallucination in a chatbot is an embarrassment; a miscalculation in a grid power flow analysis could contribute to equipment damage or widespread outages.

Wong addressed this head-on. The 99.7% accuracy figure, he explained, is an average across large-volume planning studies — specifically 8,760-hour analyses (every hour of the year) projected across three to 10 years with multiple sensitivity scenarios. For planning purposes, he argued, this level of accuracy is not only sufficient but may actually exceed what traditional methods deliver in practice.

“If you look at a source of truth, the data quality is actually the biggest limiting factor, not the accuracy of these AI models,” he said. “When we bring in traditional engineering analysis and actually snap it with telemetry — metering data, SCADA data — I would actually argue AI is far more accurate because it is data driven on actual measurements, rather than hypothetical planning analysis based on scenarios.”

For more critical real-time applications, ThinkLabs deploys what Wong called “hybrid models” that blend AI computation with traditional physics-based simulation. In the most stringent use cases, the AI handles roughly 99% of the computational workload before handing off to a physics-based engine for final validation — a technique Wong described as using AI to “warm start” the simulation.
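In code terms, the handoff Wong describes might look like the sketch below. The function bodies are stand-ins, not ThinkLabs’ implementation; the point is the division of labor between a fast surrogate and an auditable physics solver.

```typescript
// Conceptual sketch of the "warm start" handoff: an AI surrogate produces
// a near-solution, and a physics-based solver refines it to tolerance.
// All names and bodies are placeholders for illustration.

interface BusState { voltageMag: number; voltageAngle: number }

// Stand-in for the trained surrogate, which in reality predicts voltage
// magnitude and angle for every bus in milliseconds.
function aiSurrogate(busCount: number): BusState[] {
  return Array.from({ length: busCount }, () => ({ voltageMag: 1.0, voltageAngle: 0 }));
}

// Stand-in for the physics-based solver, which iteratively corrects the
// guess until power mismatches fall below tolerance. Starting near the
// solution means only a few iterations are needed, and the final answer
// remains auditable against first principles.
function physicsRefine(guess: BusState[], tolerance: number): BusState[] {
  return guess; // a real solver would iterate against the admittance matrix
}

function hybridPowerFlow(busCount: number): BusState[] {
  const warmStart = aiSurrogate(busCount); // AI does ~99% of the work
  return physicsRefine(warmStart, 1e-6);   // physics engine validates the last mile
}

console.log(hybridPowerFlow(3));
```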

The company also monitors for model drift and maintains strict training boundaries. “We’re not like ChatGPT training the internet here,” Wong said. “We’re training on the possibility of grid conditions. And if we do see a condition where we did not train, or outside of our training boundary, we can always run on-demand training on those certain solution spaces.”

Why ThinkLabs says its value proposition survives even if the data center boom slows down

The bullish case for ThinkLabs — and for grid-focused AI more broadly — rests heavily on the assumption that electricity demand will surge dramatically over the coming decade. But some analysts have begun questioning whether those projections are inflated, particularly if AI investment cycles cool and data center build-outs decelerate.

Wong argued that his company’s value proposition is resilient to that scenario. Even without dramatic load growth, he said, utilities face a fundamental modernization challenge. They have been using tools and processes from the 1990s and 2000s, and the workforce that knows how to operate those tools is retiring at an alarming rate.

“Workforce renewal is a big factor,” he said. “These AI tools not only modernize the tool itself, but also modernize culture and transformation and become major points of retention for the next generation.”

He also pointed to energy affordability as a driver that exists independent of load growth projections. If utilities continue to plan based on worst-case deterministic scenarios — building enough infrastructure to cover every conceivable contingency — consumer rates will become unmanageable. AI-powered probabilistic analysis, Wong argued, allows utilities to make smarter, more cost-effective decisions regardless of whether the most aggressive demand forecasts materialize.

“A large part of this AI is not only enabling workload, but how do we act with intelligence — going from worst-case to time-series analysis, from deterministic to probabilistic and stochastic analysis, and also coming up with solutions,” he said.

Wong frames the broader opportunity with an analogy that captures both the simplicity and the ambition of what ThinkLabs is attempting. For decades, he said, the utility industry’s default response to grid constraints has been the equivalent of building wider highways — more wires, more copper, more steel. ThinkLabs wants to be the navigation system that reroutes traffic instead.

“In the past, when we drive, we always drive with what we are familiar with — just the big roads,” he said. “But with AI, we can optimize the traffic patterns to drive on much more effective routes. In this case, it might be a mix of wires, flexibility, batteries, and operational decisions.”

Whether ThinkLabs can deliver on that vision at the scale the grid demands remains an open question. But Wong, who has spent two decades building and selling grid software companies, is not thinking in terms of incremental improvement. He sees a narrow window — measured in years, not decades — during which the foundational AI infrastructure for the grid will be built, and whoever builds it will shape the energy system for a generation.

“I truly believe the next two years of AI development for the grid will dictate the next decades of what can happen to the grid,” Wong said. “It’s really here now.”

The grid, in other words, is getting a copilot. The question is no longer whether utilities will trust AI with their most critical engineering decisions, but how quickly they can afford not to.