ThinkLabs AI, a startup building artificial intelligence models that simulate the behavior of the electric grid, announced today that it has closed a $28 million Series A financing round led by Energy Impact Partners (EIP), one of the largest energy transition investment firms in the world. Nvidia’s venture capital arm NVentures and Edison International, the parent company of Southern California Edison, also participated in the round.
The funding marks a significant escalation in the race to apply AI not just to software and content generation, but to the physical infrastructure that powers modern life. While most AI investment headlines have centered on large language models and generative tools, ThinkLabs is pursuing a different and arguably more consequential application: using physics-informed AI to model the behavior of electrical grids in real time, compressing engineering studies that once took weeks or months into minutes.
“We are dead focused on the grid,” ThinkLabs CEO Josh Wong told VentureBeat in an exclusive interview ahead of the announcement. “We do AI models to model the grid, specifically transmission and distribution power flow related modeling. We can calculate things like interconnection of large loads — like data centers or electric vehicle charging — and understand the impact they have on the grid.”
The round drew participation from a deep bench of returning investors, including GE Vernova, Powerhouse Ventures, Active Impact Investments, Blackhorn Ventures, and Amplify Capital, along with an unnamed large North American investor-owned utility. The company initially set out to raise less than $28 million, according to Wong, but strong demand from strategic partners pushed the round higher.
“This was way oversubscribed,” Wong said. “We attracted the right ecosystem partners and the right capital partners to grow with, and that’s how we ended up at $28 million.”
The timing of the raise is no coincidence. U.S. electricity demand is projected to grow 25% by 2030, according to consultancy ICF International, driven largely by AI data centers, electrified transportation, and the broader push toward building and vehicle electrification. That surge is crashing into a grid that was engineered decades ago for a fundamentally different set of demands — and utilities are scrambling to keep up.
The core problem is one of computational capacity. When a utility needs to understand what will happen to its grid if a large data center connects to a particular substation, or if a cluster of EV chargers goes live in a residential neighborhood, engineers must run power flow simulations — complex calculations that model how electricity moves through the network. Those studies have traditionally relied on legacy software tools from companies like Siemens, GE, and Schneider Electric, and they can take weeks or months to complete for a single scenario.
ThinkLabs’ approach replaces that bottleneck with physics-informed AI models that learn from the same engineering simulators but can then run orders of magnitude faster. According to the company, its platform can compress a month-long grid study into under three minutes and run 10 million scenarios in 10 minutes, while maintaining greater than 99.7% accuracy on grid power flow calculations.
Wong draws a sharp distinction between what ThinkLabs does and the generative AI models that dominate public discourse. “We’re not hallucinating the heck out of things,” he said. “We are talking about engineering calculations here. I would really compare this to a computation of fluid dynamics, or like F1 cars, or aerospace, or climate models. We do have a source of truth from existing physics-based engineering models.”
That source of truth is crucial. ThinkLabs trains its AI on the outputs of first-principles physics simulators — the same tools utilities already trust — and then validates its models against those simulators. The result, Wong argues, is an AI system that is not only fast but fully explainable and auditable, a critical requirement in an industry where a miscalculation can cause blackouts or damage physical infrastructure.
The competitive landscape for AI in grid management has grown crowded over the past two years, with startups and incumbents alike racing to apply machine learning to utility workflows. But Wong contends that ThinkLabs occupies a fundamentally different position from most of its competitors.
“As far as we know, we’re the only ones actually doing AI-native grid simulation analysis,” he said. “Others might be using AI for forecasting, load disaggregation, or local energy management, but fundamentally, they’re not calculating a power flow.”
What ThinkLabs performs is a full three-phase AC power flow analysis — examining every node and bus on the electric grid to determine real and reactive power levels, line flows, and voltages. This is the same type of analysis that utility engineers perform today using legacy tools, but ThinkLabs can deliver it at a speed and scale that those tools simply cannot match.
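To make the term concrete, here is a toy sketch of the simplest variant of this calculation, the linearized "DC" power flow, which solves for bus voltage angles and then derives line flows. It is purely illustrative: ThinkLabs solves the far harder nonlinear three-phase AC problem, and the three-bus network, line reactances, and injections below are invented for the example.

```typescript
// Toy DC (linearized) power flow: a drastically simplified cousin of the
// full three-phase AC analysis described above. Illustrative only.

type Line = { from: number; to: number; reactance: number };

// Solve B * theta = P for voltage angles at non-slack buses (bus 0 is slack).
function dcPowerFlow(nBuses: number, lines: Line[], injections: number[]): number[] {
  const n = nBuses - 1; // unknowns: angles at buses 1..nBuses-1
  const B: number[][] = Array.from({ length: n }, () => new Array(n).fill(0));
  const P = injections.slice(1);
  for (const { from, to, reactance } of lines) {
    const b = 1 / reactance; // line susceptance
    if (from > 0) B[from - 1][from - 1] += b;
    if (to > 0) B[to - 1][to - 1] += b;
    if (from > 0 && to > 0) {
      B[from - 1][to - 1] -= b;
      B[to - 1][from - 1] -= b;
    }
  }
  // Gaussian elimination (fine for a toy system)
  for (let col = 0; col < n; col++) {
    for (let row = col + 1; row < n; row++) {
      const f = B[row][col] / B[col][col];
      for (let k = col; k < n; k++) B[row][k] -= f * B[col][k];
      P[row] -= f * P[col];
    }
  }
  const angles = new Array(n).fill(0);
  for (let row = n - 1; row >= 0; row--) {
    let s = P[row];
    for (let k = row + 1; k < n; k++) s -= B[row][k] * angles[k];
    angles[row] = s / B[row][row];
  }
  return [0, ...angles]; // slack bus angle is the 0-radian reference
}

// Per-unit flow on each line: (theta_from - theta_to) / reactance
const lines: Line[] = [
  { from: 0, to: 1, reactance: 0.1 },
  { from: 1, to: 2, reactance: 0.2 },
  { from: 0, to: 2, reactance: 0.25 },
];
const theta = dcPowerFlow(3, lines, [0.8, -0.5, -0.3]); // slack generator, two loads
const flows = lines.map(l => (theta[l.from] - theta[l.to]) / l.reactance);
```

Even this toy version hints at why speed matters: a real interconnection study repeats a (nonlinear) version of this solve across thousands of hourly scenarios and contingencies.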
The distinction matters because utilities make capital investment decisions — worth billions of dollars — based on exactly these types of studies. If a power flow analysis shows that a proposed data center connection will overload a transmission line, the utility may need to build new infrastructure at enormous cost. But if the analysis can also suggest alternative solutions — battery storage placement, load flexibility scheduling, or topology optimization — the utility can potentially avoid or defer those capital expenditures.
“With many utilities, existing tools will basically show them all the problems, but they can only address solutions by trial and error,” Wong explained. “With AI, we can use reinforcement learning to generate more creative solutions, but also very effectively weigh the pros and cons of each of these solutions.”
The presence of NVentures in the round — Nvidia’s venture arm does not write many checks — signals a deeper strategic relationship that extends well beyond capital. Wong confirmed that ThinkLabs works extensively within the Nvidia ecosystem on the energy and utility side, leveraging CUDA for GPU-accelerated computation and integrating Nvidia’s Earth-2 climate simulation platform into ThinkLabs’ probabilistic forecasting and risk-adjusted analysis pipelines.
“We are what one utility mentioned as the only high-intensity GPU workload for the OT side — the operational technology side — that’s planning and operations,” Wong said. He added that ThinkLabs is also in discussions with Nvidia’s Omniverse team about additional utility use cases, though those efforts are still early.
Edison International’s participation carries a different kind of strategic weight. In January 2026, ThinkLabs publicly announced results from a collaboration with Southern California Edison (SCE), Edison International’s utility subsidiary, that demonstrated the real-world capabilities of its platform. As the Los Angeles Times reported at the time, the collaboration showed that ThinkLabs’ AI could train in minutes per circuit, process a full year of hourly power-flow data in under three minutes across more than 100 circuits, and produce engineering reports with bridging-solution recommendations in under 90 seconds — work that previously required dedicated engineers an average of 30 to 35 days.
In today’s announcement, Edison International’s Sergej Mahnovski, Managing Director of Strategy, Technology and Innovation, reinforced that urgency: “We must rapidly transition from legacy planning tools and processes to meet the growing demands on the electric grid — new AI-native solutions are needed to transform our capabilities.”
ThinkLabs also works closely with Microsoft, which hosted a webinar in mid-2025 featuring Wong alongside representatives from Southern Company, EPRI, and Microsoft’s own energy team. The SCE collaboration was built on Microsoft Azure AI Foundry, situating ThinkLabs within the cloud infrastructure that many large utilities already use.
Wong’s biography reads like a deliberate preparation for this exact moment. He has spent more than 20 years in the utility industry, starting his career at Toronto Hydro before founding Opus One Solutions in 2012 — a smart-grid software company that he grew to over 100 employees serving customers across eight countries before selling it to GE in 2022, as previously reported by BetaKit.
After the acquisition, Wong joined what became GE Vernova and was asked to develop the company’s “grid of the future” roadmap. The thesis he developed there — that the grid is the central bottleneck to economic growth, electrification, and national security, and that autonomous grid orchestration powered by AI is the solution — became the intellectual foundation for ThinkLabs.
“I was pulling together the thesis that we need to electrify, but the grid is really at the center of attention,” Wong said. “The conclusion is we need to drive towards greater autonomy. We talk a lot about autonomous cars, but I would argue that autonomous grids is the much more pressing priority.”
ThinkLabs was incubated inside GE Vernova and spun out as an independent company in April 2024, coinciding with a $5 million seed round co-led by Powerhouse Ventures and Active Impact Investments, as reported by GlobeNewswire at the time. GE Vernova remains a shareholder and strategic partner. Wong is the sole founder.
The team composition reflects the company’s dual identity. “Half of our team are power system PhDs, but the other half are the AI folks — people who have been looking at hyper-scalable AI infrastructure platforms and MLOps for other industries,” Wong said. “We have really been blending the two.”
Utilities are famously among the most conservative technology buyers in the world, with procurement cycles that can stretch years and layers of regulatory oversight that slow adoption. Wong acknowledges this reality but says the landscape is shifting faster than many observers realize.
“I have noticed sales cycles really accelerating,” he said. “It’s still long and depends on which utility and how big the deal is, but we have been witnessing firsthand sales cycles going from the traditional one to two years to a shortest two to three months.”
On the commercial side, Wong declined to share specific revenue figures but offered several data points that suggest meaningful traction. ThinkLabs is working with more than 10 utilities on AI-native grid simulation for planning and operations, he said, and the company doubled its customer accounts in the first quarter of 2026 alone.
“So not one or two, but we’re working with 10-plus utilities,” Wong said. “Things have really picked up pace even before this A round.”
The company primarily targets investor-owned utilities and system operators — the organizations that own and operate the grid — though Wong noted that AI is also beginning to democratize grid simulation capabilities for smaller utilities that previously lacked the engineering resources to run sophisticated analyses.
Wong said the primary use of funds will go toward advancing the product to enterprise grade and expanding the range of use cases the platform supports. The company sees a significant land-and-expand opportunity within individual utility accounts — moving from modeling a small region to training AI models across entire states or multi-state territories within a single customer.
EIP’s involvement as lead investor carries particular significance in this market. The firm is backed by more than half of North America’s investor-owned utilities, giving ThinkLabs a direct line into the executive suites of the customers it is trying to reach. “Utilities are being asked to add capacity on timelines the industry has never seen before, and the stakes extend far beyond the energy sector,” Sameer Reddy, Managing Partner at EIP, said in the press release.
Any conversation about applying AI to critical infrastructure inevitably confronts the question of failure modes. A hallucination in a chatbot is an embarrassment; a miscalculation in a grid power flow analysis could contribute to equipment damage or widespread outages.
Wong addressed this head-on. The 99.7% accuracy figure, he explained, is an average across large-volume planning studies — specifically 8,760-hour analyses (every hour of the year) projected across three to 10 years with multiple sensitivity scenarios. For planning purposes, he argued, this level of accuracy is not only sufficient but may actually exceed what traditional methods deliver in practice.
“If you look at a source of truth, the data quality is actually the biggest limiting factor, not the accuracy of these AI models,” he said. “When we bring in traditional engineering analysis and actually snap it with telemetry — metering data, SCADA data — I would actually argue AI is far more accurate because it is data driven on actual measurements, rather than hypothetical planning analysis based on scenarios.”
For more critical real-time applications, ThinkLabs deploys what Wong called “hybrid models” that blend AI computation with traditional physics-based simulation. In the most stringent use cases, the AI handles roughly 99% of the computational workload before handing off to a physics-based engine for final validation — a technique Wong described as using AI to “warm start” the simulation.
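The "warm start" idea can be sketched in miniature. In the toy below, a cheap surrogate guess (standing in for the AI model's prediction) is handed to a Newton iteration (standing in for the physics-based engine), which converges in far fewer steps than it would from a naive starting point. The hand-off details of ThinkLabs' actual hybrid models are not public; this only illustrates the general technique.

```typescript
// Warm starting in one dimension: a surrogate supplies an initial guess close
// to the answer; a physics-style Newton solver does the validated refinement.

function newton(
  f: (x: number) => number,
  df: (x: number) => number,
  x0: number
): { root: number; iters: number } {
  let x = x0;
  let iters = 0;
  while (Math.abs(f(x)) > 1e-12 && iters < 100) {
    x -= f(x) / df(x); // standard Newton step
    iters++;
  }
  return { root: x, iters };
}

const f = (x: number) => x * x - 2; // toy residual, root at sqrt(2)
const df = (x: number) => 2 * x;

const cold = newton(f, df, 100);   // naive starting point: many iterations
const surrogateGuess = 1.4;        // cheap approximate answer from a "model"
const warm = newton(f, df, surrogateGuess); // refine from the surrogate
// Both converge to sqrt(2); the warm start needs far fewer solver steps.
```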
The company also monitors for model drift and maintains strict training boundaries. “We’re not like ChatGPT training the internet here,” Wong said. “We’re training on the possibility of grid conditions. And if we do see a condition where we did not train, or outside of our training boundary, we can always run on-demand training on those certain solution spaces.”
The bullish case for ThinkLabs — and for grid-focused AI more broadly — rests heavily on the assumption that electricity demand will surge dramatically over the coming decade. But some analysts have begun questioning whether those projections are inflated, particularly if AI investment cycles cool and data center build-outs decelerate.
Wong argued that his company’s value proposition is resilient to that scenario. Even without dramatic load growth, he said, utilities face a fundamental modernization challenge. They have been using tools and processes from the 1990s and 2000s, and the workforce that knows how to operate those tools is retiring at an alarming rate.
“Workforce renewal is a big factor,” he said. “These AI tools not only modernize the tool itself, but also modernize culture and transformation and become major points of retention for the next generation.”
He also pointed to energy affordability as a driver that exists independent of load growth projections. If utilities continue to plan based on worst-case deterministic scenarios — building enough infrastructure to cover every conceivable contingency — consumer rates will become unmanageable. AI-powered probabilistic analysis, Wong argued, allows utilities to make smarter, more cost-effective decisions regardless of whether the most aggressive demand forecasts materialize.
“A large part of this AI is not only enabling workload, but how do we act with intelligence — going from worst-case to time-series analysis, from deterministic to probabilistic and stochastic analysis, and also coming up with solutions,” he said.
Wong frames the broader opportunity with an analogy that captures both the simplicity and the ambition of what ThinkLabs is attempting. For decades, he said, the utility industry’s default response to grid constraints has been the equivalent of building wider highways — more wires, more copper, more steel. ThinkLabs wants to be the navigation system that reroutes traffic instead.
“In the past, when we drive, we always drive with what we are familiar with — just the big roads,” he said. “But with AI, we can optimize the traffic patterns to drive on much more effective routes. In this case, it might be a mix of wires, flexibility, batteries, and operational decisions.”
Whether ThinkLabs can deliver on that vision at the scale the grid demands remains an open question. But Wong, who has spent two decades building and selling grid software companies, is not thinking in terms of incremental improvement. He sees a narrow window — measured in years, not decades — during which the foundational AI infrastructure for the grid will be built, and whoever builds it will shape the energy system for a generation.
“I truly believe the next two years of AI development for the grid will dictate the next decades of what can happen to the grid,” Wong said. “It’s really here now.”
The grid, in other words, is getting a copilot. The question is no longer whether utilities will trust AI with their most critical engineering decisions, but how quickly they can afford not to.
For three decades, the web has existed in a state of architectural denial. It is a platform originally conceived to share static physics papers, yet it is now tasked with rendering the most complex, interactive, and generative interfaces humanity has ever conceived.
At the heart of this tension lies a single, invisible, and prohibitively expensive operation known as “layout reflow.” Whenever a developer needs to know the height of a paragraph or the position of a line to build a modern interface, they must ask the browser’s Document Object Model (DOM), the standard by which developers can create and modify webpages.
In response, the browser often has to recalculate the geometry of the entire page — a process akin to a city being forced to redraw its entire map every time a resident opens their front door.
Last Friday, March 27, 2026, Cheng Lou — a prominent software engineer whose work on React, ReScript, and Midjourney has defined much of the modern frontend landscape — announced on the social network X that he had “crawled through depths of hell” to release an open source (MIT License) solution: Pretext, which he coded using AI vibe coding tools and models like OpenAI’s Codex and Anthropic’s Claude.
It is a 15KB, zero-dependency TypeScript library that allows for multiline text measurement and layout entirely in “userland,” bypassing the DOM and its performance bottlenecks.
In short, Lou’s Pretext turns text blocks on the web into fully dynamic, interactive, and responsive spaces, able to adapt and move smoothly around any other object on a webpage, preserving letter order and the spaces between words and lines, even when a user drags other objects into the text or resizes the browser window dramatically.
Ironically, it’s difficult with mere text alone to convey how significant Lou’s latest release is for the entire web going forward. Fortunately, other third-party developers whipped up quick demos with Pretext showing off some of its more impressive powers, including a dragon that flies around within a block of text, breathing fire as the surrounding characters melt and are pushed out of the way by the dragon’s undulating form.
Another developer made an app that requires the user to hold their smartphone exactly level and horizontal to read the text — tipping the device to one side or the other causes all the letters to fall off and collect there, as though each were a physical object dumped off the surface of a flat tray. Someone even coded a web app allowing you to watch a whole movie (the new Project Hail Mary starring Ryan Gosling) while reading the book it is based on at the same time, all rendered out of interactive, moving, fast, responsive text.
While some detractors immediately pointed out that many of these flashy demos make the underlying text unreadable or illegible, they’re missing the larger point: with Pretext, one man (Lou) using AI vibe coding tools has singlehandedly revolutionized what’s possible for everyone and anyone to do when it comes to web design and interactivity. The project hasn’t even been out a week — of course the initial users are only scratching the surface of the newfound capabilities which heretofore required complex, custom instructions and could not be scaled or generalized.
Of course, designers and typographers may be the ones most immediately impressed and affected by the advance — but really, anyone who has spent time trying to lay out a block of text and wrap it around images or other embedded, interactive elements on a webpage is probably going to be interested in this. But anyone who uses the web — all 6 billion and counting of us — will likely experience some of the effects of this release before too long as it spreads to the sites we visit and use daily.
And already, some developers are working on more useful features with it, like a custom user-controlled font resizer and letter-spacing optimizer for readers with dyslexia.
With that in mind, perhaps it is not surprising to learn that within 48 hours, the project garnered more than 14,000 GitHub stars and 19 million views on X, signaling what many believe to be a foundational shift in how we build the internet.
It also demonstrates that AI-assisted coding has moved beyond generating boilerplate to delivering fundamental architectural breakthroughs. For enterprises, this signals a new era in which high-leverage engineering teams can use AI to build bespoke, high-performance infrastructure that bypasses decades-old platform constraints, effectively decoupling product innovation from the slow cycle of industry-wide browser standardization.
To understand why Pretext matters, one must understand the high cost of “measuring” things on the web. Standard browser APIs like getBoundingClientRect or offsetHeight are notorious for triggering layout thrashing.
In a modern interface—think of a masonry grid of thousands of text boxes or a responsive editorial spread—these measurements happen in the “hot path” of rendering. If the browser has to stop and calculate layout every time the user scrolls or an AI generates a new sentence, the frame rate drops, the battery drains, and the experience stutters.
Lou’s insight with Pretext was to decouple text layout from the DOM entirely. By using the browser’s Canvas font metrics engine as a “ground truth” and combining it with pure arithmetic, Pretext can predict exactly where every character, word, and line will fall without ever touching a DOM node.
The performance delta is staggering. According to project benchmarks, Pretext’s layout() function can process a batch of 500 different texts in approximately 0.09ms. Compared to traditional DOM reads, this represents a 300–600x performance increase. This speed transforms layout from a heavy, asynchronous chore into a synchronous, predictable primitive—one that can run at 120fps even on mobile devices.
The elegance of Pretext lies in its two-stage execution model, designed to maximize efficiency:
prepare(text, font): This is the one-time “heavy lifting” phase. The library normalizes whitespace, segments the text, applies language-specific glue rules, and measures segments using the canvas. This result is cached as an opaque data structure.
layout(preparedData, maxWidth, lineHeight): This is the “hot path”. It is pure arithmetic that takes the prepared data and calculates heights or line counts based on a given width.
Because layout() is just math, it can be called repeatedly during a window resize or a physics simulation without any performance penalty. It supports complex typographic needs that were previously impossible to handle efficiently in userland:
Mixed-bidirectional (bidi) text: Handling English, Arabic, and Korean in the same sentence without breaking layout.
Grapheme-aware breaking: Ensuring that emojis or complex character clusters are not split across lines.
Whitespace control: Preserving tabs and hard breaks for code or poetry using white-space: pre-wrap logic.
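The two-stage pattern described above can be sketched in a few dozen lines. The version below fakes the expensive measurement step with a fixed per-character width so it can run anywhere; Pretext's real prepare() step measures via canvas font metrics, and the exact function signatures here are illustrative, not Pretext's actual API.

```typescript
// A minimal sketch of the prepare()/layout() split: measure once (expensive),
// then lay out with pure arithmetic (cheap) at any width. The per-character
// width is a stand-in for real canvas measurement; names are illustrative.

type Prepared = { words: string[]; widths: number[]; spaceWidth: number };

function prepare(text: string, charWidth = 8): Prepared {
  const words = text.trim().split(/\s+/); // normalize whitespace
  const widths = words.map(w => w.length * charWidth); // stand-in for canvas metrics
  return { words, widths, spaceWidth: charWidth };
}

// Hot path: greedy line breaking with no DOM access — just addition.
function layout(
  prep: Prepared,
  maxWidth: number,
  lineHeight: number
): { lines: number; height: number } {
  let lines = 1;
  let x = 0;
  for (const w of prep.widths) {
    const needed = x === 0 ? w : x + prep.spaceWidth + w;
    if (needed > maxWidth && x > 0) {
      lines++; // word does not fit: start a new line with it
      x = w;
    } else {
      x = needed;
    }
  }
  return { lines, height: lines * lineHeight };
}

const prep = prepare("the quick brown fox jumps over the lazy dog");
// layout() can now be called every frame at different widths essentially for free
const narrow = layout(prep, 120, 20);
const wide = layout(prep, 400, 20);
```

Because the hot-path function touches only cached numbers, it can be called inside a resize handler or physics tick without ever triggering the browser's layout engine.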
The technical challenge of Pretext wasn’t just writing the math; it was ensuring that the math matched the “ground truth” of how various browsers (Chrome, Safari, Firefox) actually render text. Text rendering is notoriously riddled with quirks, from how different engines handle kerning to the specifics of line-breaking heuristics.
Lou revealed that the library was built using an “AI-friendly iteration method”. By iteratively prompting models like Claude and Codex to reconcile TypeScript layout logic against actual browser rendering on massive corpora—including the full text of The Great Gatsby and diverse multilingual datasets—he was able to achieve pixel-perfect accuracy without the need for heavy WebAssembly (WASM) binaries or font-parsing libraries.
The release of Pretext immediately manifested as a series of radical experiments across X and the broader developer community. The original demos showcased by Lou on X provided a glimpse into a new world:
The editorial engine: A multi-column magazine layout where text flows around draggable orbs, reflowing in real-time at 60fps.
Masonry virtualization: A demo displaying hundreds of thousands of variable-height text boxes. Height prediction is reduced to a linear traversal of cached heights.
Shrinkwrapped bubbles: Chat bubbles that calculate the tightest possible width for multiline text, eliminating wasted area.
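The virtualization trick behind the masonry demo is worth spelling out: once every item's height is known up front (which is exactly what cheap text layout makes possible), scroll-position lookups reduce to a prefix-sum array plus a binary search. The sketch below is a generic illustration of that pattern, not code from any of the demos.

```typescript
// Virtualized-list height lookup via prefix sums. With all item heights
// precomputed, finding the first visible item is O(log N). Illustrative only.

function buildOffsets(heights: number[]): number[] {
  const offsets = new Array(heights.length + 1);
  offsets[0] = 0;
  for (let i = 0; i < heights.length; i++) offsets[i + 1] = offsets[i] + heights[i];
  return offsets; // offsets[i] = y-position where item i starts
}

// Binary search: index of the first item whose bottom edge is below scrollTop.
function firstVisible(offsets: number[], scrollTop: number): number {
  let lo = 0;
  let hi = offsets.length - 2;
  while (lo < hi) {
    const mid = (lo + hi) >> 1;
    if (offsets[mid + 1] <= scrollTop) lo = mid + 1;
    else hi = mid;
  }
  return lo;
}

const heights = [40, 120, 80, 60, 200]; // per-item heights from the layout step
const offsets = buildOffsets(heights);  // [0, 40, 160, 240, 300, 500]
const idx = firstVisible(offsets, 250); // item 3 spans y = 240..300
```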
The community response was equally explosive. Within 72 hours, developers began pushing the boundaries:
@yiningkarlli implemented the Knuth-Plass paragraph justification algorithm, bringing high-end print typography—reducing “rivers” of white space by evaluating entire paragraphs as units—to the web.
@Talsiach built “X Times,” an AI-powered newspaper that uses Grok to analyze images and X posts, using Pretext to instantly layout a front-page reflow.
@Kaygeeartworks demonstrated a Three.js fluid simulation featuring fish swimming through and around text elements, with the text reacting to physics at high frame rates.
@KageNoCoder launched Pretext-Flow, a live playground for flowing text around custom media like transparent PNGs or videos.
@cocktailpeanut and @stevibe demonstrated ASCII art Snake and Hooke’s Law physics with live text reflow.
@kho built a BioMap visualization with 52 biomarker blocks performing layout reflow at 0.04ms every frame.
The response to Pretext was overwhelmingly enthusiastic from frontend luminaries. Guillermo Rauch, CEO of Vercel, and Ryan Florence of Remix praised the library’s performance gains. Tay Zonday noted the potential for neurodiverse high-speed reading through dynamic text rasterization.
However, the release also ignited a nuanced debate about the future of web standards. Critics warned of “thick client” overreach, arguing that bypassing the DOM moves us away from the simplicity of hypermedia systems. Lou’s response was a meditation on the lineage of computing. He pointed to the evolution of Apple’s operating systems, whose rendering stack traces back to Display PostScript, a static format for printers, and evolved into a polished, scriptable platform. The web, Lou argues, has remained stuck in a “document format” mindset, layering scripting on top of a static core until complexity reached a point of diminishing returns. Pretext is an attempt to restart that conversation, treating layout as an interpreter—a set of functions that developers can manipulate—rather than a black-box data format managed by the browser.
Pretext is released under the MIT License, ensuring it remains a public utility for the developer community and commercial enterprises alike. It is not merely a library for making chat bubbles look better; it is an infrastructure-level tool that decouples the visual presentation of information from the architectural constraints of the 1990s web.
By solving the last and biggest bottleneck of text measurement, Lou has provided a path for the web to finally compete with native platforms in terms of fluidity and expressiveness. Whether it is used for high-end editorial design, 120fps virtualized feeds, or generative AI interfaces, Pretext marks the moment when text on the web stopped being a static document and became a truly programmable medium.
Organizations should adopt Pretext immediately if they are building “Generative UI” or high-frequency data dashboards, but they should do so with a clear understanding of the “thick client” trade-off.
Why adopt: The move from O(N) to O(log N) or O(1) layout performance is not an incremental update; it is an architectural unlock. If your product involves a chat interface that stutters during long responses or a masonry grid that “jumps” as it calculates heights, Pretext is the solution. It allows you to build interfaces that feel as fast as the underlying models are becoming.
What to be aware of: Adoption requires a specialized talent pool. This isn’t “just CSS” anymore; it’s typography-aware engineering. Organizations must also be aware that by moving layout into userland, they become the “stewards” of accessibility and standard behavior that the browser used to handle for free.
Ultimately, Pretext is the first major step toward a web that feels more like a game engine and less like a static document. Organizations that embrace this “interpreter” model of layout will be the ones that define the visual language of the AI era.