OpenAI on Wednesday released GPT-5.3-Codex, which the company calls its most capable coding agent to date, in an announcement timed to land at the exact same moment Anthropic unveiled its own flagship model upgrade, Claude Opus 4.6. The synchronized launches mark the opening salvo in what industry observers are calling the AI coding wars — a high-stakes battle to capture the enterprise software development market.
The dueling announcements came amid an already heated week between the two AI giants, who are also set to air competing Super Bowl advertisements on Sunday, and whose executives have been trading barbs publicly over business models, access, and corporate ethics.
“I love building with this model; it feels like more of a step forward than the benchmarks suggest,” OpenAI CEO Sam Altman wrote on X minutes after the launch. He later added: “It was amazing to watch how much faster we were able to ship 5.3-Codex by using 5.3-Codex, and for sure this is a sign of things to come.”
That claim — that the model helped build itself — is a significant milestone in AI development. According to OpenAI’s announcement, the Codex team used early versions of GPT-5.3-Codex to debug its own training runs, manage deployment infrastructure, and diagnose test results and evaluations. The company describes it as “our first model that was instrumental in creating itself.”
The new model posts substantial gains across multiple industry benchmarks. GPT-5.3-Codex achieves 57% on SWE-Bench Pro, a rigorous evaluation of real-world software engineering that spans four programming languages and tests contamination-resistant, industrially relevant challenges. It scores 77.3% on Terminal-Bench 2.0, which measures the terminal skills essential for coding agents, and 64% on OSWorld, an agentic computer-use benchmark where models must complete productivity tasks in visual desktop environments.
The Terminal-Bench 2.0 result is particularly striking. According to performance data released Wednesday, GPT-5.3-Codex scored 77.3% compared to GPT-5.2-Codex’s 64.0% and the base GPT-5.2 model’s 62.2% — a 13-percentage-point leap in a single generation. One user on X noted that the score “absolutely demolished” Anthropic’s Opus 4.6, which reportedly achieved 65.4% on the same benchmark.
OpenAI also claims the model accomplishes these results with dramatically improved efficiency: fewer than half as many tokens as its predecessor for equivalent tasks, and more than 25% faster inference per token.
“Notably, GPT-5.3-Codex does so with fewer tokens than any prior model, letting users simply build more,” the company stated in its announcement.
Perhaps more significant than the benchmark improvements is OpenAI’s positioning of GPT-5.3-Codex as a model that transcends pure coding. The company explicitly states that “Codex goes from an agent that can write and review code to an agent that can do nearly anything developers and professionals can do on a computer.”
This expanded capability set includes debugging, deploying, monitoring, writing product requirement documents, editing copy, conducting user research, building slide decks, and analyzing data in spreadsheet applications. The model shows strong performance on GDPVal, an OpenAI evaluation released in 2025 that measures performance on well-specified knowledge-work tasks across 44 occupations.
The expansion signals OpenAI’s ambition to capture not just the developer tools market but the broader enterprise productivity software space — a market that includes established players like Microsoft, Salesforce, and ServiceNow, all of whom are racing to embed AI agents into their platforms.
The pivot toward general-purpose computing brings new security considerations. In a notable disclosure, OpenAI revealed that GPT-5.3-Codex is the first model it classifies as “High capability” for cybersecurity-related tasks under its Preparedness Framework, and the first directly trained to identify software vulnerabilities.
“While we don’t have definitive evidence it can automate cyber attacks end-to-end, we’re taking a precautionary approach and deploying our most comprehensive cybersecurity safety stack to date,” the company stated. Mitigations include dual-use safety training, automated monitoring, trusted access for advanced capabilities, and enforcement pipelines incorporating threat intelligence.
Altman highlighted this development on X: “This is our first model that hits ‘high’ for cybersecurity on our preparedness framework. We are piloting a Trusted Access framework, and committing $10 million in API credits to accelerate cyber defense.”
The company is also expanding the private beta of Aardvark, its security research agent, and partnering with open-source maintainers to provide free codebase scanning for widely used projects. OpenAI cited Next.js as an example where a security researcher used Codex to discover vulnerabilities disclosed last week.
The cybersecurity announcement, however, has been overshadowed by the increasingly personal nature of the OpenAI-Anthropic rivalry. The timing of Wednesday’s release cannot be understood without the context of OpenAI’s intensifying competition with Anthropic, the AI safety-focused startup founded in 2021 by former OpenAI researchers, including Dario and Daniela Amodei.
Both companies scheduled major product announcements for 10 a.m. Pacific Time today. Anthropic unveiled Claude Opus 4.6, which it describes as its “smartest model” that “plans more carefully, sustains agentic tasks for longer, operates reliably in massive codebases, and catches its own mistakes.”
The head-to-head timing follows a week of escalating tensions. Anthropic announced it will air Super Bowl advertisements mocking OpenAI’s recent decision to begin testing ads within ChatGPT for free users.
Altman responded with unusual directness, calling the advertisements “funny” but “clearly dishonest” in an extensive X post.
“We would obviously never run ads in the way Anthropic depicts them. We are not stupid and we know our users would reject that,” Altman wrote. “I guess it’s on brand for Anthropic doublespeak to use a deceptive ad to critique theoretical deceptive ads that aren’t real, but a Super Bowl ad is not where I would expect it.”
He went further, characterizing Anthropic as an “authoritarian company” that “wants to control what people do with AI.”
“Anthropic serves an expensive product to rich people,” Altman wrote. “More Texans use ChatGPT for free than total people use Claude in the US, so we have a differently-shaped problem than they do.”
The public sparring masks a deadly serious business competition. The rivalry plays out against a backdrop of explosive enterprise AI adoption, where both companies are fighting for position in a rapidly expanding market.
According to survey data from Andreessen Horowitz released this week, enterprise spending on large language models has dramatically outpaced even bullish projections. Average enterprise LLM spending reached $7 million in 2025, 180% higher than 2024’s actual spending of $2.5 million — and 56% above what enterprises had projected for 2025 just a year earlier. Spending is projected to reach $11.6 million per enterprise in 2026, a further 65% increase.
The a16z data reveals shifting market dynamics that help explain the intensity of the competition. OpenAI maintains the largest average share of enterprise AI wallet, but that share is shrinking — from 62% in 2024 to a projected 53% in 2026. Anthropic’s share, meanwhile, has grown from 14% to a projected 18% over the same period, with Google showing similar gains.
Enterprise adoption patterns tell a more nuanced story. While OpenAI leads in overall usage, only 46% of surveyed OpenAI customers are using its most capable models in production, compared to 75% for Anthropic and 76% for Google. When including testing environments, 89% of Anthropic customers are testing or using the company’s most capable models — the highest rate among major providers.
For software development specifically — one of the primary use cases for both companies’ coding agents — the a16z survey shows OpenAI with approximately 35% market share, with Anthropic claiming a substantial and growing portion of the remainder.
These market dynamics explain why both companies are positioning themselves as platforms rather than mere model providers. OpenAI on Wednesday also launched Frontier, a new platform designed to serve as a comprehensive hub for businesses adopting a range of AI tools — including those developed by third parties — that can operate together seamlessly.
“We can be the partner of choice for AI transformation for enterprise. The sky is the limit in terms of revenue we can generate from a platform like that,” Fidji Simo, OpenAI’s CEO of applications, told reporters this week.
This follows Monday’s launch of the Codex desktop application for macOS, which OpenAI says has already surpassed 500,000 downloads. The app enables users to manage multiple AI coding agents simultaneously — a capability that becomes increasingly important as enterprises deploy agents for complex, long-running tasks.
The platform ambitions require extraordinary capital. The dueling launches underscore the staggering financial requirements of frontier AI development, with both companies burning through billions while racing to establish market dominance.
Anthropic is currently in discussions for a funding round that could bring in more than $20 billion at a valuation of at least $350 billion, according to Bloomberg, and is simultaneously planning an employee tender offer at that valuation.
OpenAI, meanwhile, has disclosed that it owes more than $1 trillion in financial obligations to backers — including Oracle, Microsoft, and Nvidia — that are essentially fronting compute costs in expectation of future returns.
GPT-5.3-Codex was “co-designed for, trained with, and served on NVIDIA GB200 NVL72 systems,” according to OpenAI’s announcement—a reference to Nvidia’s latest Blackwell-generation AI supercomputing architecture.
The financial pressure adds urgency to both companies’ enterprise strategies. Unlike established tech giants with diversified revenue streams, both Anthropic and OpenAI must prove they can generate sufficient revenue from AI products to justify their extraordinary valuations and infrastructure costs.
Looking ahead, OpenAI says GPT-5.3-Codex is available immediately for paid ChatGPT users across all Codex surfaces: the desktop app, command-line interface, IDE extensions, and web interface. API access is expected to follow.
The model includes a new interactivity feature: users can choose between “pragmatic” or “friendly” personalities — a customization Altman suggests users feel strongly about. More substantively, the model provides frequent progress updates during tasks, allowing users to interact in real time, ask questions, discuss approaches, and steer toward solutions without losing context.
“Instead of waiting for a final output, you can interact in real time,” OpenAI stated. “GPT-5.3-Codex talks through what it’s doing, responds to feedback, and keeps you in the loop from start to finish.”
The company promises more capabilities in the coming weeks, with Altman declaring: “I believe Codex is going to win.”
He concluded his response to Anthropic with a philosophical statement that frames the competition in stark terms: “This time belongs to the builders, not the people who want to control them.”
Whether that message resonates with enterprise customers — who according to a16z data cite trust, security, and compliance as their top concerns — remains to be seen. What’s clear is that the AI coding wars have begun in earnest, and neither company intends to cede ground.
Anthropic on Thursday released Claude Opus 4.6, a major upgrade to its flagship artificial intelligence model that the company says plans more carefully, sustains longer autonomous workflows, and outperforms competitors including OpenAI’s GPT-5.2 on key enterprise benchmarks — a release that arrives at a tumultuous moment for the AI industry and global software markets.
The launch comes just three days after OpenAI released its own Codex desktop application in a direct challenge to Anthropic’s Claude Code momentum, and amid a $285 billion rout in software and services stocks that investors attribute partly to fears that Anthropic’s AI tools could disrupt established enterprise software businesses.
For the first time, Anthropic’s Opus-class models will feature a 1 million token context window, allowing the AI to process and reason across vastly more information than previous versions. The company also introduced “agent teams” in Claude Code — a research preview feature that enables multiple AI agents to work simultaneously on different aspects of a coding project, coordinating autonomously.
“We’re focused on building the most capable, reliable, and safe AI systems,” an Anthropic spokesperson told VentureBeat about the announcements. “Opus 4.6 is even better at planning, helping solve the most complex coding tasks. And the new agent teams feature means users can split work across multiple agents — one on the frontend, one on the API, one on the migration — each owning its piece and coordinating directly with the others.”
The release intensifies an already fierce competition between Anthropic and OpenAI, the two most valuable privately held AI companies in the world. OpenAI on Monday released a new desktop application for its Codex artificial intelligence coding system, a tool the company says transforms software development from a collaborative exercise with a single AI assistant into something more akin to managing a team of autonomous workers.
AI coding assistants have exploded in popularity over the last year, and OpenAI said more than 1 million developers have used Codex in the past month. The new Codex app is part of OpenAI’s ongoing effort to lure users and market share away from rivals like Anthropic and Cursor.
The timing of Anthropic’s release — just 72 hours after OpenAI’s Codex launch — underscores the breakneck pace of competition in AI development tools. OpenAI faces intensifying competition from Anthropic, which posted the largest share increase of any frontier lab since May 2025, according to a recent Andreessen Horowitz survey. Forty-four percent of enterprises now use Anthropic in production, driven by rapid capability gains in software development since late 2024. The desktop launch is a strategic counter to Claude Code’s momentum.
According to Anthropic’s announcement, Opus 4.6 achieves the highest score on Terminal-Bench 2.0, an agentic coding evaluation, and leads all other frontier models on Humanity’s Last Exam, a complex multi-discipline reasoning test. On GDPval-AA — a benchmark measuring performance on economically valuable knowledge work tasks in finance, legal and other domains — Opus 4.6 outperforms OpenAI’s GPT-5.2 by approximately 144 ELO points, which translates to obtaining a higher score approximately 70% of the time.
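The "144 ELO points ≈ 70% of the time" conversion follows directly from the standard Elo expected-score formula. A minimal sketch (the formula is standard Elo; the specific scores are those reported in the announcement):

```python
def elo_win_prob(delta: float) -> float:
    """Expected head-to-head win rate for a model rated `delta` Elo points higher."""
    return 1 / (1 + 10 ** (-delta / 400))

p = elo_win_prob(144)  # ~0.70, matching the "approximately 70% of the time" figure
```

A zero-point gap yields exactly 0.5, and each additional 400 points multiplies the odds by ten, which is why a 144-point edge lands near 70% rather than, say, 90%.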
The stakes are substantial. Asked about Claude Code’s financial performance, the Anthropic spokesperson noted that in November, the company announced that Claude Code reached $1 billion in run rate revenue only six months after becoming generally available in May 2025.
The spokesperson highlighted major enterprise deployments: “Claude Code is used by Uber across teams like software engineering, data science, finance, and trust and safety; wall-to-wall deployment across Salesforce’s global engineering org; tens of thousands of devs at Accenture; and companies across industries like Spotify, Rakuten, Snowflake, Novo Nordisk, and Ramp.”
That enterprise traction has translated into skyrocketing valuations. Earlier this month, Anthropic signed a term sheet for a $10 billion funding round at a $350 billion valuation. Bloomberg reported that Anthropic is simultaneously working on a tender offer that would allow employees to sell shares at that valuation, offering liquidity to staffers who have watched the company’s worth multiply since its 2021 founding.
One of Opus 4.6’s most significant technical improvements addresses what the AI industry calls “context rot” — the degradation of model performance as conversations grow longer. Anthropic says Opus 4.6 scores 76% on MRCR v2, a needle-in-a-haystack benchmark testing a model’s ability to retrieve information hidden in vast amounts of text, compared to just 18.5% for Sonnet 4.5.
“This is a qualitative shift in how much context a model can actually use while maintaining peak performance,” the company said in its announcement.
The model also supports outputs of up to 128,000 tokens — enough to complete substantial coding tasks or documents without breaking them into multiple requests.
For developers, Anthropic is introducing several new API features alongside the model: adaptive thinking, which allows Claude to decide when deeper reasoning would be helpful rather than requiring a binary on-off choice; four effort levels (low, medium, high, max) to control intelligence, speed and cost tradeoffs; and context compaction, a beta feature that automatically summarizes older context to enable longer-running tasks.
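The controls Anthropic describes suggest a request shape along the following lines. This is a hypothetical sketch only: the `effort` and `thinking` field names and their placement are assumptions made for illustration, not the documented API schema.

```python
import json

# Hypothetical request body. The `effort` and `thinking` fields mirror the
# features described in the announcement, but their exact names and shape
# are illustrative assumptions, not Anthropic's published schema.
request = {
    "model": "claude-opus-4-6",
    "max_tokens": 4096,
    "effort": "medium",                # low | medium | high | max
    "thinking": {"type": "adaptive"},  # model decides when deeper reasoning helps
    "messages": [
        {"role": "user", "content": "Refactor this module and explain the changes."}
    ],
}
payload = json.dumps(request)
```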
Anthropic, which has built its brand around AI safety research, emphasized that Opus 4.6 maintains alignment with its predecessors despite its enhanced capabilities. On the company’s automated behavior audit measuring misaligned behaviors such as deception, sycophancy, and cooperation with misuse, Opus 4.6 “showed a low rate” of problematic responses while also achieving “the lowest rate of over-refusals — where the model fails to answer benign queries — of any recent Claude model.”
When asked how Anthropic thinks about safety guardrails as Claude becomes more agentic, particularly with multiple agents coordinating autonomously, the spokesperson pointed to the company’s published framework: “Agents have tremendous potential for positive impacts in work but it’s important that agents continue to be safe, reliable, and trustworthy. We outlined our framework for developing safe and trustworthy agents last year which shares core principles developers should consider when building agents.”
The company said it has developed six new cybersecurity probes to detect potentially harmful uses of the model’s enhanced capabilities, and is using Opus 4.6 to help find and patch vulnerabilities in open-source software as part of defensive cybersecurity efforts.
The rivalry between Anthropic and OpenAI has spilled into consumer marketing in dramatic fashion. Both companies will feature prominently during Sunday’s Super Bowl. Anthropic is airing commercials that mock OpenAI’s decision to begin testing advertisements in ChatGPT, with the tagline: “Ads are coming to AI. But not to Claude.”
OpenAI CEO Sam Altman responded by calling the ads “funny” but “clearly dishonest,” posting on X that his company would “obviously never run ads in the way Anthropic depicts them” and that “Anthropic wants to control what people do with AI” while serving “an expensive product to rich people.”
The exchange highlights a fundamental strategic divergence: OpenAI has moved to monetize its massive free user base through advertising, while Anthropic has focused almost exclusively on enterprise sales and premium subscriptions.
The launch occurs against a backdrop of historic market volatility in software stocks. A new AI automation tool from Anthropic PBC sparked a $285 billion rout in stocks across the software, financial services and asset management sectors on Tuesday as investors raced to dump shares with even the slightest exposure. A Goldman Sachs basket of US software stocks sank 6%, its biggest one-day decline since April’s tariff-fueled selloff.
One trigger for Tuesday’s selloff was Anthropic’s launch on Friday of plug-ins for its Claude Cowork agent, which enable automated tasks across legal, sales, marketing and data analysis. The move underscored the AI industry’s growing push into industries that can unlock the lucrative enterprise revenue needed to fund massive investments in the technology.
Thomson Reuters plunged 15.83% on Tuesday, its biggest single-day drop on record, and LegalZoom.com sank 19.68%. European legal software providers including RELX, owner of LexisNexis, and Wolters Kluwer experienced their worst single-day performances in decades.
Not everyone agrees the selloff is warranted. Nvidia CEO Jensen Huang said on Tuesday that fears AI would replace software and related tools were “illogical” and “time will prove itself.” Mark Murphy, head of U.S. enterprise software research at JPMorgan, said in a Reuters report it “feels like an illogical leap” to say a new plug-in from an LLM would “replace every layer of mission-critical enterprise software.”
Among the more notable product announcements: Anthropic is releasing Claude in PowerPoint in research preview, allowing users to create presentations using the same AI capabilities that power Claude’s document and spreadsheet work. The integration puts Claude directly inside a core Microsoft product — an unusual arrangement given Microsoft’s 27% stake in OpenAI.
The Anthropic spokesperson framed the move pragmatically in an interview with VentureBeat: “Microsoft has an official add-in marketplace for Office products with multiple add-ins available to help people with slide creation and iteration. Any developer can build a plugin for Excel or PowerPoint. We’re participating in that ecosystem to bring Claude into PowerPoint. This is about participating in the ecosystem and giving users the ability to work with the tools that they want, in the programs they want.”
Data from a16z’s recent enterprise AI survey suggests both Anthropic and OpenAI face an increasingly competitive landscape. While OpenAI remains the most widely used AI provider in the enterprise, with approximately 77% of surveyed companies using it in production in January 2026, Anthropic’s adoption is rising rapidly — from near-zero in March 2024 to approximately 40% using it in production by January 2026.
The survey data also shows that 75% of Anthropic’s enterprise customers use it in production, with 89% either testing or in production — figures that comfortably exceed OpenAI’s corresponding rates of 46% in production and 73% testing or in production.
Enterprise spending on AI continues to accelerate. Average enterprise LLM spend reached $7 million in 2025, up 180% from $2.5 million in 2024, with projections suggesting $11.6 million in 2026 — a 65% increase year-over-year.
Opus 4.6 is available immediately on claude.ai, the Claude API, and major cloud platforms. Developers can access it via claude-opus-4-6 through the API. Pricing remains unchanged at $5 per million input tokens and $25 per million output tokens, with premium pricing of $10/$37.50 for prompts exceeding 200,000 tokens using the 1 million token context window.
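At the quoted standard-context rates, per-request cost is simple arithmetic. The token counts below are invented for illustration; the rates are the ones stated above.

```python
# Published standard-context rates: $5 per million input tokens,
# $25 per million output tokens.
IN_RATE, OUT_RATE = 5.00, 25.00

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of a single standard-context API call."""
    return (input_tokens * IN_RATE + output_tokens * OUT_RATE) / 1_000_000

# Example: a 40k-token prompt with an 8k-token completion
cost = request_cost(40_000, 8_000)  # $0.20 input + $0.20 output = $0.40
```

Prompts over 200,000 tokens would instead use the $10/$37.50 premium tier noted above.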
For users who find Opus 4.6 “overthinking” simpler tasks — a characteristic Anthropic acknowledges can add cost and latency — the company recommends adjusting the effort parameter from its default high setting to medium.
The recommendation captures something essential about where the AI industry now stands. These models have grown so capable that their creators must now teach customers how to make them think less. Whether that represents a breakthrough or a warning sign depends entirely on which side of the disruption you’re standing on — and whether you remembered to sell your software stocks before Tuesday.
The deep learning revolution has a curious blind spot: the spreadsheet. While Large Language Models (LLMs) have mastered the nuances of human prose and image generators have conquered the digital canvas, the structured, relational data that underpins the global economy — the rows and columns of ERP systems, CRMs, and financial ledgers — has so far been treated as just another file format similar to text or PDFs.
That’s left enterprises to forecast business outcomes using the typical bespoke, labor-intensive data science process of manual feature engineering and classic machine learning algorithms that predate modern deep learning.
But now Fundamental, a San Francisco-based AI firm co-founded by DeepMind alumni, is launching today with $255 million in total funding to bridge this gap.
Emerging from stealth, the company is debuting NEXUS, a Large Tabular Model (LTM) designed to treat business data not as a simple sequence of words, but as a complex web of non-linear relationships.
Most current AI models are built on sequential logic — predicting the next word in a sentence or the next pixel in a frame.
However, enterprise data is inherently non-sequential. A customer’s churn risk isn’t just a timeline; it’s a multi-dimensional intersection of transaction frequency, support ticket sentiment, and regional economic shifts. Existing LLMs struggle with this because they are poorly suited to the size and dimensionality constraints of enterprise-scale tables.
“The most valuable data in the world lives in tables and until now there has been no good foundation model built specifically to understand it,” said Jeremy Fraenkel, CEO and Co-founder of Fundamental.
In a recent interview with VentureBeat, Fraenkel emphasized that while the AI world is obsessed with text, audio, and video, tables remain the largest modality for enterprises. “LLMs really cannot handle this type of data very well,” he explained, “and enterprises currently rely on very old-school machine learning algorithms in order to make predictions.”
NEXUS was trained on billions of real-world tabular datasets using Amazon SageMaker HyperPod. Unlike traditional XGBoost or Random Forest models, which require data scientists to manually define features — the specific variables the model should look at — NEXUS is designed to ingest raw tables directly.
It identifies latent patterns across columns and rows that human analysts might miss, effectively reading the hidden language of the grid to understand non-linear interactions.
A primary reason traditional LLMs fail at tabular data is how they process numbers. Fraenkel explains that LLMs tokenize numbers the same way they tokenize words, breaking them into smaller chunks. “The problem is they apply the same thing to numbers. Tables are, by and large, all numerical,” Fraenkel noted. “If you have a number like 2.3, the ‘2’, the ‘.’, and the ‘3’ are seen as three different tokens. That essentially means you lose the understanding of the distribution of numbers. It’s not like a calculator; you don’t always get the right answer because the model doesn’t understand the concept of numbers natively.”
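Fraenkel’s point is easy to demonstrate with a toy tokenizer. The split below mimics the digit-by-digit fragmentation he describes; it is an illustrative regex, not any real model’s vocabulary.

```python
import re

def toy_subword_tokenize(text: str) -> list[str]:
    """Illustrative tokenizer: words stay whole, but every digit and
    punctuation mark becomes its own token, mimicking how LLM
    vocabularies often fragment numbers."""
    return re.findall(r"[A-Za-z]+|\d|\S", text)

tokens = toy_subword_tokenize("revenue grew 2.3 percent")
# "2.3" arrives as three separate tokens ('2', '.', '3'), so the model
# never sees it as a single numeric value
```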
Furthermore, tabular data is order-invariant in a way that language is not. Fraenkel uses a healthcare example to illustrate: “If I give you a table with hundreds of thousands of patients and ask you to predict which of them has diabetes, it shouldn’t matter if the first column is height and the second is weight, or vice versa.”
While LLMs are highly sensitive to the order of words in a prompt, NEXUS is architected to understand that shifting column positions should not impact the underlying prediction.
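The order-invariance property Fraenkel describes can be illustrated with a toy scorer that reads features by name rather than by position. The feature names and weights below are invented for illustration and have nothing to do with NEXUS’s internals.

```python
def risk_score(patient: dict) -> float:
    """Toy risk score keyed on feature *names*, so the order in which
    columns are supplied cannot affect the prediction."""
    # Invented weights, for illustration only
    return 0.02 * patient["weight"] + 0.01 * patient["age"]

# The same patient, columns listed in two different orders
a = {"height": 170, "weight": 80, "age": 50}
b = {"age": 50, "height": 170, "weight": 80}
assert risk_score(a) == risk_score(b)  # column order is irrelevant
```

A position-sensitive model (like an LLM reading the table left to right) has no such guarantee, which is the architectural gap Fundamental claims to close.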
Recent high-profile integrations, such as Anthropic’s Claude appearing directly within Microsoft Excel, have suggested that LLMs have already conquered tabular data.
However, Fraenkel distinguishes Fundamental’s work as operating at a fundamentally different layer: the predictive layer. “What they are doing is essentially at the formula layer—formulas are text, they are like code,” he said. “We aren’t trying to allow you to build a financial model in Excel. We are helping you make a forecast.”
NEXUS is designed for split-second decisions where a human isn’t in the loop, such as a credit card provider determining if a transaction is fraudulent the moment you swipe.
While tools like Claude can summarize a spreadsheet, NEXUS is built to predict the next row—whether that is an equipment failure in a factory or the probability of a patient being readmitted to a hospital.
The core value proposition of Fundamental is the radical reduction of time-to-insight. Traditionally, building a predictive model could take months of manual labor.
“You have to hire an army of data scientists to build all of those data pipelines to process and clean the data,” Fraenkel explained. “If there are missing values or inconsistent data, your model won’t work. You have to build those pipelines for every single use case.”
Fundamental claims NEXUS replaces this entire manual process with just one line of code. Because the model has been pre-trained on a billion tables, it doesn’t require the same level of task-specific training or feature engineering that traditional algorithms do.
As Fundamental moves from its stealth phase into the broader market, it does so with a commercial structure designed to bypass the traditional friction of enterprise software adoption.
The company has already secured several seven-figure contracts with Fortune 100 organizations, a feat facilitated by a strategic go-to-market architecture where Amazon Web Services (AWS) serves as the seller of record on the AWS Marketplace.
This allows enterprise leaders to procure and deploy NEXUS using existing AWS credits, effectively treating predictive intelligence as a standard utility alongside compute and storage. For the engineers tasked with implementation, the experience is high-impact but low-friction; NEXUS operates via a Python-based interface at a purely predictive layer rather than a conversational one.
Developers connect raw tables directly to the model and label specific target columns—such as a credit default probability or a maintenance risk score—to trigger the forecast. The model then returns regressions or classifications directly into the enterprise data stack, functioning as a silent, high-speed engine for automated decision-making rather than a chat-based assistant.
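As described, the interface is predictive rather than conversational: rows in, scores out. The mock below sketches only that shape; the class name, method signature, and placeholder scores are invented stand-ins, not Fundamental’s actual SDK.

```python
class MockTabularModel:
    """Illustrative stand-in for a pretrained large tabular model.
    A real LTM would return zero-shot predictions for the labeled
    target column; this mock just returns placeholders of the right shape."""

    def predict(self, rows: list[dict], target: str) -> list[float]:
        # One score per input row for the requested target column
        return [0.5 for _ in rows]

model = MockTabularModel()
transactions = [
    {"amount": 120.00, "merchant": "grocer", "hour": 14},
    {"amount": 4999.00, "merchant": "electronics", "hour": 3},
]
# The "one line": raw rows in, per-row fraud probabilities out
scores = model.predict(transactions, target="is_fraud")
```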
While the commercial implications of demand forecasting and price prediction are clear, Fundamental is emphasizing the societal benefit of predictive intelligence.
The company highlights key areas where NEXUS can prevent catastrophic outcomes by identifying signals hidden in structured data.
By analyzing sensor data and maintenance records, NEXUS can predict failures like pipe corrosion. The company points to the Flint water crisis — which cost over $1 billion in repairs — as an example where predictive monitoring could have prevented life-threatening contamination.
Similarly, during the COVID-19 crisis, PPE shortages cost hospitals $323 billion in a single year. Fundamental argues that by using manufacturing and epidemiological data, NEXUS can predict shortages 4-6 weeks before peak demand, triggering emergency manufacturing in time to save lives.
On the climate front, NEXUS aims to provide 30-60 day flood and drought predictions, such as for the 2022 Pakistan floods which caused $30 billion in damages.
Finally, the model is being used to predict hospital readmission risks by analyzing patient demographics and social determinants. As the company puts it: “A single mother working two jobs shouldn’t end up back in the ER because we failed to predict she’d need follow-up care.”
In the enterprise world, the definition of better varies by industry. For some, it is speed; for others, it is raw accuracy.
“In terms of latency, it depends on the use case,” Fraenkel explains. “If you are a researcher trying to understand what drugs to administer to a patient in Africa, latency doesn’t matter as much. You are trying to make a more accurate decision that can end up saving the most lives possible.”
In contrast, for a bank or hedge fund, even a marginal increase in accuracy translates to massive value.
“Increasing the prediction accuracy by half a percent is worth billions of dollars for a bank,” Fraenkel says. “For different use cases, the magnitude of the percentage increase changes, but we can get you to a better performance than what you have currently.”
The $225 million Series A, led by Oak HC/FT with participation from Salesforce Ventures, Valor Equity Partners, and Battery Ventures, signals high-conviction belief that tabular data is the next great frontier.
Notable angel investors including leaders from Perplexity, Wiz, Brex, and Datadog further validate the company’s pedigree.
Annie Lamont, Co-Founder and Managing Partner at Oak HC/FT, articulated the sentiment: “The significance of Fundamental’s model is hard to overstate—structured, relational data has yet to see the benefits of the deep learning revolution.”
Fundamental is positioning itself not just as another AI tool, but as a new category of enterprise AI. With a team of approximately 35 based in San Francisco, the company is moving away from the bespoke model era and toward a foundation model era for tables.
“Those traditional algorithms have been the same for the last 10 years; they are not improving,” Fraenkel said. “Our models keep improving. We are doing the same thing for tables that ChatGPT did for text.”
Through a strategic partnership with Amazon Web Services (AWS), NEXUS is integrated directly into the AWS dashboard. AWS customers can deploy the model using their existing credits and infrastructure. Fraenkel describes this as a “very unique agreement,” noting Fundamental is one of only two AI companies to have established such a deep, multi-layered partnership with Amazon.
One of the most significant hurdles for enterprise AI is data privacy. Companies are often unwilling to move sensitive data to a third-party infrastructure.
To solve this, Fundamental and Amazon achieved a massive engineering feat: the ability to deploy fully encrypted models—both the architecture and the weights—directly within the customer’s own environment. “Customers can be confident the data sits with them,” Fraenkel said. “We are the first, and currently only, company to have built such a solution.”
Fundamental’s emergence is an attempt to redefine the OS for business decisions. If NEXUS performs as advertised—handling financial fraud, energy prices, and supply chain disruptions with a single, generalized model—it will mark the moment where AI finally learned to read the spreadsheets that actually run the world. The Power to Predict is no longer about looking at what happened yesterday; it is about uncovering the hidden language of tables to determine what happens tomorrow.