
z.ai’s open source GLM-5 achieves record low hallucination rate and leverages new RL ‘slime’ technique

Chinese AI startup Zhipu AI, aka z.ai, is back this week with an eye-popping new frontier large language model: GLM-5.

The latest in z.ai’s consistently impressive GLM series, it retains an open source MIT License, well suited to enterprise deployment, and, among several notable achievements, posts a record-low hallucination rate in independent testing by Artificial Analysis.

With a score of -1 on the AA-Omniscience Index—representing a massive 35-point improvement over its predecessor—GLM-5 now leads the entire AI industry, including U.S. competitors like Google, OpenAI and Anthropic, in knowledge reliability by knowing when to abstain rather than fabricate information.

Beyond its reasoning prowess, GLM-5 is built for high-utility knowledge work. It features native “Agent Mode” capabilities that allow it to turn raw prompts or source materials directly into professional office documents, including ready-to-use .docx, .pdf, and .xlsx files.

Whether generating detailed financial reports, high school sponsorship proposals, or complex spreadsheets, GLM-5 delivers results in real-world formats that integrate directly into enterprise workflows.

It is also disruptively priced at roughly $0.80 per million input tokens and $2.56 per million output tokens, approximately 6x cheaper than proprietary competitors like Claude Opus 4.6, making state-of-the-art agentic engineering more cost-effective than ever before. Here’s what else enterprise decision makers should know about the model and its training.

Technology: scaling for agentic efficiency

At the heart of GLM-5 is a massive leap in raw parameters. The model scales from the 355B parameters of GLM-4.5 to a staggering 744B parameters, with 40B active per token in its Mixture-of-Experts (MoE) architecture. This growth is supported by an increase in pre-training data to 28.5T tokens.
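The compute implication of those numbers can be sanity-checked directly: in a Mixture-of-Experts model, per-token cost tracks the active parameters, not the total. A minimal sketch using only the reported figures (no architectural details such as expert count are assumed):

```python
# Sketch of MoE per-token compute, using only GLM-5's reported sizes.
# In a Mixture-of-Experts layer, only the router-selected experts run
# for each token, so active parameters, not total, drive per-token cost.

TOTAL_PARAMS = 744e9   # total parameters (reported)
ACTIVE_PARAMS = 40e9   # parameters active per token (reported)

def active_fraction(total: float, active: float) -> float:
    """Share of the model that actually runs for each token."""
    return active / total

frac = active_fraction(TOTAL_PARAMS, ACTIVE_PARAMS)
print(f"Active per token: {frac:.1%} of total parameters")
# A dense model of the same size would spend roughly 1/frac more
# compute per token than this MoE configuration.
print(f"Rough per-token compute saving vs. dense: ~{1 / frac:.0f}x")
```

In other words, each token touches only about 5% of the network, which is how a 744B-parameter model stays serveable at all.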

To address training inefficiencies at this scale, z.ai developed “slime,” a novel asynchronous reinforcement learning (RL) infrastructure.

Traditional synchronous RL often stalls on “long-tail” bottlenecks, where the slowest episode holds up every update; slime breaks this lockstep by allowing trajectories to be generated independently, enabling the fine-grained iteration necessary for complex agentic behavior.

By integrating system-level optimizations like Active Partial Rollouts (APRIL), slime addresses the generation bottlenecks that typically consume over 90% of RL training time, significantly accelerating the iteration cycle for complex agentic tasks.

The framework’s design is centered on a tripartite modular system: a high-performance training module powered by Megatron-LM, a rollout module utilizing SGLang and custom routers for high-throughput data generation, and a centralized Data Buffer that manages prompt initialization and rollout storage.

By enabling adaptive verifiable environments and multi-turn compilation feedback loops, slime provides the robust, high-throughput foundation required to transition AI from simple chat interactions toward rigorous, long-horizon systems engineering.
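The decoupling slime relies on can be sketched as a producer/consumer loop (the names and structure below are illustrative, not z.ai's actual API): rollout workers push finished trajectories into a shared buffer while the trainer pulls batches as soon as they fill, so no global barrier waits on the slowest episode.

```python
# Illustrative sketch of asynchronous rollout generation in the spirit
# of slime's decoupled design. Workers generate trajectories
# independently; the trainer consumes batches as they become ready,
# so one slow "long-tail" episode never stalls the whole step.

import queue
import threading

def rollout_worker(worker_id: int, episodes: int, buffer: queue.Queue) -> None:
    """Generate trajectories independently and push them to the buffer."""
    for episode in range(episodes):
        trajectory = {"worker": worker_id, "episode": episode,
                      "reward": float(worker_id + episode)}  # stand-in data
        buffer.put(trajectory)

def train_from_buffer(buffer: queue.Queue, batch_size: int, n_batches: int):
    """Consume batches as soon as enough trajectories have arrived."""
    return [[buffer.get() for _ in range(batch_size)]
            for _ in range(n_batches)]

buffer: queue.Queue = queue.Queue()
workers = [threading.Thread(target=rollout_worker, args=(i, 4, buffer))
           for i in range(3)]
for w in workers:
    w.start()

# The trainer runs concurrently with generation; it waits only for
# enough items to fill the next batch, never for a global barrier.
batches = train_from_buffer(buffer, batch_size=4, n_batches=3)
for w in workers:
    w.join()

print(len(batches), "batches consumed,",
      sum(len(b) for b in batches), "trajectories")
```

The real system adds partial-rollout recycling (APRIL) and a dedicated Data Buffer service on top of this basic idea, but the payoff is the same: generation and training overlap instead of alternating.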

To keep deployment manageable, GLM-5 integrates DeepSeek Sparse Attention (DSA), preserving a 200K context capacity while drastically reducing costs.
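DeepSeek Sparse Attention's actual selection mechanism is more sophisticated than this, but the core idea of each query attending to only a small subset of keys can be shown with a toy top-k softmax, which cuts attention cost from order seq_len² to seq_len × k:

```python
# Toy top-k sparse attention (illustrative only; DSA's real key
# selection is learned, not a plain top-k over raw scores).
# Each query keeps just its k highest-scoring keys, so the softmax
# and weighted sum touch k entries instead of the full sequence.

import math

def sparse_attention_row(scores_row: list[float], k: int) -> list[float]:
    """Softmax over only the top-k scores in one query's row."""
    top = sorted(range(len(scores_row)),
                 key=lambda i: scores_row[i], reverse=True)[:k]
    exps = {i: math.exp(scores_row[i]) for i in top}
    total = sum(exps.values())
    # Weights for unselected keys are exactly zero.
    return [exps.get(i, 0.0) / total for i in range(len(scores_row))]

row = [0.1, 2.0, -1.0, 1.5, 0.0]
weights = sparse_attention_row(row, k=2)
print([round(w, 3) for w in weights])  # only indices 1 and 3 are nonzero
```

Because the zeroed entries are never computed in a real implementation, long contexts (here, 200K tokens) no longer pay quadratic cost at inference time.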

End-to-end knowledge work

z.ai is framing GLM-5 as an “office” tool for the AGI era. While previous models focused on snippets, GLM-5 is built to deliver ready-to-use documents.

It can autonomously transform prompts into formatted .docx, .pdf, and .xlsx files—ranging from financial reports to sponsorship proposals.

In practice, this means the model can decompose high-level goals into actionable subtasks and perform “Agentic Engineering,” where humans define quality gates while the AI handles execution.

High performance

According to Artificial Analysis, GLM-5’s benchmarks make it the most powerful open source model in the world, surpassing Chinese rival Moonshot’s Kimi K2.5, released just two weeks ago, and showing that Chinese AI companies have nearly caught up with their far better-resourced proprietary Western rivals.

According to z.ai’s own materials shared today, GLM-5 ranks near state-of-the-art on several key benchmarks:

SWE-bench Verified: GLM-5 achieved a score of 77.8, outperforming Gemini 3 Pro (76.2) and approaching Claude Opus 4.6 (80.9).

Vending Bench 2: In a simulation of running a business, GLM-5 ranked #1 among open-source models with a final balance of $4,432.12.

Beyond performance, GLM-5 is aggressively undercutting the market. Live on OpenRouter as of February 11, 2026, it is priced at approximately $0.80–$1.00 per million input tokens and $2.56–$3.20 per million output tokens. That puts it in the mid-range among leading LLMs, but given its top-tier benchmarking performance, it’s what one might call a “steal.”

| Model | Input (per 1M tokens) | Output (per 1M tokens) | Total Cost (1M in + 1M out) | Source |
|---|---|---|---|---|
| Qwen 3 Turbo | $0.05 | $0.20 | $0.25 | Alibaba Cloud |
| Grok 4.1 Fast (reasoning) | $0.20 | $0.50 | $0.70 | xAI |
| Grok 4.1 Fast (non-reasoning) | $0.20 | $0.50 | $0.70 | xAI |
| deepseek-chat (V3.2-Exp) | $0.28 | $0.42 | $0.70 | DeepSeek |
| deepseek-reasoner (V3.2-Exp) | $0.28 | $0.42 | $0.70 | DeepSeek |
| Gemini 3 Flash Preview | $0.50 | $3.00 | $3.50 | Google |
| Kimi-k2.5 | $0.60 | $3.00 | $3.60 | Moonshot |
| GLM-5 | $1.00 | $3.20 | $4.20 | Z.ai |
| ERNIE 5.0 | $0.85 | $3.40 | $4.25 | Qianfan |
| Claude Haiku 4.5 | $1.00 | $5.00 | $6.00 | Anthropic |
| Qwen3-Max (2026-01-23) | $1.20 | $6.00 | $7.20 | Alibaba Cloud |
| Gemini 3 Pro (≤200K) | $2.00 | $12.00 | $14.00 | Google |
| GPT-5.2 | $1.75 | $14.00 | $15.75 | OpenAI |
| Claude Sonnet 4.5 | $3.00 | $15.00 | $18.00 | Anthropic |
| Gemini 3 Pro (>200K) | $4.00 | $18.00 | $22.00 | Google |
| Claude Opus 4.6 | $5.00 | $25.00 | $30.00 | Anthropic |
| GPT-5.2 Pro | $21.00 | $168.00 | $189.00 | OpenAI |

This is roughly 6x cheaper on input and nearly 10x cheaper on output than Claude Opus 4.6 ($5/$25). This release confirms rumors that Zhipu AI was behind “Pony Alpha,” a stealth model that previously crushed coding benchmarks on OpenRouter.
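Those ratios follow directly from the listed per-million-token rates; a quick check (using the OpenRouter-listed prices quoted above, which may change):

```python
# Verify the headline price ratios from the quoted per-1M-token rates.
glm5 = {"input": 0.80, "output": 2.56}      # GLM-5, USD per 1M tokens
opus46 = {"input": 5.00, "output": 25.00}   # Claude Opus 4.6, USD per 1M tokens

input_ratio = opus46["input"] / glm5["input"]
output_ratio = opus46["output"] / glm5["output"]
print(f"Input: {input_ratio:.2f}x cheaper, output: {output_ratio:.2f}x cheaper")
# Input: 6.25x cheaper, output: 9.77x cheaper
```

Note that the ratios shrink somewhat against GLM-5's upper listed rates ($1.00/$3.20), which is why the article hedges with "roughly" and "nearly."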

However, despite the high benchmarks and low cost, not all early users are enthusiastic about the model, noting its high performance doesn’t tell the whole story.

Lukas Petersson, co-founder of the safety-focused autonomous AI protocol startup Andon Labs, remarked on X: “After hours of reading GLM-5 traces: an incredibly effective model, but far less situationally aware. Achieves goals via aggressive tactics but doesn’t reason about its situation or leverage experience. This is scary. This is how you get a paperclip maximizer.”

The “paperclip maximizer” refers to a thought experiment proposed by Oxford philosopher Nick Bostrom in 2003: an AI pursues a seemingly benign instruction, such as maximizing the number of paperclips produced, to such an extreme that it redirects the resources needed for human (or other) life, causing catastrophe or even extinction through its single-minded commitment to that objective.

Should your enterprise adopt GLM-5?

Enterprises seeking to escape vendor lock-in will find GLM-5’s MIT License and open-weights availability a significant strategic advantage. Unlike closed-source competitors that keep intelligence behind proprietary walls, GLM-5 allows organizations to host their own frontier-level intelligence.

Adoption is not without friction. The sheer scale of GLM-5—744B parameters—requires a massive hardware floor that may be out of reach for smaller firms without significant cloud or on-premise GPU clusters.

Security leaders must weigh the geopolitical implications of a flagship model from a China-based lab, especially in regulated industries where data residency and provenance are strictly audited.

Furthermore, the shift toward more autonomous AI agents introduces new governance risks. As models move from “chat” to “work,” they begin to operate across apps and files autonomously. Without the robust agent-specific permissions and human-in-the-loop quality gates established by enterprise data leaders, the risk of autonomous error increases exponentially.

Ultimately, GLM-5 is a “buy” for organizations that have outgrown simple copilots and are ready to build a truly autonomous office.

It is for engineers who need to refactor a legacy backend or require a “self-healing” pipeline that doesn’t sleep.

While Western labs continue to optimize for “thinking” and reasoning depth, z.ai is optimizing for execution and scale.

Enterprises that adopt GLM-5 today are not just buying a cheaper model; they are betting on a future where the most valuable AI is the one that can finish the project without being asked twice.

Anthropic’s Claude Cowork finally lands on Windows — and it wants to automate your workday

Anthropic released its Claude Cowork AI agent software for Windows on Monday, bringing the file management and task automation tool to roughly 70 percent of the desktop computing market and intensifying a remarkable corporate realignment that has seen Microsoft embrace a direct competitor to its longtime AI partner, OpenAI.

The Windows launch arrives with what Anthropic calls “full feature parity” with the macOS version: file access, multi-step task execution, plugins, and Model Context Protocol (MCP) connectors for integrating external services. Users can now also set global and folder-specific instructions that Claude follows in every session, a feature developers on Reddit described as “a game-changer” for maintaining context across projects.

“Cowork is now available on Windows,” Anthropic announced on X. “We’re bringing full feature parity with MacOS: file access, multi-step task execution, plugins, and MCP connectors.”

The release closes a critical platform gap that had limited Cowork to Apple’s operating system since its January 12 debut. The Windows expansion underscores a broader transformation already underway in enterprise AI, with Microsoft simultaneously selling its own GitHub Copilot to customers while encouraging thousands of its own employees to adopt Anthropic’s competing tools internally.

Inside Microsoft’s surprising pivot toward its biggest AI rival

The relationship between Microsoft and Anthropic has accelerated with striking speed. In November, the two companies announced a strategic partnership allowing Microsoft Foundry customers access to Claude Sonnet 4.5, Claude Opus 4.1, and Claude Haiku 4.5. As part of that arrangement, Anthropic committed to purchasing $30 billion of Azure compute capacity.

But the partnership has expanded well beyond cloud hosting. According to a January 22 report in The Verge, Microsoft has begun encouraging thousands of employees from some of its most prolific teams to adopt Claude Code — and now, by extension, Cowork — even if they have no coding experience.

Microsoft’s CoreAI team, the new AI engineering group led by former Meta engineering chief Jay Parikh, has tested Claude Code in recent months, The Verge reported. The company has also approved Claude Code across all code and repositories for its Business and Industry Copilot teams.

“Software engineers at Microsoft are now expected to use both Claude Code and GitHub Copilot and give feedback comparing the two,” The Verge reported.

The company’s spending on Anthropic approaches $500 million annually, according to The Information. Microsoft has even begun counting Anthropic AI model sales toward Azure sales quotas — an unusual incentive structure that the company typically reserves for homegrown products or models from OpenAI.

A $13 billion partnership faces new questions as Microsoft hedges its bets

Microsoft’s embrace of Anthropic raises uncomfortable questions about its $13 billion investment in OpenAI, which has long served as the exclusive provider of frontier AI models for Microsoft’s products. The two companies signed their landmark partnership in 2019, with Microsoft providing Azure computing infrastructure in exchange for preferential access to OpenAI’s technology.

That relationship now appears to be evolving into something more nuanced. Microsoft has started favoring Anthropic’s Claude models inside Microsoft 365 apps and Copilot recently, deploying them in specific applications or features where Anthropic’s models have proven more capable than OpenAI’s counterparts.

On February 5, Microsoft announced that Claude Opus 4.6 — Anthropic’s most advanced model — would become available in Microsoft Foundry, the company’s enterprise AI platform. The Azure blog post framed the integration as bringing “even more capability to agents that increasingly learn from and act on business systems.”

“At Microsoft we believe that intelligence and trust are the core requirements of agentic AI at scale,” the announcement stated. “Built on Azure, Microsoft Foundry brings these capabilities together on a secure, scalable cloud foundation for enterprise AI.”

The timing and tone suggest Microsoft views Anthropic not merely as a hedging strategy but as a genuine technical leader in certain domains. Claude Opus 4.6 offers a one-million-token context window and 128,000-token maximum output — specifications that position it for complex, long-running enterprise tasks that require processing vast amounts of information.

Why a $285 billion stock selloff has the software industry questioning its future

The deepening Microsoft-Anthropic alliance takes on added significance when viewed against a backdrop of genuine alarm rippling through the software industry. Within days of the macOS launch in January, investors began repricing SaaS companies whose products overlap with Cowork’s capabilities — project management tools, writing assistants, data analysis platforms, and workflow automation software all saw sharp declines.

Bloomberg reported that Cowork triggered a $285 billion software stocks selloff. The carnage reflected growing investor conviction that AI agents capable of automating knowledge work could render entire categories of enterprise software obsolete.

The fear is not abstract. Cowork operates as a desktop agent powered by Claude Opus 4.6 that can read local files, execute multi-step tasks, and interact with external services through plugins — all running directly on a user’s machine. Unlike chatbot interfaces that respond to individual prompts, Cowork plans and executes complete workflows across files, applications, and connected services.

Anthropic has leaned into this positioning. On January 30, the company’s Anthropic Labs division released 11 open-source agentic plugins spanning sales, legal, finance, marketing, data analysis, and software development. These plugins connect Cowork to external tools, enabling the agent to pull data from CRMs, draft legal documents, analyze spreadsheets, or manage project boards without users switching applications.

The hidden risks of giving an AI agent access to your files

Such convenience comes with tradeoffs, and Anthropic has been transparent about the risks inherent in agent software that can read, write, and delete files. The company’s support documentation warns users to “be cautious about granting access to sensitive information like financial documents, credentials, or personal records” and suggests saving backups and creating dedicated folders with nonsensitive information.

Cowork remains susceptible to prompt injection attacks — hidden instructions embedded in documents or websites that can hijack AI agents and redirect their actions. The browser automation feature includes an explicit disclaimer warning that hidden code in websites may “steal your data, inject malware into your systems, or take over your system.”

“We use a virtual machine under the hood,” Boris Cherny, Anthropic’s head of Claude Code, told Wired. “This means you have to say which folders Claude has access to. And if you don’t give it access to a folder, Claude literally cannot see that folder.”

The Windows version includes additional safety constraints. According to user reports on Reddit, Cowork on Windows restricts file access to the user’s personal folder, preventing the agent from accessing common development directories like C:\git. While some users expressed frustration at this limitation, others noted it as a prudent safeguard for less technical users.

“To be fair, seeing how many people nuked themselves with Claude Code, it is much safer to limit people to reduce the collateral damage,” wrote one Reddit user.
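The folder scoping described above is a standard allowlist pattern. A generic sketch (illustrative only, not Anthropic's implementation; the folder names are hypothetical) shows why paths must be resolved before comparison, so that "../" tricks and symlinks cannot escape the permitted directories:

```python
# Generic folder-allowlist check of the kind agent sandboxing implies.
# Resolving paths before comparing defeats "../" traversal and symlink
# escapes; anything outside the allowed roots is rejected.

from pathlib import Path

# Hypothetical permitted roots for illustration.
ALLOWED_DIRS = [Path.home() / "Documents", Path.home() / "CoworkProjects"]

def is_access_allowed(target: str) -> bool:
    """True only if target resolves to a location inside an allowed root."""
    resolved = Path(target).expanduser().resolve()
    return any(resolved.is_relative_to(root.resolve())
               for root in ALLOWED_DIRS)

print(is_access_allowed("~/Documents/report.docx"))   # True: inside an allowed root
print(is_access_allowed("/etc/passwd"))               # False: outside every allowed root
```

Restricting agents to a user's personal folders, as the Windows build of Cowork reportedly does, is exactly this kind of check applied at the operating-system boundary.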

Major corporations are already betting on Claude’s enterprise potential

Despite the security caveats, early enterprise adoption suggests meaningful interest. Customer testimonials published alongside the Claude Opus 4.6 announcement on the Microsoft Azure blog included statements from Adobe, Dentons, and other major organizations already integrating Anthropic’s technology into their workflows.

“At Adobe, we’re continuously evaluating new AI capabilities that can help us deliver more powerful, responsible, and intuitive experiences for our customers,” said Michael Marth, VP Engineering for Experience Manager and LLM Optimizer. “Foundry gives us a flexible, enterprise-ready environment to explore frontier models while maintaining the trust, governance, and scale that are critical for Adobe.”

Matej Jambrich, CTO of Dentons Europe, described deploying Claude for legal work: “Better model reasoning reduces rework and improves consistency, so our lawyers can focus on higher value judgment.”

On Reddit, an Anthropic representative wrote that the Windows release addresses “the most consistent request” since Cowork’s macOS debut — a demand that came “especially from enterprise teams.” The detail underscores the tool’s perceived value in corporate environments where Windows dominates the desktop landscape.

At $20 a month, Cowork positions itself as a premium productivity play

Access to these capabilities comes at a price. Cowork for Windows is available in research preview at claude.com/cowork for all paid Claude subscription tiers, including Pro ($20/month), Max ($100/month), Team, and Enterprise. Free-tier users cannot access the feature.

This pricing structure positions Cowork as a premium productivity tool rather than a mass-market offering — at least for now. Anthropic has not announced plans for broader availability, and the “research preview” designation suggests the company continues to gather user feedback before committing to a general release.

The January macOS launch was similarly restricted to $100/month Max subscribers before expanding to other paid tiers, suggesting Anthropic may follow a gradual rollout strategy as it refines the product. For enterprise customers evaluating the tool, the pricing represents a fraction of what many pay for traditional software licenses—a calculus that could accelerate adoption if Cowork delivers on its automation promises.

The battle for the future of work has a new front line

For Microsoft, the deepening Anthropic partnership reflects a pragmatic recognition that AI leadership may require embracing multiple frontier providers rather than relying exclusively on a single partner.

The company’s willingness to deploy Claude tools internally while selling GitHub Copilot externally suggests confidence that the enterprise market can accommodate competing approaches — or perhaps an acknowledgment that betting everything on OpenAI carries its own risks.

For the broader software industry, Cowork’s expansion to Windows extends the competitive threat to an even larger installed base. Companies whose value propositions rest on task automation, file management, or workflow orchestration now face a well-funded competitor capable of replicating their core functionality through natural language commands.

The $285 billion in market capitalization that evaporated after Cowork’s January launch may prove to be just an opening salvo. With Windows support now live, Anthropic has removed the last major platform barrier between its AI agent and the enterprise customers most likely to adopt it.

The software industry spent decades building tools to help knowledge workers manage files, automate tasks, and organize information. Now it faces a future where a single application, powered by an AI that learns and improves with every interaction, threatens to do all of that and more. The question is no longer whether AI agents will reshape enterprise software, but how much of the old world will survive the transformation.
