The creator of Claude Code just revealed his workflow, and developers are losing their minds

When the creator of the world’s most advanced coding agent speaks, Silicon Valley doesn’t just listen — it takes notes.

For the past week, the engineering community has been dissecting a thread on X from Boris Cherny, the creator and head of Claude Code at Anthropic. What began as a casual sharing of his personal terminal setup has spiraled into a viral manifesto on the future of software development, with industry insiders calling it a watershed moment for the startup.

“If you’re not reading the Claude Code best practices straight from its creator, you’re behind as a programmer,” wrote Jeff Tang, a prominent voice in the developer community. Kyle McNease, another industry observer, went further, declaring that with Cherny’s “game-changing updates,” Anthropic is “on fire,” potentially facing “their ChatGPT moment.”

The excitement stems from a paradox: Cherny’s workflow is surprisingly simple, yet it allows a single human to operate with the output capacity of a small engineering department. As one user noted on X after implementing Cherny’s setup, the experience “feels more like Starcraft” than traditional coding — a shift from typing syntax to commanding autonomous units.

Here is an analysis of the workflow that is reshaping how software gets built, straight from the architect himself.

How running five AI agents at once turns coding into a real-time strategy game

The most striking revelation from Cherny’s disclosure is that he does not code in a linear fashion. In the traditional “inner loop” of development, a programmer writes a function, tests it, and moves to the next. Cherny, however, acts as a fleet commander.

“I run 5 Claudes in parallel in my terminal,” Cherny wrote. “I number my tabs 1-5, and use system notifications to know when a Claude needs input.”

By utilizing iTerm2 system notifications, Cherny effectively manages five simultaneous work streams. While one agent runs a test suite, another refactors a legacy module, and a third drafts documentation. He also runs “5-10 Claudes on claude.ai” in his browser, using a “teleport” command to hand off sessions between the web and his local machine.

This validates the “do more with less” strategy articulated by Anthropic President Daniela Amodei earlier this week. While competitors like OpenAI pursue trillion-dollar infrastructure build-outs, Anthropic is proving that superior orchestration of existing models can yield exponential productivity gains.

The counterintuitive case for choosing the slowest, smartest model

In a surprising move for an industry obsessed with latency, Cherny revealed that he exclusively uses Anthropic’s heaviest, slowest model: Opus 4.5.

“I use Opus 4.5 with thinking for everything,” Cherny explained. “It’s the best coding model I’ve ever used, and even though it’s bigger & slower than Sonnet, since you have to steer it less and it’s better at tool use, it is almost always faster than using a smaller model in the end.”

For enterprise technology leaders, this is a critical insight. The bottleneck in modern AI development isn’t token generation speed; it is the human time spent correcting the AI’s mistakes. Cherny’s workflow suggests that paying the “compute tax” for a smarter model upfront eliminates the “correction tax” later.

One shared file turns every AI mistake into a permanent lesson

Cherny also detailed how his team solves the problem of AI amnesia. Standard large language models do not “remember” a company’s specific coding style or architectural decisions from one session to the next.

To address this, Cherny’s team maintains a single file named CLAUDE.md in their git repository. “Anytime we see Claude do something incorrectly we add it to the CLAUDE.md, so Claude knows not to do it next time,” he wrote.
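Such a file is plain markdown read by Claude Code at the start of each session. A hypothetical excerpt (these specific rules are illustrative, not taken from Cherny's repository) might look like:

```markdown
# CLAUDE.md — project conventions

## Lessons from past mistakes (illustrative examples)
- Never edit generated files under `src/gen/`; change the schema instead.
- Use the project's `logger` utility rather than printing to the console.
- Every new API route needs an integration test before a PR is opened.
```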

This practice transforms the codebase into a self-correcting organism. When a human developer reviews a pull request and spots an error, they don’t just fix the code; they tag the AI to update its own instructions. “Every mistake becomes a rule,” noted Aakash Gupta, a product leader analyzing the thread. The longer the team works together, the smarter the agent becomes.

Slash commands and subagents automate the most tedious parts of development

The “vanilla” workflow one observer praised is powered by rigorous automation of repetitive tasks. Cherny uses slash commands — custom shortcuts checked into the project’s repository — to handle complex operations with a single keystroke.

He highlighted a command called /commit-push-pr, which he invokes dozens of times daily. Instead of manually typing git commands, writing a commit message, and opening a pull request, the agent handles the bureaucracy of version control autonomously.
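In Claude Code, custom slash commands are markdown prompt files checked into the repository under `.claude/commands/`, invoked by name. A hypothetical `.claude/commands/commit-push-pr.md` (the wording is illustrative, not Cherny's actual file) could read:

```markdown
Review the staged and unstaged changes, then:
1. Write a concise commit message describing them.
2. Commit and push the current branch.
3. Open a pull request with `gh pr create`, summarizing the change.
```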

Cherny also deploys subagents — specialized AI personas — to handle specific phases of the development lifecycle. He uses a code-simplifier to clean up architecture after the main work is done and a verify-app agent to run end-to-end tests before anything ships.
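Subagents are defined as markdown files with YAML frontmatter under `.claude/agents/`. A sketch of what a `code-simplifier` definition might contain (the body and tool list are illustrative assumptions, not Cherny's actual configuration):

```markdown
---
name: code-simplifier
description: Simplifies and refactors code after the main work is done,
  without changing behavior.
tools: Read, Edit, Grep
---

You are a refactoring specialist. Remove duplication, tighten interfaces,
and keep all tests passing. Never change observable behavior.
```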

Why verification loops are the real unlock for AI-generated code

If there is a single reason Claude Code has reportedly hit $1 billion in annual recurring revenue so quickly, it is likely the verification loop. The AI is not just a text generator; it is a tester.

“Claude tests every single change I land to claude.ai/code using the Claude Chrome extension,” Cherny wrote. “It opens a browser, tests the UI, and iterates until the code works and the UX feels good.”

He argues that giving the AI a way to verify its own work — whether through browser automation, running bash commands, or executing test suites — improves the quality of the final result by “2-3x.” The agent doesn’t just write code; it proves the code works.
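The pattern generalizes beyond browsers and test suites. A minimal sketch of such a loop (the function names here are illustrative, not Anthropic's API): the agent keeps revising its output until an automated check passes, feeding failure details back into the next attempt.

```python
# Hypothetical verification loop: revise until an automated check passes.
def verification_loop(generate, verify, max_iters=5):
    """Call generate(feedback) until verify(result) reports success.

    `generate` produces a candidate (e.g. code) given prior feedback;
    `verify` returns (passed, feedback) — e.g. from a test suite run.
    """
    feedback = None
    for _ in range(max_iters):
        result = generate(feedback)
        ok, feedback = verify(result)
        if ok:
            return result
    raise RuntimeError("verification failed after max_iters attempts")

# Toy usage: "generate" proposes numbers, "verify" accepts even ones.
attempts = iter([3, 7, 8])
result = verification_loop(
    generate=lambda fb: next(attempts),
    verify=lambda r: (r % 2 == 0, "try an even number"),
)
print(result)  # 8
```

The key design point is that `verify` must be automatic and cheap to run, so the agent can iterate without a human in the loop.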

What Cherny’s workflow signals about the future of software engineering

The reaction to Cherny’s thread suggests a pivotal shift in how developers think about their craft. For years, “AI coding” meant an autocomplete function in a text editor — a faster way to type. Cherny has demonstrated that it can now function as an operating system for labor itself.

“Read this if you’re already an engineer… and want more power,” Jeff Tang summarized on X.

The tools to multiply human output by a factor of five are already here. They require only a willingness to stop thinking of AI as an assistant and start treating it as a workforce. The programmers who make that mental leap first won’t just be more productive. They’ll be playing an entirely different game — and everyone else will still be typing.

‘Intelition’ changes everything: AI is no longer a tool you invoke

AI is evolving faster than our vocabulary for describing it. We may need a few new words. We have “cognition” for how a single mind thinks, but we don’t have a word for what happens when human and machine intelligence work together to perceive, decide, create and act. Let’s call that process intelition.

Intelition isn’t a feature; it’s the organizing principle for the next wave of software where humans and AI operate inside the same shared model of the enterprise. Today’s systems treat AI models as things you invoke from the outside. You act as a “user,” prompting for responses or wiring a “human in the loop” step into agentic workflows. But that’s evolving into continuous co-production: People and agents are shaping decisions, logic and actions together, in real time.

Read on for a breakdown of the three forces driving this new paradigm.

A unified ontology is just the beginning

In a recent shareholder letter, Palantir CEO Alex Karp wrote that “all the value in the market is going to go to chips and what we call ontology,” and argued that this shift is “only the beginning of something much larger and more significant.” By ontology, Karp means a shared model of objects (customers, policies, assets, events) and their relationships. This also includes what Palantir calls an ontology’s “kinetic layer” that defines the actions and security permissions connecting objects.

In the SaaS era, every enterprise application creates its own object and process models. Combined with a host of legacy systems and often chaotic models, enterprises face the challenge of stitching all this together. It’s a big and difficult job, with redundancies, incomplete structures and missing data. The reality: No matter how many data warehouse or data lake projects they commission, few enterprises come close to creating a consolidated enterprise ontology.

A unified ontology is essential for today’s agentic AI tools. As organizations link and federate ontologies, a new software paradigm emerges: Agentic AI can reason and act across suppliers, regulators, customers and operations, not just within a single app.  

As Karp describes it, the aim is “to tether the power of artificial intelligence to objects and relationships in the real world.”
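To make the idea concrete, here is a toy sketch of an ontology with a kinetic layer: typed objects, named relationships, and permissioned actions. The Python below is purely illustrative (not Palantir's actual API or data model).

```python
# Illustrative ontology sketch: objects, relationships, and a "kinetic
# layer" of actions gated by security permissions. Names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class OntologyObject:
    type: str                                   # e.g. "Customer", "Policy"
    id: str
    links: dict = field(default_factory=dict)   # relation name -> object ids

@dataclass
class Action:
    """Kinetic layer: what agents are permitted to do to which objects."""
    name: str
    allowed_roles: set

    def invoke(self, role, obj):
        if role not in self.allowed_roles:
            raise PermissionError(f"{role} may not {self.name} {obj.type}")
        return f"{self.name} executed on {obj.type}:{obj.id}"

customer = OntologyObject("Customer", "c-42", links={"holds": ["policy-7"]})
renew = Action("renew_policy", allowed_roles={"agent", "underwriter"})
print(renew.invoke("agent", customer))  # renew_policy executed on Customer:c-42
```

In this framing, an agent can only act through actions the kinetic layer exposes to its role, which is what makes cross-organization automation governable.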

World models and continuous learning

Today’s models can hold extensive context, but holding information isn’t the same as learning from it. Continual learning requires the accumulation of understanding, rather than resets with each retraining.

To this end, Google recently announced “Nested Learning” as a potential solution, grounded directly in existing LLM architectures and training data. The authors don’t claim to have solved the challenges of building world models, but Nested Learning could supply the raw ingredients for them: durable memory with continual learning layered into the system. At the endpoint, retraining would become obsolete.

In June 2022, Meta’s chief AI scientist Yann LeCun published a blueprint for “autonomous machine intelligence” featuring a hierarchical approach that uses joint embeddings to make predictions with world models. He called the technique H-JEPA, and later put it bluntly: “LLMs are good at manipulating language, but not at thinking.”

Over the past three years, LeCun and his colleagues at Meta have moved H-JEPA theory into practice with the open-source models I-JEPA and V-JEPA, which learn image and video representations of the world, respectively.

The personal intelition interface 

The third force in this agentic, ontology-driven world is the personal interface, which puts people at the center rather than leaving them as “users” on the periphery. This is not another app; it is the primary way a person will participate in the next era of work and life. Rather than treating AI as something we visit through a chat window or API call, the personal intelition interface will be always-on, aware of our context, preferences and goals, and capable of acting on our behalf across the entire federated economy.

Let’s analyze how this is already coming together.

In May, Jony Ive sold his AI device company io to OpenAI to accelerate a new AI device category. He noted at the time: “If you make something new, if you innovate, there will be consequences unforeseen, and some will be wonderful, and some will be harmful. While some of the less positive consequences were unintentional, I still feel responsibility. And the manifestation of that is a determination to try and be useful.” In other words, getting the personal AI device right is about more than an attractive venture opportunity.

Apple is looking beyond LLMs for on-device solutions that require less processing power and reduce latency when building AI apps that understand “user intent.” Last year, its researchers introduced UI-JEPA, an innovation that moves analysis of what the user wants onto the device. This strikes directly at the business model of today’s digital economy, where centralized profiling of “users” transforms intent and behavior data into vast revenue streams.

Tim Berners-Lee, the inventor of the World Wide Web, recently noted: “The user has been reduced to a consumable product for the advertiser … there’s still time to build machines that work for humans, and not the other way around.” Moving user intent to the device will drive interest in Solid, a secure personal data management standard that Berners-Lee and his colleagues have been developing for years. The standard is ideally suited to pair with new personal AI devices. For instance, Inrupt, Inc., a company founded by Berners-Lee, recently combined Solid with Anthropic’s MCP standard for Agentic Wallets. Personal control is more than a feature of this paradigm; it is the architectural safeguard as systems gain the ability to learn and act continuously.

Ultimately, these three forces are moving and converging faster than most realize. Enterprise ontologies provide the nouns and verbs, world-model research supplies durable memory and learning and the personal interface becomes the permissioned point of control. The next software era isn’t coming. It’s already here.

Brian Mulconrey is SVP at Sureify Labs.