Web-Based Video Editor Billed As The Future Of Collaborative Video

This web-based video production tool is designed for team use. It can transcribe videos and automatically generate subtitles and captions in a range of different styles.

How AI Is Rewiring Filmmaking, And Why Craft Still Wins

OneDay co-founders Samir Mallal and Bouha Kazmi are pioneering a new approach to filmmaking that combines AI tools with traditional craft and human creativity.

Fei-Fei Li’s World Labs Raises $1 Billion, Ricursive AI Chip Design Snares $335 Million, Google Joins AI Music Parade

Fei-Fei Li’s World Labs raises $1 Billion, Ricursive secures $335 Million for AI chip design, and Google adds Lyria 3 music generation to Gemini.

Galaxy S26 Ultra’s Camera Gambit: 5x Better, 3x Worse?

Leaked Galaxy S26 specs reveal a hidden camera downgrade for the Ultra and stagnant hardware for base models. Is Samsung’s AI worth the expected $100 price hike?

‘Fight Club’ 4K Blu-Ray Details Revealed—Alongside A One-Night-Only Cinema Re-Release

20th Century Studios’ 4K department has delved into its archives again – and I can’t wait to see the results.

Google Gemini 3.1 Pro first impressions: a ‘Deep Think Mini’ with adjustable reasoning on demand

For the past three months, Google’s Gemini 3 Pro has held its ground as one of the most capable frontier models available. But in the fast-moving world of AI, three months is a lifetime — and competitors have not been standing still.

Earlier today, Google released Gemini 3.1 Pro, an update that brings a key innovation to the company’s workhorse power model: three levels of adjustable thinking that effectively turn it into a lightweight version of Google’s specialized Deep Think reasoning system.

The release marks the first time Google has issued a “point one” update to a Gemini model, signaling a shift in the company’s release strategy from periodic full-version launches to more frequent incremental upgrades. More importantly for enterprise AI teams evaluating their model stack, 3.1 Pro’s new three-tier thinking system — low, medium, and high — gives developers and IT leaders a single model that can scale its reasoning effort dynamically, from quick responses for routine queries up to multi-minute deep reasoning sessions for complex problems.

The model is rolling out now in preview across the Gemini API via Google AI Studio, Gemini CLI, Google’s agentic development platform Antigravity, Vertex AI, Gemini Enterprise, Android Studio, the consumer Gemini app, and NotebookLM.

The ‘Deep Think Mini’ effect: adjustable reasoning on demand

The most consequential feature in Gemini 3.1 Pro is not a single benchmark number — it is the introduction of a three-tier thinking level system that gives users fine-grained control over how much computational effort the model invests in each response.

Gemini 3 Pro offered only two thinking modes: low and high. The new 3.1 Pro adds a medium setting (similar to the previous high) and, critically, overhauls what “high” means. When set to high, 3.1 Pro behaves as a “mini version of Gemini Deep Think” — the company’s specialized reasoning model that was updated just last week.

The implication for enterprise deployment could be significant. Rather than routing requests to different specialized models based on task complexity — a common but operationally burdensome pattern — organizations can now use a single model endpoint and adjust reasoning depth based on the task at hand. Routine document summarization can run on low thinking with fast response times, while complex analytical tasks can be elevated to high thinking for Deep Think–caliber reasoning.
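The routing pattern described above can be sketched in a few lines. Note that the classifier heuristic, model identifier, and payload field names below are illustrative assumptions, not the actual Gemini API schema:

```python
# Sketch of single-endpoint reasoning-depth routing (hypothetical
# heuristic and field names; the real Gemini API parameters may differ).

def classify_task(prompt: str) -> str:
    """Naive complexity heuristic: route by keywords and prompt length."""
    deep_markers = ("prove", "analyze", "multi-step", "plan")
    if any(m in prompt.lower() for m in deep_markers):
        return "high"      # Deep Think-caliber reasoning
    if len(prompt) > 500:
        return "medium"
    return "low"           # fast path for routine queries

def build_request(prompt: str) -> dict:
    """Assemble one request payload; only the thinking level varies."""
    return {
        "model": "gemini-3.1-pro-preview",  # assumed model id
        "contents": prompt,
        "thinking_level": classify_task(prompt),
    }
```

In production, the classifier would likely be replaced by per-route defaults or caller-supplied hints, but the operational win is the same: one endpoint, one model, variable reasoning spend.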

Benchmark performance: more than double the reasoning score of 3 Pro

Google’s published benchmarks tell a story of dramatic improvement, particularly in areas associated with reasoning and agentic capability.

On ARC-AGI-2, a benchmark that evaluates a model’s ability to solve novel abstract reasoning patterns, 3.1 Pro scored 77.1% — more than double the 31.1% achieved by Gemini 3 Pro and substantially ahead of Anthropic’s Sonnet 4.6 (58.3%) and Opus 4.6 (68.8%). This result also eclipses OpenAI’s GPT-5.2 (52.9%).

The gains extend across the board. On Humanity’s Last Exam, a rigorous academic reasoning benchmark, 3.1 Pro achieved 44.4% without tools, up from 37.5% for 3 Pro and ahead of both Claude Sonnet 4.6 (33.2%) and Opus 4.6 (40.0%). On GPQA Diamond, a scientific knowledge evaluation, 3.1 Pro reached 94.3%, outperforming all listed competitors.

Where the results become particularly relevant for enterprise AI teams is in the agentic benchmarks — the evaluations that measure how well models perform when given tools and multi-step tasks, the kind of work that increasingly defines production AI deployments.

On Terminal-Bench 2.0, which evaluates agentic terminal coding, 3.1 Pro scored 68.5% compared to 56.9% for its predecessor. On MCP Atlas, a benchmark measuring multi-step workflows using the Model Context Protocol, 3.1 Pro reached 69.2% — a 15-point improvement over 3 Pro’s 54.1% and nearly 10 points ahead of both Claude and GPT-5.2. And on BrowseComp, which tests agentic web search capability, 3.1 Pro achieved 85.9%, surging past 3 Pro’s 59.2%.

Why Google chose a ‘0.1’ release — and what it signals

The versioning decision is itself noteworthy. Previous Gemini releases followed a pattern of dated previews — multiple 2.5 previews, for instance, before reaching general availability. The choice to designate this update as 3.1 rather than another 3 Pro preview suggests Google views the improvements as substantial enough to warrant a version increment, while the “point one” framing sets expectations that this is an evolution, not a revolution.

Google’s blog post states that 3.1 Pro builds directly on lessons from the Gemini Deep Think series, incorporating techniques from both earlier and more recent versions. The benchmarks strongly suggest that reinforcement learning has played a central role in the gains, particularly on tasks like ARC-AGI-2, coding benchmarks, and agentic evaluations — exactly the domains where RL-based training environments can provide clear reward signals.

The model is being released in preview rather than as a general availability launch, with Google stating it will continue making advancements in areas such as agentic workflows before moving to full GA.

Competitive implications for your enterprise AI stack

For IT decision makers evaluating frontier model providers, Gemini 3.1 Pro’s release should prompt a rethink not only of which models to choose, but also of how to adapt their own products and services to such a fast pace of change.

The question now is whether this release triggers a response from competitors. Gemini 3 Pro’s original launch last November set off a wave of model releases across both proprietary and open-weight ecosystems.

With 3.1 Pro reclaiming benchmark leadership in several critical categories, the pressure is on Anthropic, OpenAI, and the open-weight community to respond — and in the current AI landscape, that response is likely measured in weeks, not months.

Availability

Gemini 3.1 Pro is available now in preview through the Gemini API in Google AI Studio, Gemini CLI, Google Antigravity, and Android Studio for developers. Enterprise customers can access it through Vertex AI and Gemini Enterprise. Consumers on Google AI Pro and Ultra plans can access it through the Gemini app and NotebookLM.

The best cheap drone for beginners is now at its lowest-ever price on Amazon: Save 34% on the Potensic Atom SE

We rate the Potensic Atom SE as the best cheap drone for beginners thanks to its build quality and value for money. Now this bundle is at its lowest-ever price.

Google launches Gemini 3.1 Pro, retaking AI crown with 2X+ reasoning performance boost

Late last year, Google briefly took the crown for most powerful AI model in the world with the launch of Gemini 3 Pro — only to be surpassed within weeks by OpenAI and Anthropic releasing new models, as is common in the fiercely competitive AI race.

Now Google is back to retake the throne with an updated version of that flagship model: Gemini 3.1 Pro, positioned as a smarter baseline for tasks where a simple response is insufficient — targeting science, research, and engineering workflows that demand deep planning and synthesis.

Already, evaluations by third-party firm Artificial Analysis show that Google’s Gemini 3.1 Pro has leapt to the front of the pack and is once more the most powerful and performant AI model in the world.

A big leap in core reasoning

The most significant advancement in Gemini 3.1 Pro lies in its performance on rigorous logic benchmarks. Most notably, the model achieved a verified score of 77.1% on ARC-AGI-2.

This specific benchmark is designed to evaluate a model’s ability to solve entirely new logic patterns it has not encountered during training.

This result represents more than double the reasoning performance of the previous Gemini 3 Pro model.

Beyond abstract logic, internal benchmarks indicate that 3.1 Pro is highly competitive across specialized domains:

  • Scientific Knowledge: It scored 94.3% on GPQA Diamond.

  • Coding: It reached an Elo of 2887 on LiveCodeBench Pro and scored 80.6% on SWE-Bench Verified.

  • Multimodal Understanding: It achieved 92.6% on MMMU.

These technical gains are not just incremental; they represent a refinement in how the model handles “thinking” tokens and long-horizon tasks, providing a more reliable foundation for developers building autonomous agents.

Improved vibe coding and 3D synthesis

Google is demonstrating the model’s utility through “intelligence applied”—shifting the focus from chat interfaces to functional outputs.

One of the most prominent features is the model’s ability to generate “vibe-coded” animated SVGs directly from text prompts. Because these are code-based rather than pixel-based, they remain scalable and keep tiny file sizes compared with traditional video, yielding more detailed, presentable, and professional visuals for websites, presentations, and other enterprise applications.
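To make the size argument concrete, here is a hand-written (not model-generated) example of the kind of artifact described: a complete, SMIL-animated SVG that fits in a few hundred bytes and scales to any resolution, emitted from a short Python function:

```python
# A minimal animated SVG, illustrating why code-based graphics stay
# tiny and scalable: the entire animation is a short markup string.

def pulsing_circle_svg() -> str:
    """Return a self-contained SVG of a circle that pulses forever."""
    return (
        '<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 100 100">'
        '<circle cx="50" cy="50" r="10" fill="#4285f4">'
        '<animate attributeName="r" values="10;40;10" dur="2s" '
        'repeatCount="indefinite"/>'
        "</circle></svg>"
    )

svg = pulsing_circle_svg()
print(len(svg.encode()), "bytes")  # hundreds of bytes, vs. megabytes of video
```

A video clip of the same motion would weigh megabytes and blur when scaled; the markup above does neither, which is the practical appeal of generating graphics as code.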

Other showcased applications include:

  • Complex System Synthesis: The model successfully configured a public telemetry stream to build a live aerospace dashboard visualizing the International Space Station’s orbit.

  • Interactive Design: In one demo, 3.1 Pro coded a complex 3D starling murmuration that users can manipulate via hand-tracking, accompanied by a generative audio score.

  • Creative Coding: The model translated the atmospheric themes of Emily Brontë’s Wuthering Heights into a functional, modern web design, demonstrating an ability to reason through tone and style rather than just literal text.

Business impact and community reactions

Enterprise partners have already begun integrating the preview version of 3.1 Pro, reporting noticeable improvements in reliability and efficiency.

Vladislav Tankov, Director of AI at JetBrains, noted a 15% quality improvement over previous versions, stating the model is “stronger, faster… and more efficient, requiring fewer output tokens”. Other industry reactions include:

  • Databricks: CTO Hanlin Tang reported that the model achieved “best-in-class results” on OfficeQA, a benchmark for grounded reasoning across tabular and unstructured data.

  • Cartwheel: Co-founder Andrew Carr highlighted the model’s “substantially improved understanding of 3D transformations,” noting it resolved long-standing rotation order bugs in 3D animation pipelines.

  • Hostinger Horizons: Head of Product Dainius Kavoliunas observed that the model understands the “vibe” behind a prompt, translating intent into style-accurate code for non-developers.

Pricing, licensing, and availability

For developers, the most striking aspect of the 3.1 Pro release is the “reasoning-to-dollar” ratio. When Gemini 3 Pro launched, it was positioned in the mid-high price range at $2.00 per million input tokens for standard prompts. Gemini 3.1 Pro maintains this exact pricing structure, effectively offering a massive performance upgrade at no additional cost to API users.

  • Input Price: $2.00 per 1M tokens for prompts up to 200k; $4.00 per 1M tokens for prompts over 200k.

  • Output Price: $12.00 per 1M tokens for prompts up to 200k; $18.00 per 1M tokens for prompts over 200k.

  • Context Caching: Billed at $0.20 to $0.40 per 1M tokens depending on prompt size, plus a storage fee of $4.50 per 1M tokens per hour.

  • Search Grounding: 5,000 prompts per month are free, followed by a charge of $14 per 1,000 search queries.
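The tiered rates above can be turned into a quick back-of-the-envelope cost helper. This sketch covers input and output tokens only; context caching, storage, and search-grounding fees are omitted for simplicity:

```python
# Estimate per-request Gemini 3.1 Pro API cost from the published
# input/output rates (caching, storage, and grounding fees omitted).

TIER_LIMIT = 200_000  # prompt-size threshold, in tokens

RATES = {  # USD per 1M tokens: (input, output)
    "small": (2.00, 12.00),   # prompts up to 200k tokens
    "large": (4.00, 18.00),   # prompts over 200k tokens
}

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return estimated USD cost; the rate tier is set by prompt size."""
    tier = "small" if input_tokens <= TIER_LIMIT else "large"
    in_rate, out_rate = RATES[tier]
    return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000

# A 10k-token prompt with a 2k-token answer:
print(round(estimate_cost(10_000, 2_000), 4))  # → 0.044
```

At these rates, a typical request costs fractions of a cent, which is what makes the article’s “reasoning-to-dollar” framing notable: the reasoning improved while the denominator stayed flat.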

For consumers, the model is rolling out in the Gemini app and NotebookLM with higher limits for Google AI Pro and Ultra subscribers.

Licensing implications

As a proprietary model offered through Vertex AI in Google Cloud and the Gemini API, 3.1 Pro follows a standard commercial SaaS (Software as a Service) model rather than an open-source license.

For enterprise users, this provides “grounded reasoning” within the security perimeter of Vertex AI, allowing businesses to operate on their own data with confidence.

The “Preview” status allows Google to refine the model’s safety and performance before general availability, a common practice in high-stakes AI deployment.

By doubling down on core reasoning and specialized benchmarks like ARC-AGI-2, Google is signaling that the next phase of the AI race will be won by models that can think through a problem, not just predict the next word.

Final Expands Line-Up Of Gaming Earphones By Launching Two New Models

Final’s two new VR3000 gaming earphones have detachable cables, and one model adds a condenser microphone, so players can choose what suits their gaming style.

Final Aims To Redefine What Its New A2000 Wired Earphones Can Achieve

The Final A2000 is the result of Final’s new sound-quality evaluation methodology, a system first developed during the creation of the company’s flagship A10000 model.