Phison demos 10x faster AI inference on consumer PCs with software and hardware combo that enables 3x larger AI models — Nvidia, AMD, MSI, and Acer systems demoed with aiDAPTIV+

Phison’s aiDAPTIV+ stack enables large Mixture of Experts AI models and agentic AI workloads to run on client systems with limited memory capacity.

Trump’s cryptic remark suggests Apple has invested in Intel — tells press ‘Apple went in, Nvidia went in, a lot of smart people went in’

Trump said during a short interview with the press that “Apple went in” to Intel. However, it’s unclear what the president meant by this — or whether he simply misspoke.

Chinese customs told to block H200 imports, report claims — directive would effectively ban the Nvidia AI chip from China

Chinese customs officers were allegedly told to block the entry of Nvidia H200 chips, effectively banning these AI processors from the country. The command comes as other sources say that Beijing will only …

SK hynix shows 16-Hi HBM4 memory for AI accelerators — 48 GB at 10 GT/s over a 2,048-bit interface

SK hynix demonstrates 48 GB HBM4 memory with a 2,048-bit interface running at up to 10 GT/s.

Report estimates $17 billion worth of bitcoin was stolen in 2025 alone — massive haul arises from impersonation tactics and the use of AI for scams

2025 saw the largest revenue generated through crypto scams to date, with an estimated $17 billion stolen from victims worldwide.

Beijing reportedly limiting H200 purchases to those with ‘special circumstances’ — sources suggest only university R&D labs can acquire Nvidia GPUs in China

After weeks of deliberation, sources suggest that China will only allow companies to purchase H200 GPUs under “special circumstances,” although Beijing has yet to define exactly what that means.

DeepSeek research touts memory breakthrough, decoupling compute power and RAM pools to bypass GPU & HBM constraints — Engram conditional memory module commits static knowledge to system RAM

A new DeepSeek whitepaper has outlined a form of long-term memory for AI models, named Engram. Engram-based models are more performant than their MoE counterparts, and decouple compute power from system RAM pools to im…

Microsoft to overhaul AI data center building with community-first approach — says it will ‘be a good neighbor’ to communities, cover energy cost increases, and replenish water

Microsoft’s five-point plan focuses on electricity use, water consumption, building local jobs and training people, investing in local infrastructure, and the long-term skills development of the community surrounding its A…