AI News Roundup: NVIDIA Launches Rubin at GTC, Tech Giants Rally Behind Anthropic, Knuth Names Paper After Claude
Jensen Huang unveils the Vera Rubin platform at GTC 2026, the tech industry files amicus briefs backing Anthropic against the Pentagon, and Donald Knuth publishes a paper celebrating Claude Opus solving his graph theory conjecture.
NVIDIA Launches Vera Rubin Platform at GTC 2026
NVIDIA CEO Jensen Huang kicked off GTC 2026 in San Jose with the official launch of the Vera Rubin platform — the company’s most ambitious compute architecture yet and the successor to Blackwell. The platform comprises six new chips: the Rubin GPU, Vera CPU, NVLink 6 Switch, ConnectX-9 SuperNIC, BlueField-4 DPU, and Spectrum-6 Ethernet Switch. Made up of 1.3 million components, the system delivers 10x more performance per watt than its predecessor.
The business case is equally striking: Rubin promises a 10x reduction in inference token cost and a 4x reduction in GPUs needed to train mixture-of-experts models compared to Blackwell. Rubin is already in full production, with AWS, Google Cloud, Microsoft Azure, and Oracle Cloud among the first to deploy Vera Rubin–based instances in the second half of 2026. Huang also revealed that NVIDIA sees $1 trillion in orders for Blackwell and Vera Rubin through 2027 — and previewed Kyber, the next-generation rack architecture after Rubin, expected to ship in 2027.
Tech Industry Rallies Behind Anthropic in Pentagon Showdown
The legal battle between Anthropic and the U.S. Department of Defense escalated significantly last week as major tech companies and industry groups filed amicus briefs supporting Anthropic’s lawsuit. The Pentagon designated Anthropic a “supply chain risk” after the AI company refused to allow the DOD to use Claude for mass surveillance of Americans or autonomous weapons systems.
Microsoft filed a brief urging the court to temporarily block the designation. More than 30 employees from OpenAI and Google DeepMind — including Google chief scientist Jeff Dean — signed an amicus brief warning the blacklist threatens the entire American AI industry. Twenty-two retired generals warned that abrupt tool changes could harm troops in the field. A hearing on whether to grant Anthropic temporary relief is set for March 24.
Donald Knuth Names Paper After Claude AI That Solved His Conjecture
Legendary computer scientist Donald Knuth published a paper titled “Claude’s Cycles” that opens with the words “Shock! Shock!” The paper describes how Anthropic’s Claude Opus 4.6 solved an open graph theory conjecture Knuth had been working on for weeks — finding a general construction rule for decomposing directed graphs into Hamiltonian cycles.
The process involved 31 guided explorations over roughly one hour, during which the model tested linear formulas, applied simulated annealing, hit dead ends, changed strategies, and kept going. Knuth verified the construction and wrote the rigorous proof himself. The paper has sparked widespread discussion in the mathematics and AI communities, with Knuth closing: “It seems I’ll have to revise my opinions about generative AI one of these days.”
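For readers unfamiliar with the result's setting: a Hamiltonian decomposition splits a directed graph's arcs into cycles that each visit every vertex exactly once. Knuth's construction itself is not reproduced here, but the underlying notion can be illustrated with a toy verifier (all function names are hypothetical and not from the paper):

```python
def cycle_arcs(cycle):
    """Arcs of a directed cycle given as a vertex sequence, e.g. [0, 1, 2]."""
    return [(cycle[i], cycle[(i + 1) % len(cycle)]) for i in range(len(cycle))]

def is_hamiltonian_decomposition(vertices, arcs, cycles):
    """True if every cycle visits each vertex exactly once and the cycles
    together use each arc of the digraph exactly once."""
    used = []
    for c in cycles:
        if sorted(c) != sorted(vertices):   # each cycle must be Hamiltonian
            return False
        used.extend(cycle_arcs(c))
    return sorted(used) == sorted(arcs)     # arcs partitioned, none repeated

# The complete digraph on 3 vertices splits into two Hamiltonian cycles:
# 0 -> 1 -> 2 -> 0 and 0 -> 2 -> 1 -> 0.
V = [0, 1, 2]
A = [(u, v) for u in V for v in V if u != v]
print(is_hamiltonian_decomposition(V, A, [[0, 1, 2], [0, 2, 1]]))  # True
```

The hard part, of course, is not checking a decomposition but finding a general rule that constructs one, which is what the model's guided search converged on.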
OpenAI Launches GPT-5.4 With Computer Use and Configurable Reasoning
OpenAI released GPT-5.4 on March 5, calling it the company’s most capable model for professional work. The headline feature is native computer-use capabilities — the first general-purpose OpenAI model that can operate computers and carry out complex workflows across applications. It also supports a 1-million-token context window and configurable reasoning with five discrete depth levels.
A new “mid-response steering” feature lets users adjust the model’s approach while it’s still thinking. GPT-5.4 scored approximately 80% on SWE-bench Verified, placing it in direct competition with Claude Opus 4.6, and makes factual errors 33% less often than GPT-5.2. OpenAI has also surpassed $25 billion in annualized revenue and is reportedly taking early steps toward a public listing as soon as late 2026.
Eli Lilly Fires Up Pharma’s Most Powerful AI Supercomputer
Eli Lilly inaugurated LillyPod, the world’s first NVIDIA DGX SuperPOD built with DGX B300 systems, delivering more than 9,000 petaflops of AI performance. Assembled in just four months and powered by 1,016 Blackwell GPUs, the Indianapolis-based system is already in production across genomics, molecule design, imaging, and manufacturing optimization.
Lilly aims to cut the typical 10-year drug development timeline in half by using the system to simulate billions of molecular hypotheses in parallel before committing to physical experiments. Lilly and NVIDIA also announced a $1 billion co-innovation lab in the Bay Area. The company has committed to running LillyPod on 100% renewable electricity by 2030.
Anthropic Launches Research Institute for AI’s Societal Impact
On March 11, Anthropic introduced The Anthropic Institute, a research arm led by co-founder Jack Clark in his new role as Head of Public Benefit. The interdisciplinary team of roughly 30 machine learning engineers, economists, and social scientists consolidates three existing research groups: Frontier Red Team, Societal Impacts, and Economic Research.
Notable hires include Matt Botvinick (formerly Google DeepMind) leading work on AI and the rule of law, economist Anton Korinek (University of Virginia) studying how AI could reshape economic activity, and Zoë Hitzig (formerly OpenAI) connecting economics research to model development. The launch arrives at a pivotal moment as Anthropic simultaneously fights the Pentagon’s supply-chain designation.
By the Numbers
- $1 trillion — NVIDIA’s projected Blackwell and Vera Rubin orders through 2027
- 10x — Reduction in inference token cost from Rubin vs. Blackwell
- $25 billion — OpenAI’s annualized revenue, with a potential IPO on the horizon
- 9,000 petaflops — LillyPod’s AI performance, the most powerful pharma-owned supercomputer
- 31 explorations — The number of guided AI sessions it took Claude Opus to crack Knuth’s conjecture
What to Watch This Week
- NVIDIA GTC Sessions (Through March 19) — Deep dives on Vera Rubin benchmarks, physical AI demos, and robotics throughout the conference
- UK AI & Copyright Reports (March 18) — The UK government publishes two landmark reports under the Data (Use and Access) Act that could reshape training-data rules globally
- Anthropic v. DOD Hearing (March 24) — The court decides whether to grant Anthropic temporary relief from the supply-chain designation, setting precedent for AI companies’ ethical red lines
- Gemini 3.1 Pro Momentum — Google’s latest model dominates 13 of 16 major benchmarks, and Gemini paid subscribers grew 258% year-over-year — watch for developer adoption data