AI News Roundup: DeepSeek V4 Goes Open Source, Google Drops Gemma 4, OpenAI Closes $122B Round
DeepSeek releases a trillion-parameter open-source model, Google launches Gemma 4 under Apache 2.0, OpenAI closes the largest funding round in history at $122B, and California sets a national testing ground for AI regulation.
DeepSeek V4: A Trillion-Parameter Open-Source Challenger
Chinese AI lab DeepSeek has released DeepSeek V4, a one-trillion-parameter Mixture-of-Experts model under the Apache 2.0 license — making it the largest fully open-weight model ever released. Despite its massive scale, the MoE architecture activates only ~37 billion parameters per token, keeping inference costs surprisingly manageable.
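The sparse-activation claim above comes from how Mixture-of-Experts routing works: a router scores all experts per token but sends the token through only the top-k of them. The sketch below is a toy illustration of that mechanism with made-up sizes; it is not DeepSeek V4's actual router, expert count, or configuration.

```python
# Toy top-k expert routing: the mechanism that lets an MoE model hold
# ~1T total parameters while activating only a small fraction per token.
# NUM_EXPERTS and TOP_K are illustrative values, not V4's real config.
import math
import random

NUM_EXPERTS = 8   # toy value; frontier MoE models use far more experts
TOP_K = 2         # experts activated per token

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def route(router_logits):
    """Pick the top-k experts for one token and renormalize their weights."""
    probs = softmax(router_logits)
    top = sorted(range(NUM_EXPERTS), key=lambda i: probs[i], reverse=True)[:TOP_K]
    weight_sum = sum(probs[i] for i in top)
    return [(i, probs[i] / weight_sum) for i in top]

random.seed(0)
logits = [random.gauss(0, 1) for _ in range(NUM_EXPERTS)]
chosen = route(logits)
print(chosen)  # list of (expert_index, normalized_weight) pairs
```

Because only the chosen experts' weights participate in each forward pass, compute per token scales with active parameters (~37B here), not total parameters (1T).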
V4 ships with a 1-million-token context window, native multimodal capabilities spanning text, image, and video, and scores 81% on SWE-bench. Perhaps most notable is the training cost: an estimated $5.2 million, a fraction of the $100M+ budgets behind comparable frontier models from OpenAI and Google. The model also runs on Huawei Ascend chips, marking the first credible trillion-parameter model that doesn’t depend on NVIDIA silicon.
At roughly $0.30 per million tokens, V4 is positioned to undercut every major commercial API. Open-source advocates are calling it the most significant release since Llama 2, and it puts real competitive pressure on GPT-5.4 and Gemini 3.1 Pro at a fraction of the cost.
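To make the pricing claim concrete, here is back-of-envelope arithmetic at the quoted $0.30 per million tokens. The $10/M figure for a proprietary frontier API is a hypothetical placeholder chosen for illustration, not a quoted price from any vendor.

```python
# Rough monthly cost comparison at the article's quoted V4 price.
# PROPRIETARY_PRICE is an assumed placeholder, not a real list price.
V4_PRICE = 0.30           # USD per 1M tokens (quoted in the article)
PROPRIETARY_PRICE = 10.0  # USD per 1M tokens (assumed for comparison)

monthly_tokens_millions = 500  # e.g. a workload of 500M tokens/month

v4_cost = V4_PRICE * monthly_tokens_millions
prop_cost = PROPRIETARY_PRICE * monthly_tokens_millions
print(f"V4: ${v4_cost:,.0f}/mo vs proprietary: ${prop_cost:,.0f}/mo "
      f"(~{prop_cost / v4_cost:.0f}x cheaper)")
```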
Google Launches Gemma 4 Under Apache 2.0
Google DeepMind released Gemma 4 on April 2 — four open-weight models built from the same research behind Gemini 3, all under the Apache 2.0 license. The family includes 2B, 4B, 26B Mixture-of-Experts (26B-A4B), and 31B Dense variants, with context windows up to 256K tokens and native vision and audio processing.
The 26B-A4B variant uses a Mixture-of-Experts architecture with 128 small experts, activating only 3.8 billion parameters per forward pass — achieving roughly 97% of the dense 31B model’s quality at a fraction of the compute. The 31B Dense model currently sits at #3 on LMArena’s text-only leaderboard with an Elo of ~1452.
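The quality-per-compute tradeoff stated above reduces to simple arithmetic on the active-parameter fraction. The calculation below ignores attention and embedding overhead, so it is a rough simplification rather than a measured FLOPs ratio.

```python
# Compute-fraction arithmetic for the 26B-A4B claim as described above:
# ~3.8B active parameters vs a 31B dense baseline, at ~97% of its quality.
# This ignores attention/embedding costs, so it is only a rough estimate.
ACTIVE_PARAMS_B = 3.8
DENSE_PARAMS_B = 31.0
QUALITY_RETAINED = 0.97  # fraction of dense-model quality (from the article)

compute_fraction = ACTIVE_PARAMS_B / DENSE_PARAMS_B
print(f"~{compute_fraction:.0%} of dense per-token compute "
      f"for {QUALITY_RETAINED:.0%} of dense quality")
```

In other words, each forward pass touches roughly an eighth of the parameters a dense 31B model would, which is what makes the "fraction of the compute" claim plausible on paper.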
What makes Gemma 4 stand out is its edge deployment story: the smaller models run completely offline on phones, Raspberry Pi, and NVIDIA Jetson devices with near-zero latency. Google is directly challenging Meta’s Llama dominance in the open-source space, and the Apache 2.0 license removes the usage restrictions that limited some Llama deployments.
OpenAI Closes Record $122 Billion Funding Round
OpenAI has closed the largest venture funding round in history at $122 billion, valuing the company at $852 billion. Amazon led with up to $50 billion, NVIDIA invested $30 billion, and SoftBank committed another $30 billion. Roughly $3 billion came from retail investors — a first for a pre-IPO AI company of this scale.
The company is now generating over $2 billion per month in revenue, with annualized revenue crossing $25 billion at the end of February. Enterprise now accounts for 40% of revenue, up from 30% last year, and is on track to reach parity with consumer by year-end. An IPO could land as soon as Q4 2026.
However, OpenAI is still burning cash at a staggering rate — projected losses of $14 billion in 2026 driven by compute costs, research spending, and infrastructure buildout. The company is projecting $280 billion in total revenue by 2030, a target that would require more than 10x growth in four years.
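The "more than 10x growth in four years" framing can be sanity-checked with a compound-growth calculation, assuming a 2026-to-2030 runway from the ~$25B annualized figure cited above.

```python
# Implied compound annual growth rate for the $25B -> $280B target,
# assuming a four-year runway (2026 -> 2030) as a simplification.
current_arr_b = 25.0   # annualized revenue now (from the article)
target_2030_b = 280.0  # projected 2030 revenue (from the article)
years = 4

multiple = target_2030_b / current_arr_b  # total growth needed
cagr = multiple ** (1 / years) - 1        # implied annual growth rate
print(f"{multiple:.1f}x total, ~{cagr:.0%} per year")
```

That works out to roughly 80%+ revenue growth every year for four consecutive years, which puts the ambition of the target in perspective.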
OpenAI Retires GPT-4o from All Plans
As of April 3, 2026, OpenAI has fully retired GPT-4o from all ChatGPT plans, completing a phase-out that began in February. GPT-4o, GPT-4.1, GPT-4.1 mini, and o4-mini have all been removed from ChatGPT, though they remain available through the API with no announced deprecation timeline.
The retirement reflects how rapidly the industry has moved: only 0.1% of users were still choosing GPT-4o daily, with the vast majority having shifted to GPT-5.2. OpenAI extended access for Business, Enterprise, and Edu customers with Custom GPTs through April 3, but that window has now closed. It’s a milestone that shows how quickly frontier AI models become legacy products — GPT-4o launched just two years ago.
California Sets National Testing Ground for AI Regulation
Governor Gavin Newsom signed a first-of-its-kind executive order on March 30 establishing AI guardrails for state procurement contracts. Companies seeking to do business with California must now disclose safeguards against misuse, including protections against child exploitation, unlawful discrimination, and civil rights violations.
The most provocative provision: California’s Chief Information Security Officer can now independently review and overrule federal supply-chain risk designations of AI companies. This is a direct response to the Pentagon’s recent classification of Anthropic as a supply-chain risk — California reserves the right to keep buying from companies the federal government tries to block.
Meanwhile, the state legislature is advancing a sweeping AI chatbot bill aimed at protecting minors, along with several other AI-focused measures. With the Trump administration pushing for a national standard that would preempt state-level AI laws, California is positioning itself as the de facto regulatory benchmark — and AI companies are likely to treat the state’s rules as the floor nationwide.
Open-Source AI Hits an Inflection Point
This week saw an unprecedented wave of open-source AI releases: Google’s Gemma 4, DeepSeek V4, PrismML’s 1-bit Bonsai, H Company’s Holo3, and Arcee’s Trinity all shipped under Apache 2.0 in the same week, covering every device class from phones to data centers.
The trend is clear: open-source models are rapidly closing the gap with proprietary frontier systems, and the cost to train competitive models is dropping fast. With DeepSeek training a trillion-parameter model for $5.2 million and Google releasing Gemma models that run on a Raspberry Pi, the barriers to entry for building on top of frontier-quality AI have never been lower.
By the Numbers
- $122B — OpenAI’s record-breaking funding round, valuing the company at $852 billion
- 1 trillion — parameters in DeepSeek V4, the largest open-weight model ever released
- $5.2M — estimated training cost for DeepSeek V4, vs. $100M+ for comparable proprietary models
- 0.1% — daily GPT-4o usage rate at the time of its retirement from ChatGPT
- $25B+ — OpenAI’s annualized revenue as of February 2026
- ~1452 — Gemma 4 31B’s Elo score, ranking #3 on LMArena’s text-only leaderboard
What to Watch This Week
- DeepSeek V4 benchmarks — independent evaluations are rolling in; watch for coding and reasoning results that could reshape the open-vs-proprietary debate
- OpenAI IPO signals — with the $122B round closed and $852B valuation, watch for S-1 filing preparations and banker appointments in Q2
- California AI legislation — the state legislature is advancing multiple AI bills alongside Newsom’s executive order; expect industry lobbying to intensify
- Georgia AI bills — Gov. Kemp has until Monday April 6 to sign or veto three AI-related bills covering chatbot disclosure, child safety, and healthcare AI