Andrej Karpathy on the End of Coding, AutoResearch, and the Loopy Era of AI
OpenAI co-founder Andrej Karpathy sat down with No Priors to discuss why he hasn’t written a line of code since December, how his AutoResearch project ran 700 experiments in two days, and why the era of agentic engineering changes everything.
In a wide-ranging 66-minute conversation on the No Priors podcast with Sarah Guo, OpenAI co-founder and AI educator Andrej Karpathy laid out his vision for what he calls the “loopy era” of AI — a phase where autonomous agents don’t just assist with coding but run entire research loops, discover novel optimizations, and fundamentally reshape how software gets built. Here are the key takeaways.
The December Shift: From Coding to Orchestrating
Karpathy revealed that he hasn’t written a line of code since December 2025. Instead, he spends 16 hours a day directing AI agents — assigning tasks to multiple parallel workers across his codebase, reviewing their output, and steering high-level decisions. He describes the experience as a kind of productive disorientation, constantly exploring what’s newly possible as agent capabilities cross what he calls a “coherence threshold.”
The shift, he says, isn’t just about individual productivity. When the cost of producing working code drops toward zero, everything priced on code-production cost gets repriced: team structures, product-scoping timelines, hypothesis-testing cycles, and the skills engineers actually need.
AutoResearch: 700 Experiments, Two Days, Zero Humans
Karpathy’s open-source AutoResearch project is the logical extreme of agentic leverage. Using a single markdown prompt and roughly 630 lines of training code on one GPU, the system autonomously ran 700 experiments in two days, discovering 20 optimizations that improved LLM training — including hyperparameter tunings Karpathy himself had missed despite two decades of manual optimization experience.
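The interview doesn't detail AutoResearch's internals, but the core shape of such a system is a propose/run/score loop. The sketch below is a deliberately toy version under assumed simplifications: `run_experiment` is a stand-in objective rather than a real training run, and the "proposal" step is crude local search rather than agentic reasoning over past results.

```python
import random

def run_experiment(config):
    """Hypothetical stand-in for a training run: returns a 'validation loss'
    for a given learning rate. A real system would launch actual training."""
    lr = config["lr"]
    # Pretend the optimal learning rate is 3e-3; loss grows with distance from it.
    return (lr - 3e-3) ** 2 + 0.1

def research_loop(n_experiments=50, seed=0):
    """Propose configs, run them, keep the best -- the propose/run/score loop,
    radically simplified from anything AutoResearch actually does."""
    rng = random.Random(seed)
    best_config, best_loss = None, float("inf")
    for _ in range(n_experiments):
        # Propose: sample near the current best (crude local search; an agentic
        # system would instead reason over the full experiment history).
        if best_config is None:
            lr = 10 ** rng.uniform(-5, -1)
        else:
            lr = best_config["lr"] * 10 ** rng.uniform(-0.5, 0.5)
        config = {"lr": lr}
        loss = run_experiment(config)   # Run
        if loss < best_loss:            # Score, keep the winner
            best_config, best_loss = config, loss
    return best_config, best_loss

best, loss = research_loop()
print(f"best lr: {best['lr']:.4g}, loss: {loss:.4f}")
```

The point of the sketch is the structure, not the search strategy: once the loop requires no human in the middle, experiment count is bounded by compute, not attention.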
When he applied those 20 tweaks to a larger language model, the result was an 11% speedup in training time. When Shopify CEO Tobias Lütke tried AutoResearch on internal company data, it ran 37 experiments overnight and delivered a 19% performance gain.
The next frontier, Karpathy says, is meta-optimization: agents that don’t just optimize code but optimize the research program itself — iteratively improving their own experimental methodology.
The Loopy Era and Second-Order Effects
Karpathy frames the current moment as the “loopy era” — where agents run continuous self-improvement loops on code and research. The second-order effects go far beyond faster coding:
- Software becomes ephemeral. When natural language can conjure any tool, entire categories of apps become redundant. Karpathy replaced six home-automation apps with a single agentic entity (“Dobby”) that controls lights, HVAC, security, and pool systems through conversation.
- APIs become the product. If the customer is no longer a human clicking buttons but an agent calling endpoints, software should expose raw capabilities rather than polished UIs.
- Model speciation. Rather than forcing monolithic models to handle everything, Karpathy advocates for specialized intelligences — smaller models optimized for specific domains, much like nature evolved specialized brains for different ecological niches.
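The "APIs become the product" point can be made concrete with a small sketch: instead of a UI, each capability is registered with a machine-readable description that an agent can discover and then invoke directly. All names and the schema format here are illustrative assumptions, not any real product's API.

```python
import json

CAPABILITIES = {}

def capability(name, description, params):
    """Register a function as an agent-callable capability with a
    machine-readable description (a loose analogue of a tool schema)."""
    def register(fn):
        CAPABILITIES[name] = {"description": description, "params": params, "fn": fn}
        return fn
    return register

@capability("set_thermostat", "Set target temperature in Celsius",
            {"celsius": "number"})
def set_thermostat(celsius):
    # The raw capability: no buttons, no screens, just the action itself.
    return {"ok": True, "target_c": celsius}

def describe():
    """What an agent fetches first: the capability catalogue, not a UI."""
    return json.dumps({n: {k: v for k, v in c.items() if k != "fn"}
                       for n, c in CAPABILITIES.items()}, indent=2)

def invoke(name, **kwargs):
    """What an agent calls second: a named capability with arguments."""
    return CAPABILITIES[name]["fn"](**kwargs)

print(describe())
print(invoke("set_thermostat", celsius=21))
```

A discover-then-invoke surface like this is exactly what a conversational agent such as Karpathy's "Dobby" would sit on top of.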
On Model “Jaggedness”
Despite their power, current models remain deeply uneven. Karpathy notes they excel at verifiable, reward-optimized tasks like code generation and kernel optimization, but fail at softer domains — joke-telling, for instance, remains stuck on stale material. His assessment: a model can feel like a brilliant PhD student one moment and a 10-year-old the next, depending on the task.
Open Source: 8 Months Behind and Closing
Karpathy places the open-source gap at roughly 8 months behind frontier models, down from 18 months and still compressing. He compares open-source AI to Linux: the industry demands a common open platform, and businesses will fund its development. Basic agentic capabilities — home automation, coding assistance, task management — should be achievable with open models within one to three years.
Jobs, Education, and MicroGPT
The interview touched on workforce implications, with Karpathy citing LinkedIn’s 2026 Economic Graph showing a 15% decline in entry-level programming roles since 2024. But his stance isn’t doom and gloom — he believes the right response is rethinking education entirely.
His project MicroGPT, a full GPT training implementation in 200 lines of pure Python, demonstrates the approach: rather than teaching people to code, teach agents to teach. A structured curriculum (“skill”) lets an agent teach each person at their level, in their language, with infinite patience. The role of human teachers shifts from content delivery to designing learning frameworks that agents can execute.
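To give a flavor of what "a full training implementation in pure Python" means, here is a far smaller sketch in the same spirit — not MicroGPT's actual code, which implements a full GPT, but a character bigram model showing the same loop shape with no libraries: forward pass, cross-entropy loss, backward pass, parameter update.

```python
import math, random

text = "hello world hello world "
chars = sorted(set(text))
stoi = {c: i for i, c in enumerate(chars)}
V = len(chars)
pairs = [(stoi[a], stoi[b]) for a, b in zip(text, text[1:])]

# Logits table: W[i][k] is the score for character k following character i.
random.seed(0)
W = [[random.gauss(0, 0.1) for _ in range(V)] for _ in range(V)]

def step(lr=0.5):
    """One epoch of cross-entropy training over all bigram pairs."""
    loss, grad = 0.0, [[0.0] * V for _ in range(V)]
    for i, j in pairs:
        exps = [math.exp(w) for w in W[i]]      # forward: softmax over row i
        Z = sum(exps)
        probs = [e / Z for e in exps]
        loss -= math.log(probs[j])              # negative log-likelihood
        for k in range(V):                      # backward: d(loss)/d(logit)
            grad[i][k] += probs[k] - (1.0 if k == j else 0.0)
    for i in range(V):                          # update: plain gradient descent
        for k in range(V):
            W[i][k] -= lr * grad[i][k] / len(pairs)
    return loss / len(pairs)

losses = [step() for _ in range(200)]
print(f"loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

The pedagogical bet is that a learner (human or agent-taught) who can trace every line of a loop like this has nothing hidden from them — the same transparency MicroGPT aims for at GPT scale.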
The Bottom Line
Karpathy’s core principle: remove yourself as the bottleneck. Maximum leverage comes from an upfront setup that lets autonomous systems run continuously with minimal ongoing intervention. The caveat is that these approaches only work for domains with evaluable metrics — soft, subjective domains remain problematic until better optimization frameworks emerge.
The full interview is available on the No Priors YouTube channel and all major podcast platforms.
Key Takeaways
- Agentic engineering is here — Karpathy hasn’t typed code since December 2025, instead orchestrating multiple AI agents full-time
- AutoResearch ran 700 experiments in 2 days on a single GPU, finding optimizations a 20-year veteran missed
- Open source is ~8 months behind the frontier and closing — Karpathy compares it to the Linux trajectory
- Entry-level programming roles down 15% since 2024, but new AI-native roles are emerging
- The “loopy era” means agents improving themselves in continuous loops — soon standard at frontier labs