Karpathy: “Coding” Is No Longer the Right Verb

6 min read

Andrej Karpathy says he has basically not handwritten a single line of code since December 2024.

A founding member of OpenAI and former director of AI at Tesla, he is now an independent AI researcher and the founder of Eureka Labs. He has 1.9 million followers on X, and every post he makes creates ripples in the AI community. Recently, he did two things that shook the community: he open-sourced AutoResearch (letting AI Agents automatically run experiments to optimize model training) and released MicroGPT (a full GPT implementation in 200 lines of pure Python).

He recently appeared on Sarah Guo’s No Priors podcast, discussing how coding Agents are changing engineers’ daily work, AutoResearch and recursive self-improvement, the “jaggedness” of AI, the power balance between open-source and closed-source, the pace difference between the physical and digital worlds, and the future of education in the Agent era.

Note: Sarah Guo is the founder of the AI investment fund Conviction and a former partner at Greylock. No Priors is the AI podcast she co-hosts with Elad Gil; this episode was an independent interview by Sarah.


Original video link: https://www.youtube.com/watch?v=kwSVtQ7dziU

  • Karpathy has basically stopped handwriting code since December 2024. His work mode has shifted from "writing code" to "directing Agents to work," with multi-Agent parallelism becoming the norm. The core anxiety has shifted from GPU utilization to token throughput.
  • He let AutoResearch run overnight, and it discovered optimizations he hadn't noticed after two years of manual parameter tuning, validating the core concept of "removing the human from the loop."
  • AI models are advancing rapidly in verifiable domains (code, mathematics) but are almost stagnant in non-verifiable domains: the jokes ChatGPT told three years ago are still the same today.
  • The gap between open-source models and the closed-source frontier has narrowed from 18 months to 6-8 months. He believes "centralization has a poor historical record" and hopes for more labs and open platforms.
  • The digital space will undergo large-scale transformation first, followed by the digital/physical interface, and finally the physical world: "Atoms are a million times harder than bits."
  • The future of education is not explaining to people but explaining to Agents. His MicroGPT Agent fully understands the code yet cannot come up with it on its own. "What the Agent cannot do is your job."

“Coding” Is No Longer the Right Verb

Sarah Guo said she once walked into the office and saw Karpathy completely immersed in his work. She asked him what he was doing, and he said: “I have to spend 16 hours a day directing my Agents to work.”

"Code's not even the right verb anymore. I have to express my will to my agents for 16 hours a day."

Karpathy said he has been in a state he calls "AI psychosis". December 2024 was a watershed moment. At that point, he flipped from "writing 80% himself, Agent writing 20%" to "writing 20% himself, Agent writing 80%". He said the ratio is probably even more extreme now; since December, he has basically not handwritten a single line of code.

He tried explaining this to his parents but felt ordinary people are completely unaware of how drastic this change is. Look at any software engineer's workstation and you'll find that the default workflow has changed completely since December 2024.

Sarah added a scenario: on the engineering team at Conviction, the fund she runs, none of the engineers handwrite code anymore. They all wear microphones and whisper to their Agents. She said she initially thought they were crazy, but later realized they were just ahead of the curve.

So what is the bottleneck for projects now? Karpathy said everything feels like a "skill issue": not a lack of model capability, but a failure to figure out how to string existing capabilities together. The Agent instructions aren't written well enough, the memory tools aren't mature enough; when problems arise, it always feels like you didn't set things up right.

He mentioned a famous photo by Peter Steinberger: a screen densely covered with windows of Codex Agents.

Note: Peter Steinberger is an Austrian developer, founder of the open-source Agent project OpenClaw. OpenClaw is an autonomous AI agent controllable via messaging platforms like WhatsApp, Telegram, etc. It gained over 240k stars on GitHub in early 2026, after which Steinberger joined OpenAI in February 2026. Codex is OpenAI’s programming Agent product.

Peter's working method: each Agent in high-effort mode takes about 20 minutes to complete a task, so he switches among a dozen code repositories simultaneously, assigning work to different Agents. The unit of operation is no longer "writing a line of code" or "adding a function," but "assign this feature to Agent 1, that non-conflicting feature to Agent 2," then reviewing the results with as much scrutiny as the code quality warrants.
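The shape of this workflow, fanning non-conflicting tasks out to agents and reviewing the results as they come back, can be sketched in a few lines. Note that `run_agent` is a hypothetical stand-in, not the real Codex API; this only illustrates the dispatch-and-review pattern described above.

```python
from concurrent.futures import ThreadPoolExecutor

def run_agent(task: str) -> str:
    # Hypothetical stand-in for a coding-agent call. In practice an
    # agent in high-effort mode may take ~20 minutes per task; here we
    # just return a placeholder result immediately.
    return f"done: {task}"

# Non-conflicting features, each assigned to a different agent.
tasks = [
    "implement feature A in repo-1",
    "implement feature B in repo-2",
    "write tests for module C in repo-3",
]

# Fan the tasks out in parallel, one worker per agent.
with ThreadPoolExecutor(max_workers=len(tasks)) as pool:
    results = list(pool.map(run_agent, tasks))

# The human's remaining job: review each result.
for result in results:
    print(result)
```

The key property is that the tasks are independent, so review (the human bottleneck) is the only serialized step.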

Karpathy used an analogy. When he was a PhD student, he would get anxious if GPUs were idle and not running experiments; that meant wasted computing power. Now it's not GPUs, it's tokens.

"What is your token throughput, and what token throughput do you command?"

He made an interesting observation: for at least the past decade, people didn't feel constrained by computing power in many engineering tasks. But with this capability leap, you suddenly realize the constraint is no longer computational resources, but yourself. Sarah said this is actually exhilarating: because you can get better, it becomes addictive.

The Soul of an Agent—Why Personality Design Matters

Sarah asked: If everyone spends 16 hours honing their skills in using programming Agents, what would “mastery” look like a year from now?

Karpathy said everyone is moving to a higher level. It's not about single Agents anymore; it's about how multiple Agents collaborate and form teams. He proposed a concept called Claw, something more "persistent" than a regular Agent: it has its own sandbox, a more mature memory system, and runs in a loop even when you're not watching.

He believes OpenClaw’s memory system is much more mature than default Agent tools. The default memory mechanism simply compresses when the context window is full, while OpenClaw has a more refined scheme.
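The default behavior described here, compressing the history only once the context window fills up, can be illustrated with a toy sketch. `summarize` is a hypothetical stand-in for an LLM summarization call, and the 8-message window is arbitrary; real agent tools implement this differently.

```python
def summarize(messages):
    # Hypothetical stand-in for an LLM call that condenses old messages.
    return f"[summary of {len(messages)} earlier messages]"

def append_with_compaction(history, new_message, max_messages=8):
    """Naive memory: append, and compress only when the window overflows."""
    history.append(new_message)
    if len(history) > max_messages:
        # Collapse the oldest half into a single summary entry.
        half = len(history) // 2
        history = [summarize(history[:half])] + history[half:]
    return history

history = []
for i in range(10):
    history = append_with_compaction(history, f"msg {i}")
```

The point of the sketch is the failure mode: everything before the compaction point gets flattened into one lossy summary, which is what a more refined scheme like OpenClaw's tries to avoid.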

Then Karpathy discussed a topic many are interested in: Agent personality design.

He said Peter Steinberger innovated in at least five directions simultaneously on OpenClaw: the memory system, tool access, continuous looping, a unified WhatsApp interface, and one particularly important but often overlooked aspect: the Agent's personality.