Andrej Karpathy Says He’s ‘AI Psychotic’ After 16 Hours Daily Agent Conversations
In a recent hour‑long podcast, Andrej Karpathy explains how he stopped writing code by hand, now spends about 16 hours a day conversing with AI agents, feels anxious when tokens go unused, and envisions agents becoming a new operating system that reshapes software, research, and everyday life.
Andrej Karpathy appeared on a podcast to explain a rapid shift in his workflow: since December he has virtually stopped hand‑coding, spending about 16 hours each day conversing with multiple AI agents, a state he calls "AI psychosis" because unused token capacity makes him uneasy.
He argues that agents will become the core of software production, turning devices into an API‑first ecosystem in which a few sentences typed into a chat app (e.g., WhatsApp) can orchestrate audio, lighting, climate, and security systems. In this vision the direct user is replaced by an agent acting on their behalf, and even research organizations become collections of markdown‑defined roles, processes, and code that can be continuously optimized.
Karpathy illustrates the paradigm with a personal home‑automation project called Dobby: by asking an agent to locate his Sonos system, the agent scanned the local network, discovered the device without a password, built an API wrapper, and played music on command—all via natural‑language prompts sent through WhatsApp.
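The podcast does not describe Dobby's internals, so the following is only a rough sketch of the pattern it implies: an agent‑generated wrapper around a discovered device, plus a small router that turns chat messages into device calls. Every name, the command format, and the `transport` callable are invented for illustration.

```python
class SpeakerAPI:
    """Hypothetical wrapper an agent might generate around a discovered
    speaker. `transport` is any callable that delivers a command dict to
    the device (HTTP, UPnP, etc. -- the real mechanism isn't described)."""

    def __init__(self, transport):
        self.transport = transport

    def play(self, track):
        return self.transport({"cmd": "play", "track": track})

    def set_volume(self, level):
        if not 0 <= level <= 100:
            raise ValueError("volume out of range")
        return self.transport({"cmd": "volume", "level": level})


def handle_chat(text, speaker):
    """Toy natural-language router: map an incoming chat message
    (e.g., relayed from WhatsApp) to a device call."""
    lowered = text.lower()
    if lowered.startswith("play "):
        return speaker.play(text[5:])
    if lowered.startswith("volume "):
        return speaker.set_volume(int(lowered.split()[1]))
    return None  # message not understood; a real agent would reply in chat
```

In practice the interesting step is that the agent writes `SpeakerAPI` itself after probing the network, which is why Karpathy treats the wrapper as disposable output rather than maintained code.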
He likens unused token throughput to a GPU sitting idle, and stresses that the real bottleneck is now the human, who must craft better prompts, configure memory tools, and schedule multiple agents to work in parallel.
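The scheduling idea can be sketched with a plain thread pool: fan several prompts out to agents at once so that token capacity is never idle. `agent_call` here is a stand‑in for whatever agent API is used; no specific service is named in the podcast.

```python
from concurrent.futures import ThreadPoolExecutor


def run_agents(tasks, agent_call, max_workers=4):
    """Dispatch several agent tasks concurrently.

    tasks      -- dict mapping a task name to its prompt
    agent_call -- placeholder for a real agent API call (prompt -> result)
    Returns a dict mapping each task name to the agent's result.
    """
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = {name: pool.submit(agent_call, prompt)
                   for name, prompt in tasks.items()}
        # Collect results; .result() blocks until each task finishes.
        return {name: f.result() for name, f in futures.items()}
```

The human's job in this picture is queue management: keeping `tasks` full and reviewing results, rather than doing the work inside each task.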
The conversation moves to "OpenClaw" and other agent‑first systems, highlighting differences in memory handling and the desire for agents that run continuously in sandboxed environments rather than requiring constant supervision.
Karpathy describes "automation research": defining objectives, metrics, and constraints, then letting a loop of agents run experiments autonomously. He shares an example where a model he thought was well‑tuned was further improved overnight by the automated system, which discovered missing weight‑decay for value embeddings.
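A toy version of such a loop can make the idea concrete: fix an objective, then let a propose‑and‑evaluate cycle run unattended and keep the best configuration it finds. Here `evaluate` and `propose` are placeholders for agent‑run experiments; the real system's interfaces are not described in the podcast.

```python
import random


def automated_search(evaluate, propose, baseline_cfg, budget=50, seed=0):
    """Toy 'automation research' loop: autonomously propose configs,
    evaluate them, and keep the best one found within the budget."""
    rng = random.Random(seed)  # seeded for reproducible runs
    best_cfg, best_score = baseline_cfg, evaluate(baseline_cfg)
    for _ in range(budget):
        cfg = propose(best_cfg, rng)   # agent suggests a variation
        score = evaluate(cfg)          # agent runs the experiment
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score
```

An overnight discovery like the missing weight decay in the anecdote corresponds to `propose` wandering into a region of configuration space the human had not thought to try.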
On the broader impact, he notes that while software engineering demand may rise as production costs fall (a Jevons‑paradox effect), the employment landscape remains uncertain; AI tools currently accelerate specific tasks but are not yet a universal replacement.
He compares open‑source and closed‑source LLMs, observing that open models lag the frontier by a few months but are rapidly closing the gap, and argues that a healthy ecosystem needs both accessible open models and cutting‑edge closed models to avoid concentration of power.
Finally, Karpathy outlines a three‑stage future: massive efficiency gains in the digital world, followed by tighter integration between digital agents and physical devices, and eventually full‑scale physical automation, with the agent‑first approach driving each phase.
Machine Learning Algorithms & Natural Language Processing