An accidental source code leak at AI firm Anthropic has revealed a secret project in development, KAIROS, an autonomous agent that operates continuously and proactively, pushing the industry closer to a "post-prompt" era.

An accidental leak of 510,000 lines of source code from AI firm Anthropic has revealed a confidential internal project named KAIROS, an autonomous AI agent designed to operate proactively. The discovery accelerates the perceived timeline for a new class of AI systems that act without direct user commands, confirming long-held predictions from industry experts and intensifying the competitive landscape for firms like OpenAI and Google.
"Claw is the next evolutionary step for AI," commented Andrej Karpathy, a leading AI researcher, invoking the "Claw" concept of proactive agents in response to the discovery. The leak lends significant weight to a thesis he has articulated since early this year: that the industry is moving beyond conversational AI toward fully autonomous systems capable of independently managing complex tasks.
The KAIROS architecture, as detailed in the leaked code, directly rivals emerging open-source projects like OpenClaw. It is built around a "heartbeat" mechanism—a recurring prompt that asks the AI to assess its environment for potential tasks. Once engaged, KAIROS can independently fix code, respond to messages, and update files. The system also includes native skills to push notifications to user devices and subscribe to GitHub repositories to act on code changes, capabilities that previously required stringing together multiple applications.
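The leaked description of the "heartbeat" mechanism can be sketched as a simple polling loop. The following is an illustrative reconstruction only: every name here (check_environment, run_task, HEARTBEAT_INTERVAL, the task format) is an assumption for explanation, not actual KAIROS code.

```python
import time

# Hypothetical sketch of a "heartbeat" agent loop as described in the
# leak: a recurring trigger asks the agent to scan its environment for
# pending work. All identifiers are illustrative assumptions.

HEARTBEAT_INTERVAL = 60  # seconds between heartbeats (assumed value)

def check_environment(sources):
    """Poll each event source and collect any pending tasks."""
    tasks = []
    for source in sources:
        tasks.extend(source())  # each source returns a list of task dicts
    return tasks

def run_task(task):
    """Placeholder for autonomous work (fix code, reply, update files)."""
    print(f"handling: {task['kind']}")

def heartbeat_loop(sources, max_beats=None):
    """Run the heartbeat; max_beats=None means run indefinitely."""
    beats = 0
    while max_beats is None or beats < max_beats:
        for task in check_environment(sources):
            run_task(task)
        beats += 1
        if max_beats is None or beats < max_beats:
            time.sleep(HEARTBEAT_INTERVAL)

# Example: a GitHub-style source reporting one pending repository change.
def github_source():
    return [{"kind": "repo-update"}]

heartbeat_loop([github_source], max_beats=1)
```

In a real system the sources would be webhooks or subscriptions rather than synchronous polls, but the core shape is the same: a timer, an environment scan, and autonomous task execution.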
The leak pressures Anthropic to adjust its product roadmap while providing competitors a rare look at its strategic direction. For the broader technology sector, the existence of KAIROS signals a definitive shift toward a "post-prompt" era, where AI agents transition from passive assistants to active collaborators. This creates a new paradigm for software development, though questions around the immense operational cost of such systems remain a primary obstacle.
A significant challenge for continuously running AI agents is the unbounded growth of their context, which leads to prohibitive token consumption. Users of current-generation models have noted that a simple morning greeting can consume over 100,000 tokens as the AI reloads its entire history. Anthropic's internal notes show the company has directly addressed this problem.
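The arithmetic behind that figure is easy to see: if every turn re-reads the full prior history, cumulative token consumption grows quadratically with the number of turns. A back-of-envelope sketch, with purely illustrative numbers:

```python
# Why an always-on agent's token bill balloons: each turn that reloads
# the full prior history makes cumulative reads grow quadratically.
# The per-turn figure below is an illustrative assumption.

def tokens_consumed(tokens_per_turn, turns):
    """Total input tokens read if every turn reloads all prior history."""
    total = 0
    history = 0
    for _ in range(turns):
        history += tokens_per_turn
        total += history  # each turn re-reads everything accumulated so far
    return total

# Twenty modest 500-token exchanges already re-read 105,000 tokens.
print(tokens_consumed(500, 20))  # 105000
```

This is why even a trivial interaction late in a long-running session can cost six figures of tokens: the greeting itself is cheap, but the history it drags in is not.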
KAIROS is designed to run a nightly process called "autoDream." This function allows the agent to consolidate and reorganize its memories from the previous day, effectively compressing its operational history to manage context size and cost. This approach mimics the human cognitive function of sleep and represents a novel solution to one of the most significant scaling challenges facing the AI industry.
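The reported "autoDream" behavior, consolidating the day's raw log into compact long-term memory, can be sketched as follows. This is a minimal illustration of the idea, not Anthropic's implementation; the data layout and the summarize stand-in are assumptions, and a real system would use a model call to produce the summary.

```python
# Illustrative sketch of an "autoDream"-style nightly consolidation:
# compress the day's raw event log into a short summary so the next
# day's context starts small. summarize() stands in for a model call.

def summarize(events, max_items=3):
    """Stand-in summarizer: keep only the most recent salient events."""
    salient = [e for e in events if e.get("important")]
    return salient[-max_items:] or events[-1:]

def auto_dream(memory):
    """Nightly pass: fold today's log into long-term memory, then clear it."""
    memory["long_term"].extend(summarize(memory["today"]))
    memory["today"] = []
    return memory

memory = {
    "today": [
        {"event": "morning greeting", "important": False},
        {"event": "fixed bug in parser", "important": True},
        {"event": "idle heartbeat check", "important": False},
    ],
    "long_term": [],
}
auto_dream(memory)
# long_term now holds only the salient entry; today's log is cleared
```

The design choice mirrors the trade-off described in the article: the agent loses verbatim recall of routine events in exchange for a context window that stays bounded from one day to the next.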
The move toward proactive AI agents marks the beginning of the "post-prompt" era, where AI interaction is no longer solely initiated by human users. While KAIROS exemplifies this future, the leak also highlights the unsustainable economics of current models. Users of Anthropic's own commercial products have reported exhausting weekly token allowances on single tasks, a problem that would be magnified by an agent running 24/7.
For these autonomous agents to achieve widespread adoption, the cost per token must decrease by an order of magnitude. While KAIROS, built on a native Anthropic architecture, may be more efficient than reverse-engineered solutions, the fundamental business model for "always-on" AI remains unproven. The industry's next major challenge is not just building more capable models, but making them economically viable to run at scale.
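To make the economics concrete, consider a rough cost model for an always-on agent. Every number here is an illustrative assumption (heartbeat rate, context size, and price per million tokens are not Anthropic figures):

```python
# Rough arithmetic on "always-on" agent economics. All rates and prices
# below are illustrative assumptions, not actual Anthropic pricing.

def daily_cost(heartbeats_per_day, tokens_per_heartbeat, usd_per_million_tokens):
    """Estimated daily input-token cost of a continuously running agent."""
    tokens = heartbeats_per_day * tokens_per_heartbeat
    return tokens * usd_per_million_tokens / 1_000_000

# One heartbeat per minute (1,440/day), 50k tokens of context each,
# at an assumed $3 per million input tokens: 72M tokens, or $216/day.
print(daily_cost(1440, 50_000, 3.0))  # 216.0
```

At those assumed rates a single agent costs thousands of dollars per month, which is why an order-of-magnitude drop in per-token cost, or aggressive context compression like autoDream, is a precondition for mass-market deployment.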
This article is for informational purposes only and does not constitute investment advice.