Why does AI need to sleep, too?

On March 31, 2026, Anthropic accidentally packaged the wrong files and published 510k lines of Claude Code's source code to the public npm registry. Within hours the code had been mirrored to GitHub, and it could never be taken back.

The leak contained a lot, and security researchers and competitors each took what they needed. But among all the unreleased features, one name sparked the widest discussion: autoDream, automatic dreaming.

autoDream is part of a background daemon system called KAIROS (from Ancient Greek, meaning “the right moment”).

KAIROS continuously observes and records while the user works, maintaining a log for each day (a bit of a lobster vibe). autoDream starts only after the user shuts the computer down: it organizes the memories accumulated during the day, resolves contradictions, and turns vague observations into confirmed facts.

The two together form a complete cycle: KAIROS is awake, autoDream is asleep—Anthropic’s engineers built a schedule for their AI.

Over the past two years, the hottest narrative in the AI industry has been the agent: autonomous operation that never stops, treated as AI's core advantage over humans.

But the company pushing agent capabilities the furthest has, in its own code, scheduled time for its AI to rest.

Why?

The cost of never stopping

An AI that never stops will hit a wall.

Every large language model has a “context window”—there’s a physical limit to how much information it can handle at any given moment. When an Agent runs continuously, project history, user preferences, and conversation records keep piling up; once it passes the critical point, the model starts forgetting early instructions, contradicting itself, and fabricating facts.

The technical community calls this “context corruption.”

Many agents take a crude approach to this: shove the entire history into the context window and expect the model to figure out what matters on its own. The more information piles up, the worse the performance gets.
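The failure mode can be sketched in a few lines of Python. This is an illustration of the generic pattern, not code from the leak: with a fixed context budget, the naive append-everything strategy silently evicts the oldest instructions first.

```python
# Illustrative sketch (not from the leaked code): naive context stuffing.
# Once the budget is exceeded, the earliest instructions are silently dropped.

def build_context(history, budget=8):
    """Keep appending until the window is full, then drop the oldest items."""
    context = list(history)
    while len(context) > budget:
        context.pop(0)  # the model "forgets" its earliest instructions
    return context

history = [f"msg-{i}" for i in range(12)]
context = build_context(history)
print(context[0])  # → "msg-4": the first four messages are gone
```

The point of the sketch is that nothing decides *what* matters; position in the log alone determines survival, which is exactly the crudeness the article describes.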

The human brain hits exactly the same wall.

Everything experienced during the day is rapidly written into the "hippocampus", a temporary store with limited capacity, something like a whiteboard. Real long-term memory lives in the "neocortex": large capacity, but slow to write.

The core job of human sleep is to empty the overloaded whiteboard and move useful information onto the hard drive.

At the University of Zurich's neuroscience center, Björn Rasch's lab calls this process "active systems consolidation."

Long-running experiments involving sleep deprivation have repeatedly shown that a non-stop brain does not become more efficient. Memory first starts to fail, then attention, and finally even basic judgment breaks down.

Natural selection is extremely harsh on inefficient behavior, yet sleep was never eliminated. From fruit flies to whales, almost every animal with a nervous system sleeps. Dolphins evolved unihemispheric "half-brain sleep," resting each hemisphere in turn: they would rather invent a brand-new way to sleep than give up sleep itself.

Orcas, beluga whales, and bottlenose dolphins resting at the bottom of a pool|Image source: National Library of Medicine (United States)

The constraints faced by the two systems are the same: limited immediate processing capacity, combined with an ever-growing accumulation of historical experience.

Two answer sheets

In biology, there’s a concept called convergent evolution: species that are distantly related, yet—under similar environmental pressures—independently evolve similar solutions. The most classic example is the eye.

Octopuses and humans both have camera-like eyes. A focus-adjustable lens concentrates light onto the retina; a ring-shaped iris controls how much light enters; and the overall structure is almost the same.

Comparison of octopus and human eye structure|Image source: OctoNation

But an octopus is a mollusk and a human is a vertebrate. Their common ancestor lived more than 500 million years ago, before any complex visual organ existed on Earth. Two completely independent evolutionary paths arrived at almost the same endpoint, because for converting light into a sharp image efficiently, physics permits essentially one design: a camera-style eye, with a lens that can focus, a photosensitive surface to receive the image, and an aperture to regulate incoming light. None of the three can be missing.

The relationship between autoDream and human sleep is probably of this kind: under similar constraints, the two types of systems may converge on similar structures.

Going offline is the most striking point the two share.

autoDream can't run while the user is working. It launches as a forked subprocess, completely isolated from the main process, with strictly limited tool permissions.
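The isolation pattern described above can be sketched as follows. This is a hypothetical reconstruction: the names `ALLOWED_TOOLS` and `consolidate` are illustrative, not identifiers from the leak. The consolidation job runs in a child process that only receives a copy of the day's log, never the live session state.

```python
# Hypothetical sketch of running consolidation in an isolated child process
# with a restricted tool set, so it cannot touch the main agent's state.
import multiprocessing as mp

ALLOWED_TOOLS = {"read_log", "write_memory"}  # strictly limited permissions

def consolidate(day_log, out_queue):
    # Runs in the child: it sees only a serialized copy of the log.
    facts = [entry for entry in day_log if entry.get("confirmed")]
    out_queue.put(facts)

if __name__ == "__main__":
    log = [{"note": "prefers tabs", "confirmed": True},
           {"note": "maybe uses pytest", "confirmed": False}]
    q = mp.Queue()
    p = mp.Process(target=consolidate, args=(log, q))
    p.start()
    p.join()
    print(q.get())  # only the confirmed observation survives
```

Because the child communicates only through the queue, a crash or a runaway consolidation pass cannot corrupt the main session, which is the engineering analogue of "the brain closes the front door first."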

Humans face the same problem, but their solution is more thorough: memory moves from the hippocampus (temporary storage) to the neocortex (long-term storage), requiring a set of brainwave rhythms that appear only during sleep.

Most crucial is the hippocampal sharp-wave ripple, which packages the memory fragments encoded that day one by one and sends them to the cerebral cortex. Slow oscillations in the cortex and spindle waves from the thalamus provide precise timing coordination for the whole process.

This rhythm can't form while you're awake; external stimulation disrupts it. So you don't fall asleep merely because you're sleepy: the brain has to close the front door before it can open the back door.

Or put another way, within the same time window, information intake and structural organization are competing resources, not complementary ones.

Active systems consolidation during sleep. A (data migration): during deep, slow-wave sleep, memories newly written to the "hippocampus" (temporary storage) are replayed repeatedly and gradually transferred and consolidated into the "neocortex" (long-term storage). B (transmission protocol): this transfer depends on tightly synchronized communication between the two regions. The cortex emits slow oscillations (red line) that set the tempo; driven by their peaks, the hippocampus packages memory fragments into high-frequency sharp-wave ripples (green line), timed to the spindle waves generated by the thalamus (blue line), which act as the carrier. It is like embedding high-frequency memory data precisely into the gaps of the transmission channel, so the information is uploaded to the cortex in sync.|Image source: National Library of Medicine (United States)

The second point the two share is not total recall, but editing.

When autoDream starts, it doesn't keep every log. It first reads existing memories to establish what is already known, then scans each day's log in KAIROS, focusing on where reality deviated from prior understanding: things that contradict what was said yesterday, and memories more complex than previously assumed, are recorded first.
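A minimal sketch of that editing pass, assuming a simple key-value shape for memories and logs (the real leaked schema is not shown here): each observation is compared against existing memory, contradictions are ranked first, unchanged observations are discarded.

```python
# Illustrative sketch: prioritize log entries that deviate from prior belief.

def triage(existing_memory, daily_log):
    """Return log entries worth recording, contradictions first."""
    contradictions, novelties = [], []
    for key, observed in daily_log.items():
        remembered = existing_memory.get(key)
        if remembered is None:
            novelties.append((key, observed))       # new information
        elif remembered != observed:
            contradictions.append((key, observed))  # deviates from yesterday
        # identical observations are discarded: nothing to update
    return contradictions + novelties

memory = {"test_runner": "pytest", "indent": "spaces"}
log = {"test_runner": "unittest", "indent": "spaces", "ci": "github-actions"}
print(triage(memory, log))
# → [('test_runner', 'unittest'), ('ci', 'github-actions')]
```

Note what falls out for free: the repeated, featureless observation ("indent": "spaces") is never written anywhere, which mirrors the discarding the next section describes in the brain.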

After organization, memories are stored in a three-layer index: a lightweight pointer layer that is always loaded, topic files that are pulled in on demand, and the full history, which is never loaded directly. Facts that can be looked up directly in the project code (for example, which file defines a given function) are simply never written to memory at all.
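The three-layer layout can be sketched like this. The class and its contents are made up for illustration; only the layering itself comes from the article: a tiny always-loaded pointer layer, topic files fetched on demand, and no bulk access to the full history.

```python
# Illustrative sketch of a three-layer memory index (names are invented).

class MemoryIndex:
    def __init__(self, pointers, topics):
        self.pointers = pointers      # layer 1: always in context, tiny
        self.topics = topics          # layer 2: loaded per-topic on demand
        # layer 3 (full history) deliberately has no bulk-load method

    def recall(self, topic):
        if topic not in self.pointers:
            return None               # not even worth a lookup
        return self.topics.get(topic) # pull in just this one topic file

idx = MemoryIndex(
    pointers={"build", "style"},
    topics={"build": "uses make, not cmake", "style": "4-space indent"},
)
print(idx.recall("build"))  # → "uses make, not cmake"
print(idx.recall("lunch"))  # → None: the full history is never scanned
```

The design choice worth noticing is the absent method: by giving the index no way to load everything, overflow of the "whiteboard" is ruled out structurally rather than by discipline.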

What the human brain does during sleep is essentially the same thing.

A study by Erin J. Wamsley, a lecturer at Harvard Medical School, shows that sleep prioritizes consolidating unusual information: things that surprise you, things tied to emotional swings, things connected to problems not yet solved. Masses of repetitive, featureless daily detail are discarded, leaving only abstract rules. You might not remember exactly what you saw on the way to work yesterday, but you know the route.

Interestingly, in one place the two systems make different choices. The memories autoDream produces are explicitly labeled in the code as "hint," not "truth." Before an agent uses one, it must re-verify that it still holds, because the system knows what it organized may not be accurate.
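The "hint, not truth" convention can be sketched in a few lines. This is a hypothetical shape, not the leaked implementation: every consolidated memory carries a flag that forces a re-check before use, with `verify` standing in for whatever check the agent would actually run.

```python
# Illustrative sketch: a memory labeled "hint" must be re-verified before use.

def use_memory(memory, verify):
    """Only act on a hint after re-checking it against the current world."""
    if memory["kind"] == "hint" and not verify(memory["claim"]):
        return None  # the organized memory no longer holds; discard it
    return memory["claim"]

hint = {"kind": "hint", "claim": "config lives in settings.toml"}
still_true = use_memory(hint, verify=lambda claim: True)
now_false = use_memory(hint, verify=lambda claim: False)
print(still_true, now_false)  # → config lives in settings.toml None
```

This is the structural opposite of the eyewitness problem described next: the uncertainty label is baked into the data, so stale beliefs are caught at read time instead of being trusted by default.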

Humans don’t have this mechanism. This is why eyewitnesses in court often give incorrect testimony. They aren’t lying on purpose—memory is assembled temporarily from scattered fragments in the brain, so getting things wrong is the norm.

Evolution apparently never needed to stamp an "uncertain" label on the human brain. In a primitive environment where the body must respond quickly, trusting memory lets you act immediately, while doubting it makes you hesitate, and hesitation means you lose.

But for an AI that repeatedly makes knowledge-based decisions, the cost of verification is low, and blind confidence is dangerous.

Two scenarios, two different answer sets.

Smarter laziness

In evolutionary biology, convergent evolution means two independent routes, without direct information exchange, leading to the same endpoint. There is no copying in nature, but engineers can read papers.

When Anthropic designed this sleep mechanism, was it because they hit the same physical wall as the human brain, or did they refer to neuroscience from the beginning?

There are no neuroscience references in the leaked code, and the name autoDream reads more like a programmer's joke. The stronger driver was likely the engineering constraints themselves: the context window has hard limits, long runs accumulate noise, and organizing memory online would pollute the main thread's reasoning. They were solving an engineering problem; biomimicry was never the goal.

What truly determines the shape of the answer is the compressive force of the constraints themselves.

Over the past two years, the AI industry's definition of "stronger intelligence" has almost always pointed in the same direction: bigger models, longer context, faster reasoning, 24/7 uninterrupted operation. The direction is always "more."

The existence of autoDream hints at a different proposition: smart agents might be lazier.

An agent that never stops to organize itself won’t become smarter and smarter—it will only become more and more chaotic.

In hundreds of millions of years of evolution, the human brain reached a seemingly clumsy conclusion: intelligence needs a rhythm. Wakefulness is for perceiving the world; sleep is for understanding the world. When an AI company independently arrives at the same conclusion while solving an engineering problem, it may be implying that:

Intelligence has some basic overhead you can’t get around.

Maybe an AI that never sleeps isn’t a stronger AI. It’s just an AI that hasn’t realized it needs to sleep yet.

Source of this article: Jike Park
