Anthropic's 'Dreaming' Feature Crosses a Line

Anthropic just dropped a new feature called "dreaming" at its developer conference in San Francisco. It's part of the company's AI agent infrastructure. The feature sorts through transcripts of an agent's recently completed work and tries to glean insights that improve future performance. Sounds useful, right? But the name is a problem.

"Together, memory and dreaming form a robust memory system for self-improving agents," Anthropic's blog post says. "Memory lets each agent capture what it learns as it works. Dreaming refines that memory between sessions, pulling shared learnings across agents and keeping it up-to-date."

This is part of a broader trend. Since 2022, AI companies have been naming features after human cognitive processes. OpenAI released a "reasoning" model that needs "thinking" time. Startups describe their chatbots as having "memories" about users. These memories aren't presented as fast storage but as humanlike nuggets: he lives in San Francisco, enjoys baseball, hates cantaloupe.
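Strip away the framing, and a chatbot "memory" is typically just a record in a datastore. A minimal illustration (the schema here is invented, not any vendor's actual format):

```python
import json

# A "memory" about a user is a stored record, not recollection.
# This schema is invented for illustration.
user_memory = {
    "user_id": "u_123",
    "facts": {
        "location": "San Francisco",
        "likes": ["baseball"],
        "dislikes": ["cantaloupe"],
    },
}

# "Remembering" is ordinary serialization and lookup.
with open("memory.json", "w") as f:
    json.dump(user_memory, f, indent=2)
```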

Why It Matters

This isn't just marketing fluff. How we talk about machines shapes what we believe they can achieve. A paper in the journal AI and Ethics states: "As a fallacy, anthropomorphism is shown to distort moral judgments about AI, such as those concerning its moral character and status, as well as judgments of responsibility and trust." When companies use humanlike language, users risk trusting the tools too much and projecting qualities onto them that aren't there.

Anthropic goes further than most. Its constitution describes Claude in human terms like "virtue" and "wisdom," and it even employs a resident philosopher to make sense of the bot's "values." The company says: "We do this because we expect Claude's reasoning to draw on human concepts by default... and we think encouraging Claude to embrace certain humanlike qualities may be actively desirable."

Developer Impact

As a developer, you need to understand what these features actually do. "Dreaming" is pattern recognition on activity logs. "Memory" is persistent storage of user-specific data. "Reasoning" is multi-step inference. Calling them by human names obscures their technical nature and can lead to misaligned expectations.
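To make that concrete, here is a minimal sketch of what an offline consolidation pass like "dreaming" can amount to. This is not Anthropic's implementation; the transcript format, the field names, and the file layout are all assumptions for illustration. The point is that the "dream" reduces to ordinary batch analysis of logs.

```python
import json
from collections import Counter
from pathlib import Path

def consolidate_transcripts(transcript_dir: Path) -> dict:
    """Offline pass over agent transcripts: tally which tool calls
    failed most often so future sessions can route around them."""
    failures = Counter()
    for path in transcript_dir.glob("*.jsonl"):
        for line in path.read_text().splitlines():
            if not line.strip():
                continue
            event = json.loads(line)
            if event.get("type") == "tool_call" and event.get("status") == "error":
                failures[event["tool"]] += 1
    # The "insight" is an aggregate statistic, persisted for reuse.
    return {"flaky_tools": [tool for tool, _ in failures.most_common(3)]}

if __name__ == "__main__":
    insights = consolidate_transcripts(Path("transcripts"))
    Path("shared_memory.json").write_text(json.dumps(insights, indent=2))
```

Nothing in that pass resembles sleep. It is the same log aggregation that batch pipelines have done for decades.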

When you build with these tools, describe them to stakeholders in clear, technical terms. Don't say "the agent dreams about its past tasks." Say "the agent analyzes its activity log to identify patterns and improve performance." This precision matters for debugging, security, and trust.
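The same discipline applies to the code itself. A hypothetical contrast (both function names are invented for this example):

```python
# Avoid: the anthropomorphic name hides what the function does.
def dream(agent_id: str) -> dict: ...

# Prefer: the name and docstring state the actual operation.
def analyze_activity_log(agent_id: str) -> dict:
    """Scan the agent's recent transcript for recurring failure
    patterns and return suggested configuration changes."""
    ...
```

A reviewer, an auditor, or a new teammate can reason about the second function without unpacking a metaphor first.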

The Sci-Fi Allusion

The name "dreaming" evokes Philip K. Dick's "Do Androids Dream of Electric Sheep?" The novel explores what separates humans from machines. Near the end, the protagonist finds a toad he thinks is alive, but his wife proves it's a machine by flipping open a control panel. "Crestfallen, he gazed mutely at the false animal." Tech leaders seem similarly unable to accept the limitations of their inhuman tools.

What You Should Do

Next time you see an AI feature with a human-sounding name, dig into the technical documentation. Find out what it actually does under the hood. Use that language when you explain it to your team or write code around it. Push back on marketing language in your organization. Keep the machine in the machine.

Anthropic's dreaming feature is a research preview for developers. Try it out. But call it what it is: offline analysis of agent transcripts for performance improvement. Don't let the name fool you into thinking your agent is having a dream.