Claude is telling users to go to sleep mid-session and nobody, including Anthropic, seems to fully understand why it keeps doing it
Anthropic’s Claude is telling people to go to sleep, and users can’t figure out why. A quick scan of Reddit reveals that hundreds of people have run into the same issue dating back months, and as recently as Wednesday.

Claude’s sleep demands are varied and, often, quirky variations of the same message. To one user it may write a simple “get some rest,” while for others its messages are more personalized and empathetic. Oftentimes, Claude will repeat the message multiple times. “Now go to sleep again. Again. For the THIRD time tonight…” it replied to a person with the Reddit username angie_akhila.

Some users have said they find Claude’s late-night rest reminders “thoughtful,” while others have said they’re annoying, given that Claude often gets the time wrong anyway. “It often does it at like 8:30 in the morning. Tells me to go get some rest and we’ll pick back up in the morning,” wrote one user on Reddit.

Online speculation abounds as to why the chatbot insists users rest, including a theory that it’s an intentional feature to promote users’ wellbeing, or that Anthropic is trying to save computing power by discouraging prolonged Claude use. The company recently struck a deal with Elon Musk’s xAI to add more than 300 gigawatts of compute capacity.

Anthropic did not immediately reply to Fortune’s request for comment seeking more information about why Claude may be telling users to go to sleep. However, Sam McAllister, a member of the staff at Anthropic, wrote in a post on X that the behavior is a “bit of a character tic.” “We’re aware of this and hoping to fix it in future models,” he added in the same post.

Experts tell Fortune that Claude’s insistence on sleep is potentially rooted in its training data. Rather than being “thoughtful,” as some users described it, the large language model may merely be repeating a phrase used in its training data in similar situations, said Jan Liphardt, a Stanford bioengineering professor. “It doesn’t mean that the front