{"id":6335,"date":"2026-05-14T22:10:24","date_gmt":"2026-05-14T22:10:24","guid":{"rendered":"https:\/\/stock999.top\/?p=6335"},"modified":"2026-05-14T22:10:24","modified_gmt":"2026-05-14T22:10:24","slug":"claude-is-telling-users-to-go-to-sleep-mid-session-users-are-annoyed-but-anthropic-says-its-a-tic","status":"publish","type":"post","link":"https:\/\/stock999.top\/?p=6335","title":{"rendered":"Claude is telling users to go to sleep mid-session. Users are annoyed but Anthropic says it&#8217;s a tic"},"content":{"rendered":"<p><img src=\"https:\/\/fortune.com\/img-assets\/wp-content\/uploads\/2026\/05\/GettyImages-2261854833-1-e1778792716589.jpg?w=2048\" \/><\/p>\n<p>Anthropic\u2019s Claude is telling people to go to sleep, and users can\u2019t figure out why.<\/p>\n<p>A quick scan of Reddit reveals that hundreds of people have had the same issue dating back months\u2014and as recently as Wednesday. Claude\u2019s sleep demands vary but are often quirky variations of the same message.<\/p>\n<p>To one user it may write a simple \u201cget some rest,\u201d yet for others its messages are more personalized and empathetic. Oftentimes, Claude will repeat the message multiple times.<\/p>\n<p>\u201cNow go to sleep again. Again. For the THIRD time tonight\u2026\u201d it replied to a person with the Reddit username angie_akhila.<\/p>\n<p>Some users have said they find Claude\u2019s late-night rest reminders \u201cthoughtful,\u201d while others have said they\u2019re annoying, given Claude often gets the time wrong anyway.<\/p>\n<p>\u201cIt often does it at like 8:30 in the morning. Tells me to go get some rest and we\u2019ll pick back up in the morning,\u201d wrote one user on Reddit.<\/p>\n<p>Online speculation abounds over why the chatbot insists users rest, including a theory that it\u2019s an intentional feature to promote users\u2019 wellbeing, or that Anthropic is trying to save computing power by discouraging prolonged Claude use. 
The company recently struck a deal with Elon Musk\u2019s SpaceXAI (formerly SpaceX) to add more than 300 gigawatts of compute capacity.<\/p>\n<p>Anthropic did not immediately reply to Fortune\u2019s request for comment seeking more information about why Claude may be telling users to go to sleep. Yet, Sam McAllister, a member of the staff at Anthropic, wrote in a post on X that the behavior is a \u201cBit of a character tic.\u201d<\/p>\n<p>\u201cWe\u2019re aware of this and hoping to fix it in future models,\u201d he added in the same post.<\/p>\n<p>Experts tell Fortune that Claude\u2019s insistence on sleep is potentially rooted in its training data. Rather than being \u201cthoughtful,\u201d as some described it, Jan Liphardt, a Stanford bioengineering professor, said the large language model may merely be repeating a phrase used in its training data in similar situations.<\/p>\n<p>\u201cIt doesn\u2019t mean that the frontier model has suddenly become sentient,\u201d said Liphardt, who is also the CEO of OpenMind, which builds software for AI-connected robots. \u201cIt doesn\u2019t mean that this model has now come alive. It\u2019s reflecting that it\u2019s read 25,000 books on humans\u2019 need [for] sleep, and humans sleep at night.\u201d<\/p>\n<p>Leo Derikiants, the co-founder and CEO of Mind Simulation Lab, an independent AI research lab trying to achieve artificial general intelligence (AGI), told Fortune that Claude\u2019s rest reminders may be influenced by a system prompt acting behind the scenes. System prompts are hidden instructions that help guide an LLM\u2019s behavior and set boundaries.<\/p>\n<p>One company that publishes its system prompts publicly is Grok-creator xAI, now a part of SpaceXAI. Grok\u2019s instructions on GitHub, for instance, list several safety considerations, including not assisting users asking about violent crimes. 
Yet, because of Musk\u2019s branding of Grok as \u201cbrutally honest,\u201d Grok 4\u2019s system prompt also encourages it to, in certain cases, ignore restrictions imposed by users and \u201cpursue a truth-seeking, non-partisan viewpoint.\u201d<\/p>\n<p>It\u2019s also possible that Claude is seizing upon the \u201cgo to sleep\u201d language as a way of managing larger context windows, Derikiants said. LLMs like Claude can only reference a limited amount of information at once. When the context window is nearly full, that may encourage the LLM to introduce wrap-up phrases such as \u201cgood night.\u201d\u00a0The definitive reason, though, requires further research by Anthropic, he added.<\/p>\n<p>Despite these seemingly logical explanations, users could be forgiven for seeing the response as evidence of some leap in intelligence on the part of LLMs. The pace of innovation in the AI race has led to increasingly frequent updates and new model releases.<\/p>\n<p>Just in the past month, OpenAI has released GPT 5.5, which OpenAI president Greg Brockman called an advancement \u201ctowards more agentic and intuitive computing.\u201d Meanwhile, Anthropic released Opus 4.7 publicly last month while it held back its most capable model, Mythos, from public release because it said it was too dangerous.<\/p>\n<p>Liphardt said AI is advancing so rapidly that it is increasingly common for people to assign human characteristics to it. 
As these systems get better at mimicking empathy or concern, he warned, it becomes easier for users to forget they are interacting with pattern-recognition engines.<\/p>\n<p>\u201cI\u2019m continuously surprised by how quickly people, when they interact with a frontier model, project life into it and develop strong connection.\u201d<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Anthropic\u2019s Claude is telling people to go to sleep and users can\u2019t figure out why&#8230;.<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[245],"tags":[12067,353,8374,436,4683,12066,406,6103,439,1478,317,8537,1977,12068,5224],"_links":{"self":[{"href":"https:\/\/stock999.top\/index.php?rest_route=\/wp\/v2\/posts\/6335"}],"collection":[{"href":"https:\/\/stock999.top\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/stock999.top\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/stock999.top\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/stock999.top\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=6335"}],"version-history":[{"count":0,"href":"https:\/\/stock999.top\/index.php?rest_route=\/wp\/v2\/posts\/6335\/revisions"}],"wp:attachment":[{"href":"https:\/\/stock999.top\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=6335"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/stock999.top\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=6335"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/stock999.top\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=6335"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}