{"id":2451,"date":"2026-03-27T02:54:30","date_gmt":"2026-03-27T02:54:30","guid":{"rendered":"https:\/\/stock999.top\/?p=2451"},"modified":"2026-03-27T02:54:30","modified_gmt":"2026-03-27T02:54:30","slug":"exclusive-anthropic-mythos-ai-model-representing-step-change-in-power-revealed-in-data-leak","status":"publish","type":"post","link":"https:\/\/stock999.top\/?p=2451","title":{"rendered":"Exclusive: Anthropic &#8216;Mythos&#8217; AI model representing &#8216;step change&#8217; in power revealed in data leak"},"content":{"rendered":"<p><img src=\"https:\/\/fortune.com\/img-assets\/wp-content\/uploads\/2026\/03\/GettyImages-2261589651.jpg?w=2048\" \/><\/p>\n<p>AI company Anthropic is developing and has begun testing with early access customers a new AI model more capable than any it has released previously, the company said, following a data leak that revealed the model\u2019s existence.\u00a0<\/p>\n<p>An Anthropic spokesperson said the new model represented \u201ca step change\u201d in AI performance and was \u201cthe most capable we\u2019ve built to date.\u201d The company said the model is currently being trialed by \u201cearly access customers.\u201d<\/p>\n<p>Descriptions of the model were inadvertently stored in a publicly-accessible data cache and were reviewed by Fortune.<\/p>\n<p>A draft blog post that was available in an unsecured and publicly-searchable data store prior to Thursday evening said the new model is called \u201cClaude Mythos\u201d and that the company believes it poses unprecedented cybersecurity risks.<\/p>\n<p>The same cache of unsecured, publicly discoverable documents revealed details of a planned, invite-only CEO summit in Europe that is part of the company\u2019s drive to sell its AI models to large corporate customers.\u00a0<\/p>\n<p>The AI lab left the material, including what appeared to be a draft blog post announcing a new model, in an unsecured, public data lake, according to documents separately located and reviewed by Roy Paz, a 
senior AI security researcher at LayerX Security, a computer and network security company, and Alexandre Pauwels, a cybersecurity researcher at the University of Cambridge.\u00a0<\/p>\n<p>In total, the cache appeared to contain close to 3,000 publicly accessible assets linked to Anthropic\u2019s blog that had not previously been published on the company\u2019s news or research sites, according to Pauwels, whom Fortune asked to assess and review the material.<\/p>\n<p>After being informed of the data leak by Fortune on Thursday, Anthropic removed the public\u2019s ability to search the data store and retrieve documents from it.<\/p>\n<p>In a statement provided to Fortune, Anthropic acknowledged that a \u201chuman error\u201d in the configuration of its content management system led to the draft blog post being accessible. It described the unpublished material that was left in an unsecured and publicly-searchable data store as \u201cearly drafts of content considered for publication.\u201d<\/p>\n<p>As well as referring to Mythos, the draft blog post also discussed a new tier of AI models that it says will be called \u201cCapybara\u201d. In the document, Anthropic says: \u201c\u2019Capybara\u2019 is a new name for a new tier of model: larger and more intelligent than our Opus models\u2014which were, until now, our most powerful.\u201d Capybara and Mythos appear to refer to the same underlying model.<\/p>\n<p>Currently, Anthropic markets each of its models in three different sizes: the largest and most capable model versions are branded Opus, while slightly faster and cheaper, but less capable, versions are branded Sonnet, and the smallest, cheapest, and fastest are called Haiku. However, in the blog post, Anthropic describes Capybara as a new tier of model that is even larger and more capable than Opus, but also more expensive. 
<\/p>\n<p>\u201cCompared to our previous best model, Claude Opus 4.6, Capybara gets dramatically higher scores on tests of software coding, academic reasoning, and cybersecurity, among others,\u201d the company said in the blog.<\/p>\n<p>The document also said the company had completed training \u201cClaude Mythos,\u201d which the draft blog post described as \u201cby far the most powerful AI model we\u2019ve ever developed.\u201d <\/p>\n<p>In response to questions about the draft blog post, the company acknowledged training and testing a new model. \u201cWe\u2019re developing a general purpose model with meaningful advances in reasoning, coding, and cybersecurity,\u201d an Anthropic spokesperson said. \u201cGiven the strength of its capabilities, we\u2019re being deliberate about how we release it. As is standard practice across the industry, we\u2019re working with a small group of early access customers to test the model. We consider this model a step change and the most capable we\u2019ve built to date.\u201d<\/p>\n<p>The document Fortune and the cybersecurity experts reviewed consists of structured data for a webpage, complete with headings and a publication date, suggesting it forms part of a planned product launch. It outlines a cautious rollout strategy for the model, beginning with a small group of early-access users. The draft blog notes that the model is expensive to run and not yet ready for general release.<\/p>\n<p>Significant new cybersecurity risks<\/p>\n<p>The new AI model poses significant cybersecurity risks, according to the leaked document.\u00a0<\/p>\n<p>\u201cIn preparing to release Claude Capybara, we want to act with extra caution and understand the risks it poses\u2014even beyond what we learn in our own testing. 
In particular, we want to understand the model\u2019s potential near-term risks in the realm of cybersecurity\u2014and share the results to help cyber defenders prepare,\u201d the document said.<\/p>\n<p>Anthropic appears to be especially worried about the model\u2019s cybersecurity implications, noting that the system is \u201ccurrently far ahead of any other AI model in cyber capabilities\u201d and \u201cit presages an upcoming wave of models that can exploit vulnerabilities in ways that far outpace the efforts of defenders.\u201d In other words, Anthropic is concerned that hackers could use the model to run large-scale cyberattacks.<\/p>\n<p>The company said in the draft blog that because of this risk, its plan for the model\u2019s release would focus on cyber defenders: \u201cWe\u2019re releasing it in early access to organizations, giving them a head start in improving the robustness of their codebases against the impending wave of AI-driven exploits.\u201d<\/p>\n<p>The latest generation of frontier models from both Anthropic and OpenAI has crossed a threshold that the companies say poses new cybersecurity risks. In February, when OpenAI released GPT-5.3-Codex, the company said it was the first model it had classified as \u201chigh capability\u201d for cybersecurity-related tasks under its Preparedness Framework\u2014and the first it had directly trained to identify software vulnerabilities.\u00a0<\/p>\n<p>Anthropic, meanwhile, navigated similar risks with its Opus 4.6, released the same week. The model demonstrated an ability to surface previously unknown vulnerabilities in production codebases, a capability the company acknowledged was dual-use, meaning it could help hackers as well as help cybersecurity defenders find and close vulnerabilities in code.<\/p>\n<p>The company has also reported that hacking groups, including those linked to the Chinese government, have attempted to exploit Claude in real-world cyberattacks. 
In one documented case, Anthropic discovered that a Chinese state-sponsored group had already been running a coordinated campaign using Claude Code to infiltrate roughly 30 organizations\u2014including tech companies, financial institutions, and government agencies\u2014before the company detected it. Over the following ten days, Anthropic investigated the full scope of the operation, banned the accounts involved, and notified affected organizations.<\/p>\n<p>An exclusive executive retreat<\/p>\n<p>The leak of not-yet-public information appears to stem from an error on the part of users of the company\u2019s content management system (CMS), which is the software used to publish the company\u2019s public blog, according to cybersecurity professionals.\u00a0<\/p>\n<p>Digital assets created using the content management system are set to public by default and typically assigned a publicly accessible URL when uploaded\u2014unless the user explicitly changes a setting so that these assets are kept private. As a result, a large cache of images, PDF files, and audio files seems to have been published erroneously to an unsecured and publicly-accessible URL via the off-the-shelf content management system.<\/p>\n<p>Anthropic acknowledged in a statement to Fortune that \u201can issue with one of our external CMS tools led to draft content being accessible.\u201d It attributed this issue to \u201chuman error.\u201d\u00a0 <\/p>\n<p>Many of the documents appeared to be discarded or unused assets for past blog posts, such as images, banners, and logos. However, several appeared to be what were meant to be private or internal documents. For example, one asset had a title that described an employee\u2019s \u201cparental leave.\u201d\u00a0<\/p>\n<p>The documents also included a PDF containing information about an upcoming, invite-only retreat for the CEOs of European companies being held in the U.K., which Anthropic CEO Dario Amodei will attend. 
The names of the other attendees are not listed, but they are described as Europe\u2019s most influential business leaders. <\/p>\n<p>The two-day retreat is described as an \u201cintimate gathering\u201d to engage in \u201cthoughtful conversation\u201d at an 18th-century manor-turned-hotel-and-spa in the English countryside. The document says that attendees will hear from lawmakers and policymakers about how businesses are adopting AI and experience unreleased Claude capabilities.<\/p>\n<p>An Anthropic spokesperson told Fortune the event \u201cis part of an ongoing series of events we\u2019ve hosted over the past year. We look forward to hosting European business leaders to discuss the future of AI.\u201d<\/p>\n","protected":false},"excerpt":{"rendered":"<p>AI company Anthropic is developing and has begun testing with early access customers a new&#8230;<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[245],"tags":[353,1626,3970,569,5728,5729,5730,1355,3971,5734,2400,1193,5731,5732,668,5733,190,1188],"_links":{"self":[{"href":"https:\/\/stock999.top\/index.php?rest_route=\/wp\/v2\/posts\/2451"}],"collection":[{"href":"https:\/\/stock999.top\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/stock999.top\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/stock999.top\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/stock999.top\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=2451"}],"version-history":[{"count":0,"href":"https:\/\/stock999.top\/index.php?rest_route=\/wp\/v2\/posts\/2451\/revisions"}],"wp:attachment":[{"href":"https:\/\/stock999.top\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=2451"}],"wp:term":[{"taxonomy":"category","embeddable":
true,"href":"https:\/\/stock999.top\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=2451"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/stock999.top\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=2451"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}