{"id":2856,"date":"2026-04-01T05:33:12","date_gmt":"2026-04-01T05:33:12","guid":{"rendered":"https:\/\/stock999.top\/?p=2856"},"modified":"2026-04-01T05:33:12","modified_gmt":"2026-04-01T05:33:12","slug":"anthropic-leaks-its-own-ai-coding-tools-source-code-in-second-major-security-breach","status":"publish","type":"post","link":"https:\/\/stock999.top\/?p=2856","title":{"rendered":"Anthropic leaks its own AI coding tool\u2019s source code in second major security breach"},"content":{"rendered":"<p><img src=\"https:\/\/fortune.com\/img-assets\/wp-content\/uploads\/2026\/03\/GettyImages-2154161015-2-e1774978835361.jpg?w=2048\" \/><\/p>\n<p>Anthropic has accidentally leaked the source code for its popular coding tool Claude Code.\u00a0<\/p>\n<p>The leak comes just days after Fortune reported that the company had inadvertently made close to 3,000 files publicly available, including a draft blog post that detailed a powerful upcoming model that presents unprecedented cybersecurity risks. The model is known internally as both \u201cMythos\u201d and \u201cCapybara,\u201d according to the leaked blog post obtained by Fortune.<\/p>\n<p>The source code leak exposed around 500,000 lines of code across roughly 1,900 files. When reached for comment, Anthropic confirmed that \u201csome internal source code\u201d had been leaked within a \u201cClaude Code release.\u201d\u00a0<\/p>\n<p>A spokesperson said: \u201cNo sensitive customer data or credentials were involved or exposed. This was a release packaging issue caused by human error, not a security breach. We\u2019re rolling out measures to prevent this from happening again.\u201d<\/p>\n<p>The latest data leak is potentially more damaging to Anthropic than the earlier accidental exposure of the company\u2019s draft blog post about its forthcoming model. 
While the latest security lapse did not expose the weights of the Claude model itself, it did allow people with technical knowledge to extract additional internal information from the company\u2019s codebase, according to a cybersecurity professional Fortune asked to review the leak.\u00a0<\/p>\n<p>Claude Code is perhaps Anthropic\u2019s most popular product and has seen soaring adoption rates from large enterprises. At least some of Claude Code\u2019s capabilities come not from the underlying large language model that powers the product but from the software \u201charness\u201d that sits around the model, instructing it how to use other software tools and supplying the guardrails and instructions that govern its behavior. It is the source code for this agentic harness that has now leaked online.<\/p>\n<p>The leak potentially allows a competitor to reverse-engineer how Claude Code\u2019s agentic harness works and use that knowledge to improve their own products. Some developers may also seek to create open-source versions of Claude Code\u2019s agentic harness based on the leaked code.<\/p>\n<p>The leaked code also provided further evidence that Anthropic has a new model with the internal name Capybara that the company is actively preparing to launch, according to Roy Paz, a senior AI security researcher at LayerX Security. Paz said the company is likely to release a \u201cfast\u201d and a \u201cslow\u201d version of the new model, based on the model\u2019s apparently larger context window, and that it will be the most advanced model on the market.<\/p>\n<p>Currently, Anthropic markets each of its models in three different sizes. The largest and most capable model versions are branded Opus; slightly faster and cheaper, but less capable, versions are branded Sonnet; and the smallest, cheapest, and fastest are called Haiku. 
In the draft blog post obtained by Fortune last week, Anthropic describes Capybara as a new tier of model that is even larger and more capable than Opus, but also more expensive.<\/p>\n<p>The newest leak, first made public in an X post, appears to have happened after Anthropic uploaded all of Claude Code\u2019s original code to NPM, a platform developers use to share and update software, instead of only the finished version that computers actually run. The mistake looks like \u201chuman error\u201d after someone took a shortcut that bypassed normal release safeguards, Paz said. Anthropic told Fortune that normal release safeguards were not bypassed.<\/p>\n<p>\u201cUsually, large companies have strict processes and multiple checks before code reaches production, like a vault requiring several keys to open,\u201d he told Fortune. \u201cAt Anthropic, it seems that the process wasn\u2019t in place and a single misconfiguration or misclick suddenly exposed the full source code.\u201d<\/p>\n<p>Paz also raised questions about how the tool could connect to Anthropic\u2019s internal systems. He said the greater concern may not be direct access to backend models, but rather that the leaked code could reveal non-public details about how those systems work, such as internal APIs and processes. He added that this kind of information could help sophisticated actors better understand the architecture of Anthropic\u2019s models and how they are deployed, which in turn could inform attempts to work around existing safeguards.<\/p>\n<p>Anthropic\u2019s most powerful current model, Claude 4.6 Opus, is already classed by the company as dangerous when it comes to cybersecurity risks. Anthropic has said its current Opus models are capable of autonomously identifying zero-day vulnerabilities in software. 
While these capabilities are intended to help companies detect and fix flaws, they could also be weaponized by hackers, including nation-states, to find and exploit vulnerabilities.<\/p>\n<p>This isn\u2019t the first time Anthropic has inadvertently leaked details about its popular Claude Code tool. In February 2025, an early version of Claude Code accidentally exposed its original code in a similar breach. The exposure showed how the tool worked behind the scenes as well as how it connected to Anthropic\u2019s internal systems. Anthropic later removed the software and took the public code down.<\/p>\n<p>EDITOR\u2019S NOTE: This article was updated to include additional comment from Anthropic and clarifications of some technical details by one of the sources.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Anthropic has accidentally leaked the source code for its popular coding tool Claude Code.\u00a0 The&#8230;<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[245],"tags":[353,6614,3907,926,3970,6612,2313,582,6613,317,2774],"_links":{"self":[{"href":"https:\/\/stock999.top\/index.php?rest_route=\/wp\/v2\/posts\/2856"}],"collection":[{"href":"https:\/\/stock999.top\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/stock999.top\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/stock999.top\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/stock999.top\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=2856"}],"version-history":[{"count":0,"href":"https:\/\/stock999.top\/index.php?rest_route=\/wp\/v2\/posts\/2856\/revisions"}],"wp:attachment":[{"href":"https:\/\/stock999.top\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=2856"}],"wp:term":[{"taxonomy":"catego
ry","embeddable":true,"href":"https:\/\/stock999.top\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=2856"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/stock999.top\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=2856"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}