{"id":2251,"date":"2026-03-25T01:52:22","date_gmt":"2026-03-25T01:52:22","guid":{"rendered":"https:\/\/stock999.top\/?p=2251"},"modified":"2026-03-25T01:52:22","modified_gmt":"2026-03-25T01:52:22","slug":"attempted-corporate-murder-anthropic-and-department-of-war-spar-in-court","status":"publish","type":"post","link":"https:\/\/stock999.top\/?p=2251","title":{"rendered":"&#8216;Attempted corporate murder\u2019 \u2014 Anthropic and Department of War spar in court"},"content":{"rendered":"<p><img src=\"https:\/\/fortune.com\/img-assets\/wp-content\/uploads\/2026\/03\/GettyImages-2261514586_e47675-e1774398223832.jpg?w=2048\" \/><\/p>\n<p>Lawyers for the Department of War and Anthropic sparred in a California federal court on Tuesday over Anthropic\u2019s challenge to the Pentagon labeling it a supply-chain risk to national security and imposing a sweeping ban on government contractors using the company\u2019s AI tools.<\/p>\n<p>The case\u2014which involves a historic first, in that the Pentagon, renamed the Department of War (DOW), labeled a U.S.-based business a supply-chain risk to national security\u2014is rooted in a contract negotiation that escalated quickly. The DOW wanted to add a blanket \u201call lawful use\u201d clause to its contracts with the AI firm so the military could use Anthropic\u2019s Claude tool for any legal purpose. Anthropic balked at the military using Claude for lethal autonomous warfare and mass surveillance of Americans. Anthropic, led by founder Dario Amodei, said it hasn\u2019t thoroughly tested those uses and doesn\u2019t believe they work safely. The DOW claimed those guardrails were unacceptable and that military commanders need latitude to make determinations on missions.<\/p>\n<p>On Feb. 27, President Trump posted on Truth Social directing \u201cEVERY\u201d federal agency to \u201cIMMEDIATELY CEASE\u201d all use of Anthropic\u2019s tools. 
That same day, in a post on X, Secretary of War Pete Hegseth labeled Anthropic a \u201csupply-chain risk\u201d and said \u201cno contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic.\u201d The risk label is usually reserved for nation-states, foreign adversaries, and other threats.<\/p>\n<p>Anthropic followed with a lawsuit on March 9, alleging the government \u201cretaliated against it\u201d for expressing its views on safety guardrails, violating the First Amendment in doing so. It also claimed the government violated the process laid out in the Administrative Procedure Act and the Fifth Amendment\u2019s right to due process. The government said the administration\u2019s actions were in response to Anthropic\u2019s refusal to accept those terms during the contract negotiation and argued free speech wasn\u2019t at issue in the case. Deputy Assistant Attorney General Eric Hamilton said the government has unrestricted power to determine which companies it will contract with. Hamilton said Anthropic\u2019s conduct had raised concerns that future software updates could be used as a \u201ckill switch\u201d in military operations.<\/p>\n<p>District Judge Rita F. 
Lin was skeptical, and in her opening statements she described the case as a \u201cfascinating public policy debate\u201d over Anthropic\u2019s position versus the government\u2019s military needs, but said her role wasn\u2019t to \u201cdecide who is right in that debate.\u201d<\/p>\n<p>Rather, Lin said the real question to be decided by the court was whether the government \u201cviolated the law\u201d when it went beyond simply declining to use Anthropic\u2019s AI services and finding a more permissive AI vendor to work with.<\/p>\n<p>\u201cAfter Anthropic went public with this contracting dispute, defendants seemed to have a pretty big reaction to that,\u201d Lin said.<\/p>\n<p>The reactions included banning Anthropic from ever having a government contract\u2014barring even entities like the National Endowment for the Arts from using it to design a website; Hegseth\u2019s directive that anyone who wants to do business with the U.S. military sever their commercial relationship with Anthropic; and designating Anthropic as a supply-chain risk.<\/p>\n<p>\u201cWhat is troubling to me about these reactions is that they don\u2019t really seem to be tailored to the stated national security concern,\u201d said Lin. If the concern is about chain of command, the DOW could just stop using Claude and go on its way, she said.<\/p>\n<p>\u201cOne of the amicus briefs used the term attempted corporate murder,\u201d she added. \u201cI don\u2019t know if it\u2019s murder, but it looks like an attempt to cripple Anthropic. And specifically my concern is whether Anthropic is being punished for criticizing the government\u2019s contracting position in the press.\u201d<\/p>\n<p>The amicus, or friend-of-the-court, briefs in the case have drawn a variety of voices, including Microsoft, retired military officers, and engineers and researchers from OpenAI and Google. 
Nearly all support Anthropic\u2019s position seeking an injunction against the supply-chain risk designation.<\/p>\n<p>The brief Lin referred to came from investors and the \u201cFreedom Economy Business Association.\u201d It cited an X post written by Dean Ball, Trump\u2019s former senior policy advisor for AI and emerging tech.<\/p>\n<p>\u201cNvidia, Amazon, Google will have to divest from Anthropic if Hegseth gets his way,\u201d Ball wrote. \u201cThis is simply attempted corporate murder. I could not possibly recommend investing in American AI to any investor; I could not possibly recommend starting an AI company in the United States.\u201d<\/p>\n<p>The American Federation of Government Employees, a union of 800,000 federal workers, said in its amicus brief that the Trump administration had a pattern of using national security concerns as a pretext for retaliation against free speech.<\/p>\n<p>Microsoft wrote that a ban on Anthropic would hurt its own business and could chill future defense-industry investment and engagement with AI.<\/p>\n<p>The Human Rights and Technology Justice Organization brief didn\u2019t take a position on who should win in court but argued broadly against militarized AI, stating that its use could lead to catastrophic human rights risks.<\/p>\n<p>Lin said she\u2019ll issue an opinion this week.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Lawyers for the Department of War and Anthropic sparred in a California federal court 
on&#8230;<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[245],"tags":[353,5304,1806,1962,436,1713,518,5305,2197,3870,684],"_links":{"self":[{"href":"https:\/\/stock999.top\/index.php?rest_route=\/wp\/v2\/posts\/2251"}],"collection":[{"href":"https:\/\/stock999.top\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/stock999.top\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/stock999.top\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/stock999.top\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=2251"}],"version-history":[{"count":0,"href":"https:\/\/stock999.top\/index.php?rest_route=\/wp\/v2\/posts\/2251\/revisions"}],"wp:attachment":[{"href":"https:\/\/stock999.top\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=2251"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/stock999.top\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=2251"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/stock999.top\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=2251"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}