{"id":822,"date":"2026-03-08T04:26:09","date_gmt":"2026-03-08T04:26:09","guid":{"rendered":"https:\/\/stock999.top\/?p=822"},"modified":"2026-03-08T04:26:09","modified_gmt":"2026-03-08T04:26:09","slug":"pentagon-official-recalls-whoa-moment-when-defense-leaders-realized-how-much-they-need-anthropic","status":"publish","type":"post","link":"https:\/\/stock999.top\/?p=822","title":{"rendered":"Pentagon official recalls &#8216;whoa moment&#8217; when defense leaders realized how much they need Anthropic"},"content":{"rendered":"<p><img src=\"https:\/\/fortune.com\/img-assets\/wp-content\/uploads\/2026\/03\/GettyImages-2206545658-e1772909904543.jpg?w=2048\" \/><\/p>\n<p>The Defense Department\u2019s reliance on Anthropic\u2019s AI came as a shocking realization that ultimately led to their dramatic schism, according to a top Pentagon official.<\/p>\n<p>Emil Michael, the department\u2019s under secretary for research and engineering as well as its chief technology officer, detailed the events leading up to the public feud in a Friday episode of the All-In podcast.<\/p>\n<p>After the U.S. military\u2019s raid on Venezuela in early January that captured dictator Nicolas Maduro, Anthropic asked Palantir if its AI was used in the operation. While Anthropic has characterized the inquiry as routine, the Pentagon and Palantir interpreted it as a potential threat to their access.<\/p>\n<p>\u201cI\u2019m like, holy shit, what if this software went down, some guardrail picked up, some refusal happened for the next fight like this one and we left our people at risk?\u201d Michael recalled. \u201cSo I went to Secretary Hegseth, I said this would happen and that was like a whoa moment for the whole leadership at the Pentagon that we\u2019re potentially so dependent on a software provider without another alternative.\u201d<\/p>\n<p>Until recently, Anthropic\u2019s Claude was the only AI model authorized in classified settings. 
The San Francisco-based startup has said it\u2019s patriotic and seeks to defend the U.S., but won\u2019t allow its AI to be used in mass domestic surveillance or autonomous weapons.<\/p>\n<p>The Pentagon insisted it would use the AI only in lawful scenarios and refused to abide by any limits from the company that would go beyond those constraints.<\/p>\n<p>After failing to reach a compromise last week, President Donald Trump ordered the federal government to stop using Anthropic while giving the Pentagon six months to phase it out. Defense Secretary Pete Hegseth also designated the company a supply-chain risk, meaning contractors can\u2019t use it for military work.<\/p>\n<p>For now, the military continues to use Anthropic during the U.S. war on Iran, as AI helps warfighters identify potential targets at a rapid pace.<\/p>\n<p>During his podcast appearance, Michael raised the concern that a rogue developer could \u201cpoison the model\u201d to render it ineffective for the military, train it to hallucinate purposefully, or instruct it not to follow instructions.<\/p>\n<p>He then contacted OpenAI, which eventually reached a deal similar to the one Anthropic had. Elon Musk\u2019s xAI was also brought into the classified fold, while the Pentagon is trying to get Google\u2019s AI allowed into classified settings too.<\/p>\n<p>\u201cI\u2019m not biased,\u201d Michael said. \u201cI just want all of them. 
I want to give them all the same exact terms because I need redundancy.\u201d<\/p>\n<p>He acknowledged that Anthropic had become \u201cdeeply embedded\u201d in the department by providing forward-deployed engineers, while other AI companies hadn\u2019t pursued enterprise customers as aggressively.<\/p>\n<p>The falling-out between the Pentagon and Anthropic highlighted the clash of cultures between the defense establishment and Silicon Valley, which has its roots in military innovations but has since turned squeamish about seeing its technology used for war.<\/p>\n<p>In fact, a top robotics engineer at OpenAI announced her resignation from the company on Saturday, citing the same concerns Anthropic raised.<\/p>\n<p>\u201cThis wasn\u2019t an easy call. AI has an important role in national security. But surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got,\u201d Caitlin Kalinowski posted on X and LinkedIn.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>The Defense Department\u2019s reliance on Anthropic\u2019s AI came as a shocking realization that ultimately 
led&#8230;<\/p>\n","protected":false},"author":1,"featured_media":823,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[245],"tags":[858,353,857,863,354,759,859,862,860,356,864,326,694,861],"_links":{"self":[{"href":"https:\/\/stock999.top\/index.php?rest_route=\/wp\/v2\/posts\/822"}],"collection":[{"href":"https:\/\/stock999.top\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/stock999.top\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/stock999.top\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/stock999.top\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=822"}],"version-history":[{"count":0,"href":"https:\/\/stock999.top\/index.php?rest_route=\/wp\/v2\/posts\/822\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/stock999.top\/index.php?rest_route=\/wp\/v2\/media\/823"}],"wp:attachment":[{"href":"https:\/\/stock999.top\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=822"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/stock999.top\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=822"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/stock999.top\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=822"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}