{"id":3632,"date":"2026-04-11T13:13:09","date_gmt":"2026-04-11T13:13:09","guid":{"rendered":"https:\/\/stock999.top\/?p=3632"},"modified":"2026-04-11T13:13:09","modified_gmt":"2026-04-11T13:13:09","slug":"these-niche-ai-startups-are-trying-to-protect-the-pentagons-secrets","status":"publish","type":"post","link":"https:\/\/stock999.top\/?p=3632","title":{"rendered":"These niche AI startups are trying to protect the Pentagon\u2019s secrets"},"content":{"rendered":"<p><img src=\"https:\/\/fortune.com\/img-assets\/wp-content\/uploads\/2026\/04\/GettyImages-1359935689-e1775771208554.jpg?w=2048\" \/><\/p>\n<p>The relationship between AI companies and the American defense establishment burst into the open earlier this year when Anthropic found itself in a nasty public fight with the Pentagon. After Anthropic demanded assurances its AI products wouldn\u2019t power domestic surveillance or autonomous weapons, the Pentagon barred all federal agencies and contractors from doing business with Anthropic at all; the company sued to lift the ban, and the high-stakes battle is currently unfolding in court.\u00a0<\/p>\n<p>But behind the scenes, an equally important if less dramatic AI struggle is playing out\u2014as U.S. defense and intelligence agencies try to leverage the technology without sacrificing their need for secrecy. A small handful of AI infrastructure companies have been quietly doing complex, rarely-seen work that makes it possible for the U.S. government to securely use AI in the first place.<\/p>\n<p>\u201cIt\u2019s probably a $2 billion market right now,\u201d says Nicolas Chaillan, founder of an AI platform called Ask Sage that\u2019s used by thousands of teams across the Department of Defense. 
The opportunity these pick-and-shovel companies are chasing grows out of an extreme case of a dilemma faced by anyone looking to deploy off-the-shelf LLMs on confidential data: They\u2019re trying to figure out how to use these powerful tools without inadvertently exposing the wrong information to the wrong people through the AI training process.<\/p>\n<p>These AI infrastructure companies receive less media attention for their government work than bigger peers like Google, xAI, OpenAI, and of course Anthropic. Until the recent dispute broke out, Anthropic\u2019s Claude model was among the only LLMs approved for use on the Defense Department\u2019s classified networks. But this arrangement was made possible by a 2024 deal with two other firms that provided the necessary infrastructure\u2014Palantir and Amazon Web Services (AWS)\u2014which operated the secure software platforms and cloud services that hosted the AI. Imagine that large language models are a bit like the U.S. military\u2019s newest, shiniest warplane: The infrastructure companies provide something like the radios and runways that help these new machines talk to the rest of the military, and land safely.<\/p>\n<p>\u201cThere\u2019s probably, I don\u2019t know, a hundred people, 200 people who deeply care about this question inside the intelligence community,\u201d says Emily Harding, a former CIA analyst who now researches defense tech at the Center for Strategic and International Studies. \u201cI think there\u2019s millions and millions of business people who are going to face this same problem, not with as high stakes.\u201d<\/p>\n<p>Any corporate leader sitting on a trove of proprietary information has probably run into some version of this issue with their AI strategy. 
Imagine training a bespoke instance of ChatGPT or Claude on all of your company\u2019s mission-critical files: A law firm\u2019s case documents; a drug company\u2019s internal research reports; a retailer\u2019s real-time supply chain data; an investment bank\u2019s risk models or due diligence memos. Trained on such a corpus, an AI helper could speak your company\u2019s language fluently, and reveal richly profitable connections in your files. But consider the consequences if the wrong person\u2014say, a competitor\u2014got access to that helper.\u00a0<\/p>\n<p>\u201cIt\u2019s kind of a Catch-22,\u201d Harding tells Fortune. \u201cFeed it enough, it knows too much. You don\u2019t feed it enough and then it can\u2019t do its job.\u201d<\/p>\n<p>With the right prompting from an outside party, the contents of any confidential file that the AI touched in training could be spilled. Which means teaching an LLM all a company\u2019s secrets could simultaneously boost the business\u2014and risk blowing it up.\u00a0<\/p>\n<p>When secrets are a matter of national security<\/p>\n<p>Now consider how much worse that problem becomes if that AI helper works for the CIA, where secrecy is a matter of national security and breaches could endanger lives.\u00a0<\/p>\n<p>Intelligence agencies and the military depend on the compartmentalization of sensitive information. Human agents and analysts gain access to secrets on a strict, need-to-know basis to reduce the risk of leaks. (This may be among the reasons that a recent report stating the Pentagon was discussing training LLMs on secret data sparked immediate criticism.) 
So what happens if every analyst\u2019s AI assistant suddenly knows all of an agency\u2019s secrets?<\/p>\n<p>\u201cCompartmentalization goes out the window,\u201d says Brian Raymond, another former CIA analyst who\u2019s now CEO of Unstructured, an AI infrastructure company that serves both commercial and government clients.\u00a0<\/p>\n<p>\u201cLet\u2019s say I\u2019m an Iraq analyst,\u201d Raymond explains, by way of example. \u201cFrom an intel organization\u2019s perspective, I have no business reading reports from covert assets on Chinese military technology. Everyone stays in their swim lane and that\u2019s great security. If all of a sudden, I could start asking all sorts of questions like, \u2018Tell me all the assets we have in some country in Asia and tell me all their real names\u2019\u2014those are our most closely guarded secrets!\u201d<\/p>\n<p>And so a small crop of AI infrastructure firms has sprung up to solve what amounts to AI\u2019s secrecy problem. These companies build a scaffolding of software and services around commercial large language models, which allows organizations to use the AI without exposing their secrets.\u00a0<\/p>\n<p>At the heart of this scaffolding is a carefully orchestrated version of a technique called Retrieval Augmented Generation, or RAG. Commercial LLMs use a version of RAG whenever they look at documents you upload into the chat window. A model like Claude retrieves information from that document and then augments its responses based on its findings before generating an answer to your questions. Still, there\u2019s often a limit to how much data you can upload. And giving a commercial LLM sensitive documents remains risky because the contents could end up being used for future training, or end up in a temporary cache that isn\u2019t necessarily siloed from the provider\u2019s view.\u00a0\u00a0<\/p>\n<p>The companies working with the U.S. 
government offer far more secure, managed RAG systems, in which commercial LLMs function more like a processing engine\u2014and sensitive information stays walled off in secure libraries. These systems can be used to separate what a commercial AI model like Claude or ChatGPT \u201cknows\u201d from what it looks up.\u00a0\u00a0<\/p>\n<p>The AI equivalent of a \u2018secure room\u2019<\/p>\n<p>Let\u2019s say the Iraq analyst from Raymond\u2019s example employs a secure, RAG-based AI assistant to put together a report on U.S. Navy assets in the Persian Gulf. The analyst types a question into this assistant\u2019s chat window, asking for the latest count of warships there. The RAG system she\u2019s using employs a private, secure library that, let\u2019s say, contains some recent, classified intelligence reports about Navy deployments in the region. This library\u2014technically a vector database, mathematically indexed for connected meanings rather than just keywords\u2014is the first place the system looks for an answer.\u00a0<\/p>\n<p>Think of this as the step where the AI assistant steps into a secure room to get briefed on a need-to-know basis. The assistant retrieves these classified details about U.S. ships and then hands them over to a commercial LLM like Gemini that\u2019s running on secure servers. The LLM then uses the classified details to augment its response before generating it in the text window for the analyst. Secure systems like these are often set to expunge questions and answers from their memory once a session is done, so classified information is neither used for later training nor retained in any memory.<\/p>\n<p>The Iraq analyst in this example would only have clearance to access a secure library of documents related to her tasks in Iraq. Out-of-scope questions about China, from Raymond\u2019s example, wouldn\u2019t be answerable. 
There\u2019d be no classified China documents in the secure library, nor would the commercial LLM have any of that information in its training data. In short, this method creates a scaffolding that gives the AI a way to read and use sensitive data without remembering it forever or revealing it to the wrong people.\u00a0\u00a0<\/p>\n<p>Raymond\u2019s company, Unstructured, works at the scaffolding\u2019s base. His team cleans and converts messy internal files\u2014from handwritten field notes for commercial clients to exotic classified file formats for the government\u2014so they can be searched safely inside a secure vector database. Or as Raymond says, \u201cWe vacuum up all that data in the world, get it into book form, and to the library.\u201d<\/p>\n<p>Other companies like Berkeley-based Arize AI, which has raised more than $130 million in funding since it launched in 2020, work at the center of the structure. Arize tests and monitors RAG pipelines as well as the agents and applications built on them\u2014debugging and hunting down errors and hallucinations.\u00a0\u00a0<\/p>\n<p>\u201cControlling these systems is hard and making sure they do the right thing is one of the most mission-critical parts of the process,\u201d Arize CEO Jason Lopatecki tells Fortune. \u201cI wouldn\u2019t deploy an AI without using one of my products or my competitors\u2019 products.\u201d<\/p>\n<p>At the top of the scaffolding you\u2019ll find players like Ask Sage. While Unstructured and Arize serve a relatively even mix of government and commercial clients, Ask Sage is more of a Pentagon specialist, doing around 65% of its business with the Defense Department. 
The Virginia-based company sells a government-grade software interface where users can safely query approved commercial LLMs, run agents, and get answers drawn from their own restricted data, all without the model ever \u201clearning\u201d the secrets behind the scenes.\u00a0<\/p>\n<p>A Pentagon in-house competitor?<\/p>\n<p>In December the Defense Department announced the launch of its own internal LLM platform, called GenAI.mil. Defense Secretary Pete Hegseth introduced the rollout by way of a department-wide message that said, \u201cI expect every member of the department to login, learn it, and incorporate it into your workflows immediately.\u201d Afterward, Pentagon officials said, more than a million unique users signed on to the platform.\u00a0<\/p>\n<p>At present, GenAI.mil offers a simple chatbot interface, allowing service members to employ a commercial LLM running on secure servers for drafting documents or analyzing files\u2014but only for work that is unclassified.\u00a0 This is among the reasons that GenAI.mil\u2014unlike products from Ask Sage, Palantir or Scale AI\u2014can\u2019t do RAG on secure off-platform databases full of top-secret files. A Pentagon official told Fortune that the department is looking to deploy AI tools across \u201call classification levels\u201d moving forward, but declined to answer questions about timeline, specific software architecture or upcoming changes to the GenAI.mil platform.\u00a0 In its current form at least, the Pentagon\u2019s new product can\u2019t solve AI\u2019s secrecy problem.\u00a0<\/p>\n<p>Which is perhaps good news for products like Ask Sage. While Chaillan says new government subscriptions have leveled off since January, 14,000 teams across 27 U.S. government agencies remain subscribed to Ask Sage. On the strength of those numbers, Ask Sage was acquired in November by the defense-focused analytics company BigBear.ai in a $250 million deal. 
(Chaillan left the company in February.)<\/p>\n<p>Raymond, of Unstructured, sees the Pentagon\u2019s new platform as an opportunity. \u201cWith GenAI.mil making these models more available, that\u2019s going to unlock a lot of demand for what we build,\u201d he said.<\/p>\n<p>Knowledge workers in the U.S. military and intelligence communities have reams of documents to summarize, tons of text to draft, and endless compliance tasks to carry out, all buried under a dense thicket of government acronyms. \u201cTake an ATO in the government with FedRAMP, or you know, pick your poison of compliance nightmare,\u201d Chaillan says. For such tasks, he adds, a platform like Ask Sage \u201creally drastically reduces the human manual burden.\u201d\u00a0<\/p>\n<p>And this is likely one of many reasons why leaders like Arize\u2019s Lopatecki see a huge opportunity in solving AI\u2019s secrecy problem both inside the government and out.\u00a0\u00a0<\/p>\n<p>\u201cThe vertical we\u2019re in is probably one of the fastest growing picks-and-shovels spaces,\u201d Lopatecki says. 
\u201cThe world\u2019s data is infinite, and the pockets of data that you don\u2019t want to be trained publicly are large.\u201d<\/p>\n","protected":false},"excerpt":{"rendered":"<p>The relationship between AI companies and the American defense establishment burst into the open earlier&#8230;<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[245],"tags":[353,863,1175,859,8078,356,5755,108,8079,1301,443],"_links":{"self":[{"href":"https:\/\/stock999.top\/index.php?rest_route=\/wp\/v2\/posts\/3632"}],"collection":[{"href":"https:\/\/stock999.top\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/stock999.top\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/stock999.top\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/stock999.top\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=3632"}],"version-history":[{"count":0,"href":"https:\/\/stock999.top\/index.php?rest_route=\/wp\/v2\/posts\/3632\/revisions"}],"wp:attachment":[{"href":"https:\/\/stock999.top\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=3632"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/stock999.top\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=3632"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/stock999.top\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=3632"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}