Palantir CEO Alex Karp says there was "never any expectation" that AI tools would be used for domestic surveillance amid the Anthropic-DoD dispute
Palantir’s Role in the Anthropic and Pentagon AI Dispute
Amid the ongoing conflict between Anthropic, a Silicon Valley AI firm, and the U.S. Department of Defense regarding the military’s use of Anthropic’s large language models, another major player has emerged: Palantir.
Based in Denver, Palantir is a prominent provider of data analytics and artificial intelligence software for the Department of Defense. It also serves as the primary conduit through which the Pentagon has accessed Anthropic's language model, Claude.
“We’re still right in the thick of this,” CEO Alex Karp told Fortune during the company’s biannual AIP conference. “Our technology powers the large language models.”
Karp said he has been involved in numerous conversations with all the relevant parties, though he declined to provide details, saying he did not want to disclose private discussions or criticize others.
He was adamant about one point: The Department of Defense is not employing artificial intelligence for widespread surveillance of Americans, nor does he believe there are any plans to do so.
“Without revealing internal conversations, there has never been any indication that these tools would be used domestically,” Karp explained. “The Department of Defense is not considering domestic deployment of these products. Their focus is entirely on non-U.S. citizens in military contexts.”
Palantir’s extensive work with the U.S. government includes a partnership with Anthropic, established in 2024, to deliver Anthropic’s AI capabilities to the Pentagon. Additionally, Anthropic began collaborating directly with the Department of Defense last year to develop a customized version of its technology for military use.
The dispute between Anthropic and the Pentagon has persisted since January, with both sides disagreeing on its origins. According to statements made by Undersecretary of Defense for Research and Engineering Emil Michael last week, Palantir informed the Pentagon that Anthropic was seeking information about whether its models had been used in a U.S. military operation targeting Venezuelan President Nicolás Maduro. Anthropic has denied this, stating it has not discussed the use of Claude for specific missions with any industry partners, including Palantir, outside of routine technical exchanges. The disagreement has since escalated into a debate over whether Anthropic can impose contractual restrictions on how its AI models are utilized.
Anthropic’s CEO, Dario Amodei, has addressed the issue in several blog posts, including an initial statement in late February, where he claimed the Pentagon refused to agree to safeguards preventing the use of its language models for domestic surveillance or fully autonomous weapons. Subsequently, Secretary of Defense Pete Hegseth labeled Anthropic a “supply-chain risk,” jeopardizing many of the company’s business relationships and prompting Anthropic to file a lawsuit against the Pentagon over this designation.
Palantir’s Stance on Domestic Use and Privacy
Palantir, which received early investment from the CIA’s venture capital arm and whose software has been deployed in international counter-terrorism operations, has long faced criticism for allegedly enabling government surveillance of civilians and domestic suspects. For over ten years, Karp has consistently rejected these accusations and has advocated for strong technical safeguards to prevent misuse of technology for domestic monitoring.
To address these concerns, Palantir established a “Privacy and Civil Liberties” team early in its history. This interdisciplinary group—comprising engineers, legal experts, philosophers, and social scientists—was tasked with embedding privacy protections into Palantir’s products and promoting a culture of ethical responsibility. The team also created internal reporting mechanisms, such as an ethics hotline, allowing employees to raise concerns about potentially unethical projects.
Despite these measures, civil liberties organizations continue to accuse Palantir of facilitating government surveillance. The company’s partnership with U.S. Immigration and Customs Enforcement (ICE), which began during the Obama administration, has drawn significant criticism from both external advocates and Palantir employees. This scrutiny has intensified in recent years as ICE has ramped up enforcement actions in cities like Minneapolis.
In his conversation with Fortune, Karp expressed strong support for limiting the use of AI by domestic agencies, stating he is “very sympathetic” to arguments against deploying such technologies within the United States and is “totally in favor” of establishing clear boundaries for domestic applications.
“Honestly, I believe we should set these limits ourselves,” Karp said, suggesting that Silicon Valley companies should form a consortium to define what is and isn’t acceptable regarding domestic use of AI.
However, Karp made a clear distinction between setting restrictions for domestic agencies and for the Department of Defense, which primarily operates in the context of international relations and national security.
“The current discussion is about using these tools against those who threaten our service members,” Karp noted, adding that he personally supports granting the Department of Defense broad authority to utilize these technologies.
He further explained, “If we could be certain that countries like China, Russia, and Iran wouldn’t develop similar tools, I would advocate for stringent legal restrictions. But since adversaries will inevitably build and deploy these systems, I believe the Department of Defense should have wide latitude in their use.”
This article was originally published on Fortune.com.