US Sets Strict AI Contract Rules Amid Pentagon-Anthropic Dispute

The Trump administration has introduced stringent rules for civilian artificial intelligence contracts, requiring providers to permit any lawful use of their AI models. The move follows a dispute between the Pentagon and Anthropic that has resulted in the firm's technology being barred from military applications, the Financial Times reported.

According to the report, the Pentagon formally designated Anthropic a 'supply-chain risk,' barring its AI technology from governmental military contracts. The designation followed a prolonged disagreement over the usage safeguards Anthropic insisted on, which the Defense Department criticized as excessively restrictive.

The guidelines, reviewed by the Financial Times, require AI firms to grant the U.S. government an irrevocable license covering all legal purposes. They also prohibit contractors from embedding partisan or ideological bias in AI outputs and require disclosure of whether models comply with non-U.S. regulations, as part of a broader initiative to improve the procurement of AI services.
