U.S. Tightens AI Contract Guidelines Amid Pentagon-Anthropic Conflict

The Trump administration has established stringent rules for civilian artificial intelligence contracts, demanding unrestricted legal use of models by the government. This comes during a dispute between the Pentagon and AI firm Anthropic, which was labeled a 'supply-chain risk' by the Department of Defense.

The Trump administration has introduced firm regulations for civilian AI contracts, requiring companies to allow any lawful use of their models, according to a Financial Times report. The move highlights ongoing tensions between the Pentagon and AI firm Anthropic, recently classified as a 'supply-chain risk' by defense authorities.

The conflict stems from a disagreement over Anthropic’s safety measures, which the Department of Defense argues excessively limit government use. A draft of the guidelines specifies that AI companies seeking government contracts must grant the U.S. government an irrevocable license to use their models for all legal purposes.

This guidance is part of a wider initiative to strengthen AI procurement across the government, mirroring measures the Pentagon is considering for its military contracts. As a result, the General Services Administration (GSA) has terminated Anthropic’s OneGov agreement, affecting the company’s availability to federal agencies.
