U.S. Tightens AI Contract Guidelines Amid Pentagon-Anthropic Conflict
The Trump administration has established stringent rules for civilian artificial intelligence contracts, demanding unrestricted legal use of models by the government. This comes during a dispute between the Pentagon and AI firm Anthropic, which was labeled a 'supply-chain risk' by the Department of Defense.
The Trump administration has introduced firm regulations for civilian AI contracts, requiring companies to allow any lawful use of their models, according to a Financial Times report. The move highlights ongoing tensions between the Pentagon and AI firm Anthropic, recently classified as a 'supply-chain risk' by defense authorities.
The conflict stems from a disagreement over Anthropic's safety measures, which the Department of Defense argues excessively limit government use of the company's models. A draft of the guidelines reviewed by the Financial Times specifies that AI companies seeking government contracts must grant the U.S. government an irrevocable license to use their models for all lawful purposes.
The guidance is part of a wider initiative to streamline AI services procurement across the government, and it mirrors measures the Pentagon may adopt for its own military contracts. Amid the dispute, the General Services Administration (GSA) has terminated Anthropic's OneGov agreement, limiting the company's availability to federal agencies.