Microsoft Challenges Pentagon Over Anthropic AI Blacklisting
Microsoft is backing Anthropic in a legal challenge against the Pentagon's decision to classify the AI company as a supply chain risk, a designation that effectively bars it from military contracts. The dispute arose after Anthropic refused to allow unrestricted military use of its AI model, Claude, prompting the company to sue the Trump administration.
The Trump administration ordered federal agencies to cease using Claude, a decision Microsoft claims could have significant economic repercussions. The tech giant argues that the designation is being used to settle contractual disputes inappropriately, and has requested a temporary suspension in federal court.
Microsoft also defends Anthropic's ethical stance against using AI for mass surveillance or autonomous warfare, aligning with broader American values. The Pentagon has not commented on the ongoing legal proceedings.