Blitz Bureau
NEW DELHI: Defence Secretary Pete Hegseth gave Anthropic’s CEO until February 27 to open the company’s artificial intelligence technology to unrestricted military use or risk losing its government contract, according to a person familiar with their February 24 meeting, AP reported.
Anthropic makes the chatbot Claude and is the last of its peers not to supply its technology to a new U.S. military internal network. CEO Dario Amodei has repeatedly made clear his ethical concerns about unchecked government use of AI, including the dangers of fully autonomous armed drones and of AI-assisted mass surveillance that could be used to track dissent.
Defence officials warned they could designate Anthropic a supply chain risk or invoke the Defence Production Act to give the military broader authority to use the company’s products even if Anthropic does not approve of how they are used, according to the person familiar with the meeting and a senior Pentagon official. The Pentagon objects to Anthropic’s ethical restrictions because military operations need tools that don’t come with built-in limitations, the senior Pentagon official said.
The Pentagon announced last summer that it was awarding defence contracts to four AI companies — Anthropic, Google, OpenAI and Elon Musk’s xAI. Each contract is worth up to $200 million.
Anthropic was the first AI company to get approved for classified military networks. Musk’s xAI company, which operates the Grok chatbot, says Grok also is ready to be used in classified settings.