
Anthropic and OpenAI are hiring experts in chemical weapons and explosives to develop safeguards against the misuse of AI technologies, especially in military contexts. Anthropic's AI model Claude has been used by US defense agencies for intelligence and operational planning, but tensions arose after the Pentagon labeled Anthropic a "supply chain risk" and ordered that its technology be phased out. Both companies aim to address the risk of AI-enabled chemical and biological threats amid growing concerns over AI's role in warfare.