Tech giants Microsoft, Google, and xAI have agreed to give the US federal government access to their new artificial intelligence models for national security testing. The Center for AI Standards and Innovation at the Department of Commerce announced the agreement, which follows concerns about the potential risks posed by Anthropic’s Mythos model. Under the deal, the US government will evaluate the models before deployment to assess their capabilities and security risks. Microsoft will collaborate with government scientists to test AI systems for unexpected behaviors and will also help develop shared data sets and workflows for testing.
Why It Matters
The agreement between tech giants and the US government to test AI models for national security risks is significant given growing concerns over the threats posed by advanced AI systems. By partnering with technology companies to evaluate these models early, US officials aim to identify and address risks such as cyberattacks and military misuse before widespread deployment. The move builds on previous agreements and reflects the government’s commitment to understanding the implications of frontier AI technology for national security.