Scoopfeeds — Intelligent news, curated.
Spooked by Mythos, Trump suddenly realized AI safety testing might be good
computer-science

Ars Technica · May 6, 2026, 9:20 PM · Also reported by 2 other sources

This week, the Trump administration backpedaled and signed agreements with Google DeepMind, Microsoft, and xAI to run government safety checks on the firms' frontier AI models before and after their release. Previously, Donald Trump had stubbornly cast aside the Biden-era policy, dismissing voluntary safety checks as overregulation blocking unbridled innovation. Soon after taking office, he took the extra step of rebranding the US AI Safety Institute as the Center for AI Standards and Innovation (CAISI), removing "safety" from the name in a pointed jab at Joe Biden. But after Anthropic announced that it would be too risky to release its latest Claude Mythos model—fearing that bad actors might exploit its advanced cybersecurity capabilities—Trump is suddenly concerned about AI safety. According to White House National Economic Council Director Kevin Hassett, Trump may soon issue an executive order mandating government testing of advanced AI systems prior to release, Fortune reported.

