Americans for Responsible Innovation urges US to mandate AI safety reviews for government contracts
The nonprofit wants frontier AI developers vetted before they can sell to Uncle Sam, a move that could ripple across the broader tech and crypto landscape.
Americans for Responsible Innovation, a nonprofit focused on AI policy, is pushing the Trump administration to mandate safety reviews for any AI lab that wants a shot at US government contracts. The recommendation, made on May 11, 2026, targets developers building so-called “frontier” models, the most powerful and potentially dangerous AI systems in existence.
ARI’s proposal centers on structured safety evaluations that would serve as a gating mechanism for federal procurement. An AI lab building frontier models would need to demonstrate that its systems have been vetted for misuse potential before qualifying for any government contract.
On April 6, 2026, the group warned the General Services Administration about the risks embedded in vague “any lawful use” clauses found in existing AI procurement regulations. Those clauses, ARI argued, essentially give AI systems a blank check to operate without meaningful guardrails once they’re inside government systems.
ARI points to a 4.2x annual growth rate in AI computation since 2010, a trajectory that suggests the capabilities of frontier models are expanding far faster than the government’s ability to evaluate them. Meanwhile, 82% of the public doesn’t trust tech executives to regulate AI on their own.
The proposal lands amid broader regulatory interest in AI: on March 24, 2026, the CFTC announced a task force aimed specifically at AI’s role in digital assets. ARI itself is relatively new to the AI regulation scene, and its recommendations do not draw explicit connections to cryptocurrency or digital assets.
If mandatory safety reviews become a reality, the most immediate impact falls on AI firms that derive significant revenue from government contracts. Compliance costs would rise. Timelines for securing contracts would lengthen. Smaller AI startups, which often lack the resources for extensive safety auditing, could find themselves locked out of the federal market entirely.
The CFTC’s task force on AI and digital assets suggests that regulators are already weighing how AI governance intersects with blockchain-based systems. Investors in AI-related tokens and projects should watch how the administration and Congress receive this proposal: if safety mandates gain political momentum, the market will need to price in higher compliance costs across the AI sector, including firms building at the intersection of AI and crypto.