How to Track the Fate of AI Security Testing Commitments: A Guide to Monitoring Government-Industry Agreements


Introduction

In early May 2023, the US Commerce Department published details of a voluntary agreement under which Microsoft, Google, and xAI pledged to submit their most advanced AI models to government scientists for security testing before public release. Within weeks, that page was taken down — a move that raised questions about transparency and the stability of AI safety commitments. This how-to guide walks you through the process of understanding what happened, verifying changes to official records, and staying informed about similar agreements in the future. Whether you are a researcher, journalist, or concerned citizen, these steps will help you navigate the complex landscape of government-industry AI safety collaboration.

Source: thenextweb.com

What You Need

- Access to the Wayback Machine or another web archive
- An HTTP status checker such as httpstatus.io, or a command-line tool like curl
- A search engine or news aggregator for cross-checking reports
- Google Alerts (or a similar service) for ongoing keyword monitoring
- A way to take timestamped screenshots for your records

Step-by-Step Guide

Step 1: Gather Context About the Original Agreement

First, you need to understand what the agreement was. On May 5, 2023, the US Commerce Department posted a page stating that Microsoft, Google, and xAI had agreed to let government scientists test their frontier AI models for security flaws before public release. This was part of a voluntary safety pledge that aimed to reduce risks from advanced AI systems. Search for archived versions of the page using tools like the Wayback Machine or cached search results. Look for key details: the exact date, the companies involved, and the scope of testing. This background is essential: without it, you cannot judge what the removal signifies.
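The archive lookup in this step can be scripted against the Internet Archive's public availability API, which returns the snapshot closest to a given date. A minimal sketch using only the Python standard library; the Commerce Department URL in the usage comment is a placeholder, not the page's real address:

```python
import json
import urllib.parse
import urllib.request
from typing import Optional

WAYBACK_API = "https://archive.org/wayback/available"

def availability_query(url: str, timestamp: str = "20230505") -> str:
    """Build an availability-API query asking for the snapshot of `url`
    closest to the given YYYYMMDD timestamp."""
    params = urllib.parse.urlencode({"url": url, "timestamp": timestamp})
    return f"{WAYBACK_API}?{params}"

def closest_snapshot(api_response: dict) -> Optional[str]:
    """Extract the closest archived snapshot URL from the API's JSON
    response, or None if nothing was archived."""
    snap = api_response.get("archived_snapshots", {}).get("closest")
    if snap and snap.get("available"):
        return snap["url"]
    return None

# Usage (network call; URL below is a placeholder):
#   query = availability_query("https://www.commerce.gov/example-ai-page")
#   with urllib.request.urlopen(query) as resp:
#       print(closest_snapshot(json.load(resp)))
```

If `closest_snapshot` returns None, try the Wayback Machine's web interface directly, since the availability API only reports the single closest capture.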

Step 2: Confirm the Removal of the Page

On the day you start your investigation, visit the original URL of the Commerce Department's page. If you are unsure of the exact link, search for phrases like "US Commerce Department AI security testing agreement Microsoft Google xAI" or similar terms. In this case, the page is no longer accessible. Document the HTTP status (e.g., 404 Not Found) or any redirect message. Take a screenshot and note the date and time. You can also use a tool like httpstatus.io to verify the removal. This step establishes that the page is no longer publicly available; it does not by itself tell you whether the removal was deliberate or accidental.
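The status check can also be done from a script rather than a browser, which makes the timestamped record easy to keep. A minimal Python sketch using the standard library; `describe` is a hypothetical helper for turning codes into notes for your log, not part of any official tool:

```python
import datetime
import urllib.error
import urllib.request

def check_status(url: str) -> tuple:
    """Fetch a URL and return (HTTP status code, UTC timestamp string).
    urllib raises HTTPError for 4xx/5xx responses, so catch it to
    record the code instead of crashing."""
    stamp = datetime.datetime.now(datetime.timezone.utc).isoformat()
    try:
        with urllib.request.urlopen(url) as resp:
            return resp.status, stamp
    except urllib.error.HTTPError as err:
        return err.code, stamp

def describe(code: int) -> str:
    """Translate a status code into a note for your records."""
    if code == 404:
        return "page not found (removed or moved without redirect)"
    if 300 <= code < 400:
        return "redirected -- follow it and record the destination"
    if code == 200:
        return "page is live"
    return f"unexpected status {code} -- investigate further"
```

Note that `urlopen` follows redirects automatically, so a 3xx code usually only surfaces if you use a lower-level client or disable redirect handling.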

Step 3: Find Alternative Sources for the Original Content

Since the official page is gone, turn to news reports. The Reuters article from Monday (the day after the removal was noticed) contains the core facts. Other outlets like The Next Web may have republished the story. Use a news aggregator or search engine with keywords such as "Commerce Department removes AI testing agreement page." Read multiple reports to cross-check details. For the most reliable record, look for cached versions of the Reuters article or direct quotes from government officials. This step ensures you have the same information that was once on the government site.

Step 4: Analyze the Implications of the Removal

Now that you have the facts, consider why the removal matters. The deletion could signal a shift in policy, a lack of commitment to transparency, or simply an administrative mistake. Contrast the original agreement's promise of safety checks with the opaque removal. Note that the companies themselves have not publicly commented on the change. This step is about connecting the dots: the agreement was voluntary, and without public documentation, accountability decreases. You might also compare this with similar safety commitments from other governments (e.g., the UK AI Safety Summit) to see if transparency is consistent globally.


Step 5: Stay Updated and Track Future Changes

The removal does not necessarily mean the agreement is dead. Monitor official government pages, press releases, and reputable tech news for any reinstatement or new announcements. Set up Google Alerts for phrases like "Commerce Department AI security testing" or "Microsoft Google xAI safety pledge." Follow the companies' own blogs and press rooms. If the page reappears or if a formal statement is made, you will be the first to know. You can also engage with civil society organizations that track AI governance, such as the Center for AI Safety or the Partnership on AI. This final step transforms your research into ongoing vigilance.
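Beyond alerts, you can automate the watch yourself by fingerprinting the page content and comparing it on a schedule. A minimal sketch using SHA-256 hashes; fetching, storage of the hash, and scheduling (e.g., a cron job) are left to you:

```python
import hashlib

def fingerprint(content: bytes) -> str:
    """Hash page content so later fetches can be compared cheaply,
    without storing the full page each time."""
    return hashlib.sha256(content).hexdigest()

def has_changed(stored_hash: str, new_content: bytes) -> bool:
    """Compare a stored fingerprint against freshly fetched content.
    A mismatch means the page changed, reappeared, or was replaced."""
    return fingerprint(new_content) != stored_hash
```

A change in fingerprint is only a signal: dynamic elements like timestamps or ad markup can change the hash without any substantive edit, so always inspect the page manually before drawing conclusions.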

Tips for Effective Monitoring

- Record the date and time of every check, and keep screenshots alongside your notes.
- Cross-check at least two independent sources before treating a detail as fact.
- Prefer archived copies (e.g., the Wayback Machine) over search-engine caches, which expire quickly.
- Automate what you can: alerts and scheduled checks catch changes faster than manual visits.

