OpenAI’s letter to Governor Newsom on harmonized regulation


The US faces an increasingly urgent choice on AI: set clear national standards, or risk a patchwork of state rules—some subset of the 1,000 moving through state legislatures this year—that could slow innovation without improving safety. Imagine how hard it would have been to win the Space Race if California’s aerospace and tech industries had been tangled in state-by-state regulations impeding transistor innovation. That’s why we’ve just sent a letter to Gov. Gavin Newsom calling for California to lead the way in harmonizing state-based AI regulation with national—and, by virtue of US leadership, emerging global—standards.

We’re calling for a different approach: have companies adhere to federal and global safety guidelines while creating a national model for other states to follow. We recently became one of the first large language model companies to commit to working with the US government and its new Center for AI Standards and Innovation (CAISI) to evaluate frontier models’ national security-related capabilities. In our letter, we urge avoiding duplication and inconsistencies between state requirements and the safety frameworks already being advanced by the US government and our democratic allies.

As we outlined in our recent California Economic Impact Report, the AI sector is already driving tremendous innovation and adding billions to the state’s budget, with the potential to deliver even more jobs, growth, and opportunity. California is uniquely positioned to strengthen its status as the world’s fourth-largest economy—but only if it continues to create the conditions for AI to thrive here.

In particular, we recommend that California treat frontier model developers as compliant with state requirements when they have entered into a safety-oriented agreement with a relevant US federal agency, like CAISI, or when they’ve signed onto a parallel framework such as the EU’s AI Code of Practice (which we also signed).

Just as importantly, we urge the state to continue supporting smaller developers and startups so they don’t face compliance burdens designed for bigger companies. Large firms can absorb those costs; early-stage teams often cannot. Exempting smaller developers from duplicative state rules would help keep California’s AI ecosystem vibrant. We don’t want to inadvertently create a “CEQA for AI innovation” that would cause California to drop from leading in AI to lagging behind other states or even countries—much as the California Environmental Quality Act of 1970 made it dramatically harder to build housing in the state because its broader impacts weren’t well understood.

Finally, aligning California with standards being adopted by the US government will help ensure the state is supporting the strategic imperative to build on US-led, democratic AI and not autocratic AI. AI companies in the communist-led People’s Republic of China (PRC) aren’t likely to abide by US state laws—and will actually benefit from patchwork regulations that bog down their US competitors in inconsistent standards.

Since OpenAI is a nonprofit dedicated to building AI that benefits all of humanity, we think that building democratic AI anchored in democratic values, inclusive of safety standards, is foundational to our mission. That includes ensuring broad access to AI so its benefits reach as many people as possible—not just a few.

In our previous submissions to Gov. Newsom and in OpenAI CEO Sam Altman’s recent testimony on Capitol Hill, OpenAI has consistently argued that clear federal guidelines are the best way to spur innovation, level the playing field for start-ups, and maintain America’s edge over the PRC. With California home to many of the world’s leading AI companies, the state has a direct stake in ensuring those rules are clear and consistent nationwide. In the Intelligence Age, clarity on AI isn’t optional—it’s essential to keeping California and the country ahead.

Read the full letter here.



Originally published on OpenAI News.