FOR IMMEDIATE RELEASE: September 29, 2024 

MEDIA CONTACTS:

Center for AI Safety Action Fund: media@safe.ai 

Encode Justice: comms@encodejustice.org 

Economic Security California Action: jenna@economicsecurity.us

Governor Newsom Vetoes AI Safety Bill SB 1047, Disappointing Advocates

Groundbreaking bill brought AI safety to the forefront of a national movement for action, with supporters committed to continuing their work on AI safety legislation

SACRAMENTO, CA — Today, Governor Gavin Newsom vetoed SB 1047, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act. With 32 of Forbes’ top 50 AI companies based in California, the decision is a setback for responsible AI development around the world.

SB 1047, authored by Senator Scott Wiener (D-San Francisco), would have required the largest California companies developing the next generation of the most powerful AI models to conduct safety testing and mitigate foreseeable risks, protecting society from AI being used to conduct cyberattacks on critical infrastructure, develop chemical, nuclear, or biological weapons, or unleash automated crime.

Statements from Senator Wiener and the SB 1047 co-sponsors follow.

Senator Scott Wiener (D-San Francisco), author of SB 1047:

“This veto is a setback for everyone who believes in oversight of massive corporations that are making critical decisions that affect the safety and welfare of the public and the future of the planet,” said Senator Wiener. “The companies developing advanced AI systems acknowledge that the risks these models present to the public are real and rapidly increasing. While the large AI labs have made admirable commitments to monitor and mitigate these risks, the truth is that voluntary commitments from industry are not enforceable and rarely work out well for the public. This veto leaves us with the troubling reality that companies aiming to create an extremely powerful technology face no binding restrictions from U.S. policymakers, particularly given Congress’s continuing paralysis around regulating the tech industry in any meaningful way.

“The Governor’s veto message lists a range of criticisms of SB 1047: that the bill doesn’t go far enough, yet goes too far; that the risks are urgent but we must move with caution. SB 1047 was crafted by some of the leading AI minds on the planet, and any implication that it is not based in empirical evidence is patently absurd.

“While we would have welcomed this input from his office during the legislative process when there was time to make changes to the bill, I am glad to see the Governor agree that the risks presented by AI are real and that California has a role to play in mitigating them. AI continues to advance very rapidly, and the risks these systems present advance along with them. Regulators must be willing to grapple with that reality and take decisive action that protects our innovation ecosystem as we craft regulations for this emerging industry. I look forward to engaging with the Governor’s AI safety working group in the Legislature next year as we work to ensure that the safeguards California enacts adequately protect the public while we still have an opportunity to act before a catastrophe occurs.

“This veto is a missed opportunity for California to once again lead on innovative tech regulation — just as we did around data privacy and net neutrality — and we are all less safe as a result.

“At the same time, the debate around SB 1047 has dramatically advanced the issue of AI safety on the international stage. Major AI labs were forced to get specific on the protections they can provide to the public through policy and oversight. Leaders from across civil society, from Hollywood to women’s groups to youth activists, found their voice to advocate for commonsense, proactive technology safeguards to protect society from foreseeable risks. The work of this incredible coalition will continue to bear fruit as the international community contemplates the best ways to protect the public from the risks presented by AI.

“California will continue to lead in that conversation — we are not going anywhere.”

Nathan Calvin, Senior Policy Counsel at Center for AI Safety Action Fund, said:

“We are disappointed by Governor Newsom’s decision to veto this urgent and commonsense safety bill. Experts have noted that catastrophic threats to society from AI may materialize quickly, so today’s veto constitutes an unnecessary and dangerous gamble with the public’s safety. With rapidly growing investment in AI and increasing potential for this technology to be used for both good and harm, AI safety is a critical issue that is here to stay. People want their leaders to take action, and we remain committed to advocating fiercely for AI safety in California. This bill inspired a national movement for action on AI safety, and we’re just getting started.”

Teri Olle, Director of Economic Security California Action, added:

“Governor Newsom’s veto of SB 1047 forfeits our country’s most promising opportunity to implement responsible guardrails around the development of AI today. The failure of this bill demonstrates the enduring power and influence of the deep-pocketed tech industry, driven by the need to maintain the status quo – a hands-off regulatory environment and exponential profit margins. The vast majority of Californians, and of American voters, want their leaders to prioritize AI safety and don’t trust companies to prioritize safety on their own. This veto exposes a dangerous disconnect between public interest and policy action when it comes to AI – a disconnect that needs urgent repair.” A more detailed release from ESCA is here.

Sunny Gandhi, Vice President of Political Affairs at Encode Justice, stated:

“This veto is disappointing, but we will not be stopped by it. The bill energized youth leaders across the country eager to see commonsense AI safety reform. AI is an exciting technology that will define the future, but it is too powerful to be unleashed in a way that leaves young people to inherit the costs of what gets broken along the way. Without safeguards like this, AI systems may soon be used to cause catastrophic harm to society, such as disrupting the financial system, shutting down the power grid, or creating biological weapons, leading to even more public distrust in AI. Our fight for AI safety continues. We will push for responsible AI governance that protects public safety while fostering innovation. We don’t have to choose when we deserve both.”

The sponsors of SB 1047 remain committed to advancing AI safety measures and will continue working toward the enactment of strong AI safety laws.

A broad bipartisan coalition came together to support SB 1047, including over 70 academic researchers (including Turing Award winners Yoshua Bengio and Geoffrey Hinton), the California legislature, 77% of California voters, 120+ employees at frontier AI companies, 100+ youth leaders, unions (including SEIU, SAG-AFTRA, UFCW, the Iron Workers, and the California Federation of Labor Unions), 180+ artists (primarily from Hollywood), the National Organization for Women, Parents Together, the Latino Community Foundation, various start-ups developing or using AI (including Imbue, Magic.Dev, and Notion), non-profit organizations, and more. Over 4,000 supporters called the Governor’s office in September asking him to sign the bill, and over 7,000 signed a petition in support of the bill sponsored by Accountable Tech. Anthropic said it believes SB 1047 is good on balance, and the bill was also praised by Vitalik Buterin (co-founder of Ethereum) and Elon Musk (founder of xAI).

The following people and groups are also available for interviews or statements. To arrange one, contact koji@safe.ai:

  • Academic experts: 

    • Dr. Yoshua Bengio, Professor of Computer Science at Université de Montréal & Turing Award winner

    • Dr. Larry Lessig, Professor of Law at Harvard Law School & founder of Creative Commons

    • Dr. Stuart Russell, Professor of Computer Science at UC Berkeley & Director of the Center for Human-Compatible AI

  • Former AI company employees: 

    • Cullen O’Keefe, formerly Policy Frontiers Research Lead at OpenAI

    • Jeffrey Wu, formerly Scalable Alignment Lead at OpenAI

    • William Saunders, formerly Interpretability Lead at OpenAI

  • SAG-AFTRA Leaders and Artists: 

    • Joely Fisher, Actor and Secretary-Treasurer

    • Frances Fisher, Actor and National Board Member

    • Sean Astin, Actor and National Board Member

    • Jason George, Actor and National Board Member

    • Mark Ruffalo, Actor and Activist

  • Bianca Recto, Communications Director at Accountable Tech