August 7, 2024

Letter to CA state leadership from Professors Bengio, Hinton, Lessig, & Russell

Dear Governor Newsom, Senate President pro Tempore McGuire, and Assembly Speaker Rivas,

As senior artificial intelligence technical and policy researchers, we write to express our strong support for California Senate Bill 1047. Throughout our careers, we have worked to advance the field of AI and unlock its immense potential to benefit humanity. However, we are deeply concerned about the severe risks posed by the next generation of AI if it is developed without sufficient care and oversight. 

SB 1047 outlines the bare minimum for effective regulation of this technology. It doesn’t impose a licensing regime; it doesn’t require companies to receive permission from a government agency before training or deploying a model; it relies on company self-assessments of risk; and it doesn’t even hold companies strictly liable in the event that a catastrophe does occur. Relative to the scale of the risks we face, this is a remarkably light-touch piece of legislation. It would be a historic mistake to strike out the basic measures of this bill, a mistake that will become even more evident within a year, when the next generation of even more capable AI systems is released.

As AI rapidly progresses, we face growing risks that AI could be misused to attack critical infrastructure, develop dangerous weapons, or cause other forms of catastrophic harm. These risks are compounded as firms develop autonomous AI agents that can take significant actions without human direction, and as these systems surpass human capabilities across a growing range of domains. The challenge of developing such powerful AI systems safely should not be underestimated.

Some AI investors have argued that SB 1047 is unnecessary and based on "science fiction scenarios." We strongly disagree. The exact nature and timing of these risks remain uncertain, but as some of the experts who understand these systems best, we can say confidently that they are probable and significant enough to make safety testing and common-sense precautions necessary. If the risks really are science fiction, then companies should have no issue with being held accountable for mitigating them. If the risks do materialize, it would be irresponsible to be underprepared. We also believe there is a real possibility that, without appropriate precautions, some of these catastrophic risks could emerge within years rather than decades.

Opponents also claim this bill will hamper innovation and competitiveness, causing startups to leave the state. This is false for multiple reasons:

  • SB 1047 applies only to the largest AI models, those that cost over $100,000,000 to train; such costs are out of reach for all but the largest startups.

  • Large AI developers have already made voluntary commitments to take many of the safety measures outlined in SB 1047. 

  • SB 1047 is less restrictive than similar AI regulations in Europe and China.

  • SB 1047 applies to all developers doing business in California, regardless of where they are headquartered. It would be absurd to expect the large companies affected to withdraw completely from the fifth-largest economy in the world rather than comply with basic safety testing and common-sense guardrails.

  • Finally, at a time when the public is losing confidence in AI and doubting whether companies are acting responsibly, the basic safety checks in SB 1047 will bolster the public confidence that is necessary for AI companies to succeed.

Airplanes, pharmaceutical drugs, and a variety of other complex technologies have been made remarkably safe and reliable through deliberate effort by industry and governments. (And when regulators have relaxed their rules to allow self-regulation, as in the case of Boeing, the results have been horrific for both the public and the industry itself.) We need a comparable effort for AI, and we can’t simply rely on companies' voluntary commitments to take adequate precautions when they have such massive incentives to do otherwise. As of now, there are fewer regulations on AI systems that could pose catastrophic risks than on sandwich shops or hairdressers.

In particular, we strongly endorse SB 1047’s robust whistleblower protections for employees who report safety concerns at AI companies. Given the accounts of “reckless” development from employees at some frontier AI companies, such protections are clearly needed. We cannot blindly trust companies when they say they will prioritize safety.

In a perfect world, robust AI regulations would exist at the federal level. But with Congress gridlocked and the Supreme Court’s dismantling of Chevron deference disempowering administrative agencies, California state laws have an indispensable role to play. California led the way on green energy and consumer privacy, and it can do so again on AI. President Biden’s and Governor Newsom’s AI executive orders are both a good start toward recognizing these risks, but there are limits to what can be accomplished without new legislation.

The choices the government makes now about how to develop and deploy these powerful AI systems may have profound consequences for current and future generations of Californians, as well as those around the world. We believe SB 1047 is an important and reasonable first step towards ensuring that frontier AI systems are developed responsibly, so that we can all better benefit from the incredible promise AI has to improve the world. We urge you to support this landmark legislation. 

Sincerely,

Yoshua Bengio
Professor of Computer Science at Université de Montréal & Turing Award winner

Geoffrey Hinton
Emeritus Professor of Computer Science at University of Toronto & Turing Award winner

Lawrence Lessig
Professor of Law at Harvard Law School & founder of Creative Commons

Stuart Russell
Professor of Computer Science at UC Berkeley & Director of the Center for Human-Compatible AI