Open Letter Supporting SB 1047 from Academics

Dear Governor Newsom,

As academic researchers, we have observed the debate around California’s SB 1047. We have differing views about the bill and the risks it would address, but we all believe that it should be signed into law.

Advances in general-purpose AI systems have the potential to better the lives of everyone. We think that AI advances can help make us all more productive and revolutionize education, healthcare, and scientific research. To secure these benefits, we need to understand and mitigate the risks to our economy and critical infrastructure.

About SB 1047

Background: SB 1047 is pending legislation in the state of California. It would require developers of the most advanced AI systems (those costing over $100 million to train) to test their models for the potential to materially enable severe harms and to implement reasonable safeguards against those harms.

Under SB 1047, a developer of an AI model can be held financially responsible for damages if all of the following are true:

  • The AI model causes or materially enables mass casualties or $500 million in damage from chemical, biological, radiological, or nuclear weapons; a cyberattack on critical infrastructure; an autonomously executed crime; or another equally grave harm.

  • AND the model was trained at a cost of more than $100 million and with more than 10^26 FLOP (more than any model that exists today).

  • AND the same harm could not have been caused just as easily using a search engine, a weaker AI model, or publicly available information in place of the model.

  • AND the developer failed to take reasonable care to avoid an unreasonable risk of such an outcome.

The Science and Economics of AI Safety

Products that are safe to use are necessary for a functioning economy. While opinions differ on how quickly AI development will progress, we believe there is a chance that increasingly dangerous systems will be developed in the next few years. We are also concerned about a looming threat of severe damage to the economy if AI systems are allowed to cause serious harm without accountability. A variety of researchers have made the case for such risks at length, and while we do not all agree on which risks are most concerning, we think that the risk of future AI systems enabling mass harm is real, and so is the risk of damage to our economy from unaccountable systems. We are especially concerned about models more advanced than any that exist today, the kind covered by SB 1047.

AI developers in industry have already voluntarily committed to evaluating their models for serious risks. For example, at the Seoul AI Summit, AI developers committed to:

Set out thresholds at which severe risks posed by a model or system, unless adequately mitigated, would be deemed intolerable. Assess whether these thresholds have been breached, including monitoring how close a model or system is to such a breach… In the extreme, organisations commit not to develop or deploy a model or system at all, if mitigations cannot be applied to keep risks below the thresholds.

We support these commitments, along with those made at the White House, and hope that developers continue to step up their investment in risk evaluation and mitigation. An important reason this particular bill is light-touch and adaptive is that it does not prescribe a specific way to evaluate and mitigate risks. Because it refers only to reasonable care and evolving standards, it is flexible. Liability will encourage more research into these methods while incentivizing the leading AI labs to keep their safety methodology up to date as AI misuse and safety and security standards evolve. We disagree with opponents who have argued that the science must mature before any evaluations or safeguards are required. Consistent with their previous commitments, developers are already taking steps today to evaluate their systems and keep risks at a reasonable level. This is very good, but we do not think it should be optional.

Voluntary commitments are insufficient. AI developers face immense pressure to release models quickly, and it is the unfortunate reality that these commercial incentives can erode safety and encourage companies to cut corners on quality control. Without regulation, companies that take responsible precautions are placed at a competitive disadvantage. SB 1047 counteracts this dynamic by holding developers responsible if they fail to take reasonable care to prevent severe harms.

Liability

We believe that AI developers do bear some responsibility for the consequences of their creations, even when those consequences were not intended. In a free market, liability is how market participants can be made to *take* responsibility for their actions. It ensures that, to the extent they can mitigate the risks, they will. Furthermore, if liability changes the sign of the cost-benefit calculus of releasing a model in its current form, that could only be because the developer itself believes the risk of its model causing or materially enabling catastrophic harm is too high. Liability is not unprecedented, including for AI: AI developers already have a duty to take reasonable care under existing tort law, and they are liable if they do not. It is normal for companies to be held responsible when they behave negligently and their products cause serious harm. And as some of us have pointed out, regulation has improved the safety of many other complex technologies, such as those used in food, medicines, and buildings. The developers of those technologies often have obligations to test their products before releasing them. AI is, and should be, no different.

Open models and the technology layer vs. the application layer

Some oppose SB 1047 because they believe the bill could make it more difficult to openly release the weights of certain AI models. As academic researchers, many of us work with both open- and closed-weight models on a daily basis. Our work can help improve the safety of these systems, and open models can improve our ability to do certain kinds of research. But much as researchers in other fields sometimes conclude that their research poses more risks than benefits, we should acknowledge that the same may be true of some future AI models. With appropriate liability, developers will be incentivized not to release open-weight models in cases where the risks are real and unreasonable.

To be clear: when harm does occur, somebody will bear the cost, and to the extent possible, it shouldn’t be the victims of that harm. The cost should fall on the actors best positioned to prevent the harm. Some have proposed that all liability should sit at the “application layer” rather than the “model layer.” We agree that anyone deploying AI in a risky or dangerous setting should bear responsibility for the risks they impose on others, and SB 1047 would not preclude this. But if a malicious actor with no regard for the law uses a model to inflict massive damage, what is the “application”? The malicious actor should certainly be held responsible, but we disagree with assertions that model developers should bear no responsibility at all.

Once a model is released open-weight, it can never be taken back. Decisions about whether to release future powerful AI models should not be taken lightly, and they should not be made solely by companies that face no accountability for their actions. Under SB 1047, companies would still make these decisions on their own, but they would face accountability in the event of harm or damage to our economy. We think it is reasonable for SB 1047 to require developers that spend hundreds of millions of dollars training models to provide transparency into their safety plans, take reasonable care to prevent severe harm, and be held accountable if they fail to do so and severe harm results.

This might cause AI developers to think more carefully about whether to release models in the future, whether open or closed, and about how to modify models to reduce the risk of serious harm. That would not be a bad thing. Companies should have incentives to make sure their releases are justified and do not impose unreasonable risks on the broader public, and they should have incentives to race to the top on safety rather than racing to the bottom on release dates.

Overall, SB 1047 is a reasonable bill that would encourage AI developers to properly consider the public interest when developing and releasing powerful AI models. The bill will not address every risk from AI, nor does it aim to. But it is a solid step forward.

We believe SB 1047 will contribute positively to both scientific progress and public safety, and we urge our colleagues, policymakers, and the public to support it as well.

Sincerely,*

Stuart Russell
Professor of Computer Science at UC Berkeley

Bin Yu
Distinguished Professor of Stats, EECS, Center for Computational Biology, Simons Institute for the Theory of Computing at UC Berkeley

Geoffrey Hinton
Emeritus Professor of Computer Science at University of Toronto & Turing Award winner

Yoshua Bengio
Professor of Computer Science at University of Montreal / Scientific Director at Mila & Turing Award Winner

Hany Farid
Professor of EECS and School of Information at UC Berkeley

Lawrence Lessig
Professor of Law at Harvard Law School

AnnaLee Saxenian
Professor, School of Information at UC Berkeley

Paul S. Rosenbloom
Professor Emeritus of Computer Science at the University of Southern California

Anthony Aguirre
Professor of Physics at UC Santa Cruz

Mark Nitzberg
Executive Director at UC Berkeley Center for Human-Compatible AI

Jessica Newman
Director of the AI Security Initiative at UC Berkeley

Anthony M. Barrett
Visiting Scholar at UC Berkeley Center for Long-Term Cybersecurity AI Security Initiative

Paul N. Edwards
Co-Director, Stanford Existential Risks Initiative, Stanford University

Lionel Levine
Professor of Mathematics at Cornell University
PhD from UC Berkeley

Michael Osborne
Professor of Machine Learning at Oxford University

Gary Marcus
Professor Emeritus at NYU

Vincent Conitzer
Professor of Computer Science and Philosophy at Carnegie Mellon University
Former California Resident

Scott Niekum
Associate Professor of Computer Science at UMass Amherst

Brad Knox
Research Associate Professor of Computer Science at the University of Texas at Austin

Max Tegmark
Professor at MIT/Center for AI & Fundamental Interactions
PhD from UC Berkeley

Lydia Liu
Assistant Professor of Computer Science at Princeton University
PhD from UC Berkeley

Toby Ord
Senior Researcher in AI Governance, AI Governance Initiative at the University of Oxford

William MacAskill
Associate Professor of Philosophy at the University of Oxford

Yonathan Arbel
Professor of Law, University of Alabama School of Law
JSM from Stanford University

David Rubenstein
Professor of Law at Washburn University School of Law

Tegan Maharaj
Assistant Professor at the University of Toronto 

Bart Selman
Professor of Computer Science at Cornell University

Justin Bullock
Associate Professor at Texas A&M University

Richard Dazeley
Professor of Artificial Intelligence at Deakin University

Peter N. Salib
Assistant Professor of Law at the University of Houston

Jaime Fernández Fisac
Assistant Professor of Electrical and Computer Engineering at Princeton University

Noam Kolt
Assistant Professor at Hebrew University

Matthew Tokson
Professor of Law at the University of Utah

Brian Judge
AI Policy Fellow at UC Berkeley

Andrew Critch
Research Scientist in EECS at UC Berkeley

David Evan Harris
Chancellor’s Public Scholar at UC Berkeley

Michael K. Cohen
Postdoc in Computer Science at UC Berkeley

Benjamin Plaut
Postdoc in Computer Science at UC Berkeley

Cameron Allen
Postdoc in Computer Science at UC Berkeley

Khanh Nguyen
Postdoctoral Research Fellow in AI/ML at UC Berkeley

Bhaskar Mishra
PhD Student in Computer Science at UC Berkeley

Rachel Freedman
PhD Student in Computer Science at UC Berkeley

Cassidy Laidlaw
PhD Student in Computer Science at UC Berkeley

Alexander Kastner
PhD Student in Mathematics at UC Berkeley

Micah Carroll
PhD Student in Computer Science at UC Berkeley

Shreyas Kapur
PhD Student in Computer Science at UC Berkeley

Niklas Lauffer
PhD Student in Computer Science at UC Berkeley

Felix Binder
PhD Student in Cognitive Science at UC San Diego

Tess Hegarty
PhD Student in Civil Engineering at Stanford University

Erik Jenner
PhD Student in Computer Science at UC Berkeley

Aly Lidayan
PhD Student in Computer Science at UC Berkeley

Jared Moore
PhD Student in Computer Science at Stanford University

Anthony Ozerov
PhD Student in Statistics at UC Berkeley

Victor Lecomte
PhD Student in Computer Science at Stanford University

Jaiden Fairoze
PhD Student in Computer Science at UC Berkeley

Carter Allen
PhD Student in Business Administration at UC Berkeley

Pratyusha Ria Kalluri
PhD Student in Computer Science at Stanford

Zachary Rewolinski
PhD Student in Statistics at UC Berkeley

Fazl Barez
Postdoc in Machine Learning at the University of Oxford

Lorenzo Pacchiardi
Postdoc at the Leverhulme Centre for the Future of Intelligence, University of Cambridge

Ben Smith
Courtesy Postdoctoral Fellow at the University of Oregon
PhD from the University of Southern California

Stephen Casper
PhD Student in Computer Science at MIT

Nikolaus Howe
PhD Student in Computer Science at the University of Montreal

Dmitrii Krasheninnikov
PhD Student in Machine Learning at the University of Cambridge

Adam Dionne
PhD Student in Applied Physics at Harvard University

Chad DeChant
PhD Student in Computer Science at Columbia University

Jenny Kaufmann
PhD Student in Mathematics at Harvard University

Josh Engels
PhD Student in Computer Science at MIT

Leon Lang
PhD Student in Machine Learning at the University of Amsterdam

Lewis Hammond
PhD Student in Computer Science at the University of Oxford

David Atkinson
PhD Student in Computer Science at Northeastern University

Jacob Pfau
PhD Student at Center for Data Science at NYU

Andi Peng
PhD Student in Computer Science at MIT

Lauro Langosco
PhD Student in Engineering at the University of Cambridge

Benjamin Hayum
PhD Student in Computer Science at the University of Wisconsin-Madison

Giacomo Petrillo
PhD Student in Statistics at the University of Florence

Katya Ivshina
PhD Student in Applied Mathematics at Harvard University

Syed Jafri
PhD Student in Computer Science at the University of Utah

Aidan Kierans
PhD Student in Computer Science at the University of Connecticut

*Titles and affiliations are for identification purposes only; signatures are made by signatories in their personal capacities and do not represent the views of any institution.