May 20, 2024
RE: Senate Bill 1047
An Open Letter to the AI Community
In early February of this year, I introduced Senate Bill 1047, legislation intended to promote safe innovation and deployment of frontier artificial intelligence models. I introduced the bill after publishing a detailed outline last September — eight months ago — in order to transparently gather feedback.
If you only read one thing in this letter, please make it this: I am eager to work together with you to make this bill as good as it can be. There are over three more months for discussion, deliberation, feedback, and amendments. You can also reach out to my staff anytime, and we are planning to hold a town hall for the AI community in the coming weeks to create more opportunities for in-person discussion.
Over the past few weeks, a flurry of posts on social media — including both thoughtful, good-faith concerns and some inaccurate, at times inflammatory, information about the bill — has led to a lot of dialogue about SB 1047. I’m grateful for the engagement and want to articulate why I’m authoring this bill, what it actually does, what it doesn’t do, and how folks can engage.
I want this bill to be as good as it can be: A proposal that fosters both innovation and the safe development of frontier models. The two goals are not mutually exclusive; indeed, they complement one another. It is incredibly important to me for California — and particularly the great city of San Francisco — to continue to lead on AI innovation. It is also incredibly important to me for our state and city to lead on AI safety innovation, particularly given that Congress has yet to act. For those reasons, as described later in this letter, I personally rejected various policy proposals that I deemed too harsh and limiting.
Bottom line: SB 1047 doesn’t ban training or deployment of any models. It doesn’t require licensing or permission to train or deploy any models. It doesn’t threaten prison (yes, some are making this baseless claim) for anyone based on the training or deployment of any models. It doesn’t allow private lawsuits against developers. It doesn’t ban potentially hazardous capabilities. And it’s not being “fast tracked,” but rather is proceeding according to the usual deliberative legislative process, with ample opportunity for feedback and amendments remaining.
What SB 1047 *does* require is that developers who are training and deploying a frontier model more capable than any model currently released must engage in safety testing informed by academia, industry best practices, and the existing state of the art. If that testing shows material risk of concrete and specific catastrophic threats to public safety and security — truly huge threats — the developer must take reasonable steps to mitigate (not eliminate) the risk of catastrophic harm. The bill also creates basic standards like the ability to disable a frontier AI model while it remains in the developer’s possession (not after it is open sourced, at which point the requirement no longer applies), pricing transparency for cloud compute, and a “know your customer” requirement for cloud services selling massive amounts of compute capacity.
Our intention from the start has been for SB 1047 to allow startups to continue innovating unimpeded while imposing safety requirements only on the large, well-resourced developers building highly capable models at the frontier of AI development. Some have raised good-faith concerns that the bill’s current language could burden the smaller developers and academic researchers the bill aims to protect. We are actively exploring changes to make clear that startups and academics who are not spending the massive resources required to train frontier-level models from scratch will face no new requirements under SB 1047.
SB 1047 also creates CalCompute, a public cloud funded with both public and private dollars, in order to provide access for developers and academics who have been priced out of the AI tech boom by large companies.
I am determined to get this right. I am tremendously excited by AI’s potential to do good and committed to supporting innovation in this space by start-ups, academics, labs, and open source developers.
* * *
SB 1047 is our attempt to ensure California continues to develop AI responsibly while protecting the freedom to innovate that makes our state such a unique place to build. We’ve worked to achieve that goal by building a framework around three overarching principles:
Set safety and mitigation requirements that are reasonable and possible for developers to meet without hindering their businesses.
Focus on only the most capable models and only on concrete risks that go beyond what’s possible today with tools like Google.
Protect the freedom to innovate, including open sourcing.
Designing policy that meets these objectives is challenging, and we’ve relied heavily on input from academics as well as industry players at small and large companies alike to strike the balance here. You have my commitment we’ll continue to do so, and if you have ideas for how we can better achieve these goals, I want to hear them.
In particular, since the uptick of attention online about SB 1047, I've had meetings with many people in the open source community, startup community, and academia, and I am currently considering changes to the bill, such as amendments to its definitions of “covered model” and “derivative model.” I welcome continued feedback and would love to meet with others who have constructive ideas on how the bill can better achieve our objectives of AI innovation and safety.
In the spirit of open discourse, below I lay out more details about what the bill does and our thought process behind it, in response to some of the recent discussion:
SB 1047 is light touch safety regulation
My team and I designed SB 1047’s framework to be as light touch and pro-innovation as possible, especially for the startups that power so much innovation in the space. Because of my strong desire for this bill to be light touch, I personally rejected various ideas that have been promoted by major voices in the AI space, including automatic liability for damage caused, a requirement to obtain a license from the state before releasing a frontier model, and the ability of any member of the public to sue a developer for harm caused.
Under SB 1047, developers of the largest models must work to identify and mitigate a narrow class of extraordinarily harmful, hazardous capabilities that the most powerful models might have in the future. They don’t have to reduce risk to zero — that would be impossible — but rather take reasonable steps based on industry best practices and the state of the art and science to reduce risk.
Thankfully, many AI companies are already conducting this testing. The testing being done today would substantially meet the requirements of SB 1047, and under the requirements of the President’s Executive Order, NIST is already hard at work developing an authoritative set of standards that will lay out the requirements more clearly as the new industry standard. But as more investment flows into the space and it becomes more and more competitive, the pressure to move as quickly as possible, potentially at the expense of safety, will only increase. Requiring basic, highly achievable, reasonable safeguards makes sense to prevent cutting corners with one of the most powerful new technologies we’ve seen in decades.
It’s always possible the federal government will step up and grapple with these risks in a binding way. I would be very happy for that to happen, and that federal legislation would most likely preempt this bill. But unless and until that happens, California should not abdicate its responsibility to keep its residents safe from extreme harm.
SB 1047 doesn’t apply to the vast majority of startups
SB 1047’s requirements apply only to an extremely small set of AI developers making the largest, most cost-intensive models, which today cost over $100 million to train (models trained on 10^26 FLOP or those with similar capabilities). The vast majority of AI startups, and all AI application and use-case developers, would avoid these requirements because they are not training such models from scratch.
The bill applies only to concrete and specific risks of catastrophic harm
Today’s AI models don’t display the capability to cause the kind of harm that would trigger SB 1047. However, the National Institute of Standards and Technology, the Department of Homeland Security, and the godfather of AI himself all agree that future models could begin displaying these risks very soon and that empirical testing to detect them is important.
This bill specifically addresses potential severe risks to public safety from these future models, including mass casualties from the development of novel biological, chemical, or nuclear weapons, or more than $500 million in damage to critical infrastructure or from cyber-crime.
If a harm like this were to occur, it would impact the public’s trust in all AI products. It took just one collision for trust in the autonomous vehicle company Cruise to collapse to the point that the company had to cease operations. We should be taking basic precautions to avoid similar disasters for future AI models.
Our focus is on marginal risk
My intention with SB 1047 is to take a practical approach to the risks these models might soon pose. As frameworks put forward by Stanford and Princeton researchers argue, that means focusing on marginal risks that go significantly beyond what exists today with tools like Google. There’s a big difference between asking AI how to make a bomb (you can do that with a search engine) and using AI to design a novel biological weapon and come up with the most efficient plan to deploy it and evade accountability.
We’ve worked hard to draft language that reflects that focus on marginal risk, but it’s a complex and difficult task. If you have suggestions about how we can improve the language, we would love to hear them.
SB 1047 places reasonable and balanced requirements on large-scale developers
Many developers of today’s largest AI models are already testing for the capability to cause catastrophic harm. Our goal is simply to build on those industry best practices to ensure all developers are taking these basic steps as progress continues to accelerate.
If you test your extremely powerful AI model and it doesn’t show signs of extremely hazardous capabilities, then you’re in the clear under SB 1047. If you test your extremely powerful model and it does have extremely hazardous capabilities, then you must take reasonable precautions when releasing it.
SB 1047 is not command-and-control regulation delivered from on high by bureaucrats. Its enforcement provisions are narrow, modest, and targeted.
Shutdown requirements don’t apply once models leave your control
SB 1047 also requires that developers retain the ability to shut down frontier models if needed in the future. However, this shutdown requirement applies only to models that remain in the possession of the original, non-derivative developers. We deliberately crafted this aspect of the bill to ensure that open source developers are able to comply.
SB 1047 provides significantly more clarity on liability than current law
The bill talks about “reasonable” testing, precautions, and safeguards. To a developer that may look like a vague and unclear standard, but the intention is exactly the opposite: SB 1047 clarifies the duties developers have relative to existing law.
Some of the commentary we’ve seen suggests that at least some people believe a developer cannot currently be sued and that SB 1047 will create potential liability where there currently is none. That’s not the case.
The California Civil Code already requires AI developers (and everyone else) to take reasonable care not to cause death, physical harm, or property damage from any model they develop. That’s an extremely broad and vague standard, and recent scholarship has only underlined the lack of clarity for when AI developers could be liable for significant harm from their models. To be clear: If you develop a model *today* of any size and that model causes harm of any scale, someone can try to sue you and, if they prove their case, potentially recover damages. Obviously every case is different, so you may or may not be found liable. But under California tort law — and the tort law of any state — someone can certainly file a lawsuit and attempt to prove liability. This area of the law is new and unsettled, and I’m sure in coming years, courts will provide more clarity about the scope of a developer’s legal responsibilities and liability under existing law. But the reality is that people have the ability *today* — without SB 1047 — to file lawsuits against model developers and to seek to prove liability and damages.
SB 1047, by providing much more specificity than existing tort law, helps clarify what reasonable care requires in this unique area. At the same time, the bill doesn’t impose specific, detailed requirements that might quickly go out of date or hamper innovation. Instead, it looks to industry-developed safety practices, voluntary standards being developed by NIST, and other such sources of guidance.
That clarity will be critically important in supporting the innovators building this new wave of transformative technology. No one is served by rules that are vague and impossible to know in advance, and we’ve worked to craft rules that are significantly clearer than those in effect today.
Enforcement is very narrow in SB 1047
Under SB 1047, only the Attorney General will be able to file a lawsuit. Members of the public and their private attorneys will not be able to sue under this proposal. In my experience, when enforcement is centralized in the Attorney General, that office uses the power sparingly and only against the most extreme violators of the law. That’s because the Attorney General has limited resources and has to carefully choose when to file lawsuits.
The Attorney General will be able to seek damages on behalf of the public if a frontier model causes catastrophic harm and the developer failed to take reasonable precautions. The Attorney General can also seek civil penalties, which are extremely modest except in the case of repeated, flagrant, malicious behavior.
Under the bill, the Attorney General can also prevent a company from developing or releasing a model under very limited circumstances. The Attorney General would have to prove that a company’s behavior posed an imminent threat to public safety — that’s a very high bar that would apply only in the rarest of circumstances.
There is no scenario where developers are criminalized for doing their jobs, as some have inaccurately claimed. The bill’s only criminal penalties are for perjury, which means intentionally lying to the government. No one gets convicted of perjury for honest mistakes. Criminal penalties for perjury are common across a wide array of industries, and they’re appropriate here because of the extreme seriousness of the public safety risks the bill addresses. No one should be deliberately lying to the government about matters of public safety.
The reason these enforcement mechanisms are so narrow is that we want innovation to proceed without the government intruding too closely into the work of California’s innovators. The enforcement mechanisms in the bill would only ever be activated if a company were to act incredibly irresponsibly, or if there were a real and imminent risk to public safety.
Open source is largely protected under the bill
For decades, open sourcing has been a critical driver of innovation and security in the software world. I’m committed to protecting the ability to open source in the vast majority of cases, while also grappling with the uncertainties of this unprecedented new technology. I’ll lay out how we’re currently approaching this complex issue, but if folks have ideas for how we can achieve these goals more effectively, we welcome those ideas.
Does the bill ban open source, as some have claimed? Absolutely not. Almost all open source model developers and application developers will face literally no requirements at all under SB 1047. No garage startups or teenagers tinkering with open model weights at home have to do anything at all under the bill. The only requirements fall on the developer training the original open-sourced model — and only if that model is very powerful, more powerful than any closed source or open source model that exists today.
Like closed source developers, open source developers training these extremely powerful models must be able to shut down their models — but only while those models remain in their possession, not after they have been open sourced. This “shutdown” provision is more friendly to open source developers than closed source developers.
Like closed source developers, open source developers training models more powerful than any released today will have to test their models for a handful of extremely hazardous capabilities. If a developer finds such a capability in its model, the developer must install reasonable safeguards to make sure it doesn’t cause mass harm. But thankfully we have a bit of time. Even the most powerful models today don’t have these hazardous capabilities. The next generation of frontier AI models probably won’t either. Maybe frontier AI models never will, in which case developers will have minimal obligations under this bill beyond testing.
SB 1047 has other pro-innovation and pro-safety provisions
SB 1047 creates CalCompute, a public cloud computing cluster that gives startups and academic researchers low-cost access to the compute needed to create radical new AI innovations. CalCompute will be funded by a combination of public and private dollars.
We’ve also built in some pricing transparency provisions to promote competition in the space. The overall innovation ecosystem is served when companies get competitive access to fundamental technologies, and we hope these requirements will unleash a wave of fast-growing, innovative companies.
In addition, to minimize the risk of malicious actors gaining access to this powerful technology through deceptive means, we’re requiring that customers purchasing massive amounts of compute capacity disclose certain information to their cloud provider. These requirements, known as “know your customer” (KYC) requirements, apply to the cloud providers that developers rely on to train their models.
We want your help to get this right
My staff and I have put countless hours of stakeholder engagement into SB 1047 thus far, and we will continue to do so until we have a standard we’re confident mitigates risk while protecting responsible innovation.
In the coming weeks, we’ll be organizing a town hall to discuss the bill further with folks in the AI community. You can also reach out to my staff anytime.
Sincerely,
Scott Wiener
Senator, 11th District