August 11, 2024

The Honorable Zoe Lofgren
United States House of Representatives
1401 Longworth House Office Building
Washington, DC 20515

RE: Response to Your Letter Regarding California Senate Bill 1047

Dear Representative Lofgren:

I hope this letter finds you well. I am writing in response to your recent correspondence concerning California Senate Bill 1047, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act. SB 1047’s goal is to promote innovation in the incredibly exciting AI space while also promoting public safety. These two goals are not mutually exclusive. Indeed, they are complementary.

While I deeply respect your expertise and appreciate your engagement on this critical issue, I must respectfully disagree with several claims made in your letter, some of which are factually inaccurate.

While the two of us have not had the opportunity to discuss your concerns directly — my staff’s receipt of your letter was the first I heard of them — I would be delighted to discuss the bill with you at any time. My door is always open. Nevertheless, I hope this response can address your concerns, ensure we are talking about the same bill, and open a constructive dialogue between us on this important matter.

Let me begin by addressing some of the specific points raised in your letter:

1. SB 1047’s Focus on Safety

Your letter criticizes the bill’s focus on safety concerns, or, as you put it, “addressing hypothetical existential risks while largely ignoring demonstrable AI risks like misinformation, discrimination, non-consensual deepfakes, environmental impacts, and workforce displacement.”

I dispute this characterization of the risks SB 1047 aims to address. The risks on which SB 1047 focuses are well-documented and acknowledged by the developers of our most advanced frontier models. Certain opponents of SB 1047 regularly dismiss the bill’s supporters as “doomers” focused on “existential,” “far-fetched,” “science fiction” threats à la “Terminator.” Whatever the merits of the debate about “Terminator”-style existential risks from AI, those risks are not the focus of this legislation. SB 1047 addresses severe and very tangible risks to national security and public safety, such as the ability of advanced AI models to enable the creation of novel pathogens or to facilitate massive cyberattacks that shut down the electric grid or the banking system. These are the same types of risks discussed in President Biden’s AI executive order, and concern about them is widespread among experts in the national security community. As former national security advisor Susan Rice recently told the New York Times, “We’ve never had a circumstance in which the most dangerous, and most impactful, technology resides entirely in the private sector... It can’t be that technology companies in Silicon Valley decide the fate of our national security and maybe the fate of the world without constraint.”1

The bill addresses hazardous capabilities, including the ability to create a chemical, biological, radiological, or nuclear (CBRN) weapon. Recent reporting indicates that national security experts have found that today’s advanced AI models are rapidly approaching a dangerous level of capability in building biological weapons.2 One leading expert who intensively tested one of these models stated that “these tools had gone from absolute crap a year ago to being quite good.” Kevin Esvelt, a biologist and associate professor at the Massachusetts Institute of Technology, said he expects that advanced AI models will soon be able to create novel pathogens, and that “today we cannot defend against those things.” OpenAI3 and Anthropic4 have both identified CBRN capabilities as a potential threat posed by their models. The National Institute of Standards and Technology (NIST) identifies CBRN risk as an area of concern5 and recommends that developers thoroughly test and monitor AI systems for these capabilities as they advance. Given the substantial evidence of these risks and the catastrophic (though not necessarily existential) harms they enable, it is appropriate for lawmakers to proceed with caution.

I also respectfully reject the characterization that SB 1047 focuses on future risks at the expense of more immediate concerns. The confusion here may lie in the different ways our two bodies approach the legislative process. While it is common in Congress for legislative proposals on a similar topic to pass together in an omnibus package, in Sacramento we typically consider legislative proposals one by one, weighing each on its merits. It is rarely our expectation that an individual bill will single-handedly resolve every aspect of a problem.

Since you are taking an interest in AI legislation before the California Legislature, I will note some of the legislation we are considering this session, thanks to the excellent work my colleagues have done (with my support) to address concerns around discrimination, non-consensual deepfakes, and misinformation from the misapplication of AI:

  • AB 1836 (Bauer-Kahan) - Creates a civil cause of action against a person who, without authorization, produces, distributes, or makes available a digital replica of a deceased personality’s voice or likeness, unless the use falls into a specified exception.

  • AB 1856 (Ta) - Expands the crime of “revenge porn” to include intentional distribution of a deepfake of the intimate body parts of an identifiable person or a deepfake of the person engaged in sexual acts, as specified.

  • AB 2355 (Carrillo) - Requires a political advertisement that is generated in whole or in part using AI to include a disclosure.

  • AB 2655 (Berman) - Requires large online platforms, as defined, to block the posting or sending of materially deceptive and digitally modified or created content related to elections, or to label that content, during specified periods before and after an election.

  • AB 2839 (Pellerin) - Prohibits the distribution of campaign advertisements and other election communications that contain media that has been digitally altered in a deceptive way, except as specified.

  • AB 2930 (Bauer-Kahan) - Requires developers and users of automated decision tools to conduct and record an impact assessment, including the intended use, the makeup of the data, and the rigor of the statistical analysis.

  • AB 3211 (Wicks) - Requires a conversational AI system (such as a chatbot) to clearly and prominently disclose to users that the conversational AI system generates synthetic content.

  • SB 933 (Wahab) - Specifies that computer-generated images, for purposes of statutes that criminalize child pornography, include images generated through the use of artificial intelligence.

  • SB 942 (Becker) - Places obligations on businesses that provide generative artificial intelligence systems to develop and make accessible tools to detect whether specified content was generated by those systems (watermarking).

We began this legislative session with over 50 AI-related bills, and as of now there are some 30 still proceeding through our legislative process. Please note that a number of the same technology companies opposing SB 1047 also oppose some or all of these bills. If you are interested, my office would be happy to provide a complete list of pending AI legislation.

2. The Role of State Legislation

While I deeply respect Congress’s powerful role in addressing technology policy, the reality is that federal action on technology regulation has been limited at best. With the exception of banning TikTok, Congress has not passed major technology legislation since computers used floppy disks to share data. Here we are in 2024, and Congress has not passed data privacy legislation, legislation to protect people from social media harms, or legislation enshrining net neutrality in law. I know that you and others are working hard on these issues — and I deeply admire your work in the face of massive institutional headwinds — but Congress has yet to act on these critical technology issues.

In the absence of Congressional action, California has repeatedly stepped in to legislate on crucial technology policy areas, including data privacy, social media regulation, and net neutrality. Given the rapid pace of AI’s advancement and the serious consequences to the public’s safety and wellbeing if the risks are realized, we have a duty to use our authority to safeguard the public.

3. Timing and Development of SB 1047

Contrary to your somewhat puzzling claim that SB 1047 is “moving quickly,” we have undertaken a lengthy, deliberate, and thorough process in crafting this legislation. We announced the bill nearly a year ago, publishing a detailed outline in September 2023,6 and introduced the full bill six months ago.7 The bill then advanced through our usual legislative process, passing four policy committees, one fiscal committee, and one floor vote, by wide margins and at times with bipartisan support. Throughout the past year, we have worked diligently with a broad array of stakeholders, including startups, large tech companies, investors, academics, national security experts, and others. We have made significant amendments to the bill based on this feedback, including multiple substantial changes in direct response to concerns expressed by the open source community.

4. Scope and Application of the Bill

SB 1047 is narrowly focused on the largest AI models and does not apply to startups or smaller companies. The bill only covers models that cost over $100 million to train (or derivative models that have been overhauled at significant expense), a threshold that excludes the vast majority of AI developers. This focus on the largest models is intentional, as these are the systems with the greatest potential for both benefit and harm.

The requirements of the bill are also well-suited to the current early stage of the technology’s development. Rather than taking an overly invasive approach by requiring licensing or pre-clearance from a government entity before developing or deploying an advanced AI model (an approach recommended by Sam Altman8), or an overly prescriptive approach by requiring that developers abide by a rigid set of rules, the bill’s requirements are modest and straightforward.

Developers of the largest models must test their models for the capability to cause catastrophic harm. If they identify such capabilities, they must take reasonable steps to mitigate them. These requirements are reasonable, and comport with commitments the largest AI developers have already made to the White House and in Seoul, South Korea.

While NIST has not yet issued final guidance, it recently issued a detailed draft discussing best practices for risks from dual-use foundation models.9 These commitments in Seoul and this draft framework from NIST push back on the idea in your letter that the standards SB 1047 refers to “do not yet exist.” SB 1047 recognizes this field is still evolving, and by referring to standards issued by NIST and industry best practices, the bill’s framework can robustly evolve over time rather than become outdated. If we insist on waiting for standards to become completely clear before enacting legislation to protect the public, the result will be to never enact any legislation at all.

5. "Kill Switch" Provision

Your letter inaccurately describes the bill’s emergency shutdown provision, often mistakenly referred to as a “kill switch.” Contrary to your claim, SB 1047 only requires a model developer to be able to shut down a model if the model is in the developer’s possession. The bill is clear that once the model is no longer in the developer’s possession — for example, after it has been open-sourced — the developer is no longer responsible for being able to shut it down. This provision has been carefully crafted to balance safety concerns with the need to support open-source development.

6. Impact on Innovation and California's Economy

I respectfully — and strongly — disagree with your assertion that SB 1047 will cause tech companies to leave California. We have heard similar claims whenever California enacts regulations to protect the public, including when we passed our state’s data privacy law. These predictions have consistently proven false, and California remains the global epicenter of technological innovation.

Moreover, SB 1047’s safety obligations are not triggered by a developer being located in California but rather by doing business in California. Developing a model outside of California does not exempt a company from the bill’s requirements unless the company does no business at all in the fifth-largest economy in the world. This approach ensures that the bill does not unfairly disadvantage California-based companies, while still providing necessary protections for our residents.

7. Promoting Both Innovation and Safety

The heart of SB 1047 is to require that the largest AI labs perform the safety evaluations that they have repeatedly committed to perform. The public should not have to rely on voluntary, unenforceable industry commitments to protect public health and safety. History has shown that this approach rarely serves society well, including in the technology sector.

I firmly believe that innovation and safety are not mutually exclusive. SB 1047 attempts to realize that belief by establishing CalCompute, a public cloud compute cluster, alongside the bill’s safety provisions. We can and must pursue both innovation and safety simultaneously. AI has tremendous potential to make the world a better place and to solve some of the hardest problems we confront. However, as with any powerful technology, there are risks that we must address proactively. These include concerns about deepfakes, disinformation, algorithmic discrimination, harm to critical infrastructure, and threats to public safety.

***

I want to emphasize that SB 1047 is the product of extensive consultation, careful consideration, and a commitment to fostering both innovation and safety in AI development. As has been the case for the past year, I remain open to constructive feedback on the bill. My goal has always been, and remains, to get this right.

Thank you, and please do not hesitate to reach out to me to discuss further.

Sincerely,

Scott Wiener

Senator, 11th District

1 Nicholas Kristof, “A.I. May Save Us or May Construct Viruses to Kill Us,” New York Times, July 27, 2024, https://www.nytimes.com/2024/07/27/opinion/ai-advances-risks.html

2 Riley Griffin, “AI-Made Bioweapons Are Washington’s Latest Security Obsession,” Bloomberg, August 2, 2024, https://www.bloomberg.com/news/features/2024-08-02/national-security-threat-from-ai-made-bioweapons-grips-us-government

3 OpenAI, “GPT-4 System Card,” March 23, 2023, https://cdn.openai.com/papers/gpt-4-system-card.pdf

4 Dario Amodei, “Written Testimony before the Judiciary Committee Subcommittee on Privacy, Technology, and the Law,” July 25, 2023, https://www.judiciary.senate.gov/imo/media/doc/2023-07-26_-_testimony_-_amodei.pdf

5 NIST, “Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile,” July 2024, https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf

6 Billy Perrigo, “Exclusive: California Bill Proposes Regulating AI at State Level,” Time, September 13, 2023, https://time.com/6313588/california-ai-regulation-bill/

7 Gerrit De Vynck and Cat Zakrzewski, “In Big Tech’s backyard, California lawmaker unveils landmark AI bill,” Washington Post, February 8, 2024, https://www.washingtonpost.com/technology/2024/02/08/california-legislation-artificial-intelligence-regulation/

8 Sam Altman, “Written Testimony Before the U.S. Senate Committee on the Judiciary,” May 15, 2023, https://www.judiciary.senate.gov/imo/media/doc/2023-05-16%20-%20Bio%20&%20Testimony%20-%20Altman.pdf

9 NIST, “Managing Misuse Risk for Dual-Use Foundation Models,” July 2024, https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.800-1.ipd.pdf