Setting the Record Straight: A Response to House Democrats on SB 1047

By Sunny Gandhi, VP Political Affairs, Encode Justice

Dear Speaker Emerita Pelosi and Representatives Lofgren, Eshoo, Khanna, Peters, Cárdenas, Bera, Barragán, and Correa,

Last week, you issued commentary—in the form of a letter to Governor Gavin Newsom and a press statement reiterating many of the same points—regarding SB 1047, the California AI safety bill that Encode Justice co-sponsors, which just cleared a final hurdle before heading to a floor vote later this month.

It is frankly surprising to witness members of Congress, mired in gridlock on federal AI regulation, attempt to obstruct state-level progress on such a critical matter. What's even more puzzling is the—intentional or unintentional—attempt to do so with arguments and inaccuracies that clearly echo talking points often presented by industry stakeholders resistant to regulation. With Washington facing legislative hurdles, California has chosen to take decisive action to foster responsible AI development. If you, in your capacity as federal lawmakers, feel strongly that state initiatives are inappropriate, I urge you to redirect your energy toward crafting federal legislation that targets the risks SB 1047 attempts to mitigate. The mechanism of federal preemption exists precisely for this purpose—Congress retains the ability to enact federal legislation that would override or build upon state legislation. However, given both the rapid pace of technological development and the lack of federal action, it is not only appropriate but essential for states like California to create guardrails for responsible artificial intelligence.

This situation is, unfortunately, not unique to the AI space. Congressional action on technology issues has been woefully inadequate for decades. At the time of this letter, America still lacks comprehensive legislation for data privacy protections, social media harm mitigation, and net neutrality. The sole exception to congressional inaction—the TikTok ban—while significant, barely scratches the surface of the complex tech landscape we face today. While I acknowledge and respect that you and your colleagues have faced countless institutional hurdles and resistance, the fact remains that Congress has been unable to take decisive action on multiple fronts.

This inactivity contrasts starkly with the urgency expressed by those of us who will inherit the consequences of your decisions—or indecisions. Youth activists have been raising the alarm about the potential dangers of an unchecked technology industry for years. Those dangers have been most evident in the realm of social media, where inaction has resulted in rampant misinformation, cratering mental health, and fundamental challenges to our democratic processes. With AI, the stakes are even higher. That’s why, this time, we are calling for immediate and thoughtful action before the harms occur. As legislators, it is crucial that you not impede those who are proactively working toward solutions and instead take meaningful steps to address these critical issues.

In this post, I address some of the major inaccuracies in the recent letter to set the record straight on what SB 1047 would—and would not—do, and how the bill would impact AI safety and innovation in California. If the bill passes the Assembly, I urge you to retract both your opposition and your veto recommendation to Governor Newsom.

The Science of Testing

At multiple points in the letter, the signatories make the same basic argument: we need to know more about AI safety and testing before taking action.

The science of AI safety is still evolving, but the idea that society should wait for perfect information about risk before mitigating risk simply does not hold water. This is especially true as generative AI models become more and more powerful, increasing their potential to cause critical harm. In fact, that is precisely why we need SB 1047—it will incentivize developers to drive both safety and innovation at the frontier of generative AI.

Knowing that models can behave in unexpected ways and that not all potential harms can be foreseen, SB 1047 simply requires that developers take “reasonable care” to test covered models for the ability to cause critical harm and take reasonable precautions to mitigate risks before release. The letter implies that these requirements are not yet possible—but importantly, these are things that many of the top labs have already agreed to do as part of the White House voluntary commitments.

And that is not even to mention claims in the letter that are just outright false. For example, the signatories claim that the bill would require developers to adhere to NIST guidance around testing that does not yet exist. This is incorrect on two counts: (1) the bill would not make this a requirement and (2) NIST released guidance in July 2024, which provides detailed recommendations on managing risk for foundation models.

Open Source

At a high level, the letter suggests SB 1047 would slow down innovation within the open source ecosystem. This is inaccurate.

One of the goals of SB 1047 is to protect startups and the open source ecosystem. To that end, the legislation includes a number of common-sense safety provisions designed to provide open source developers with additional clarity on what is—and is not—their responsibility.

For example, if an open source model is significantly fine-tuned by another developer with training costs in excess of $10 million, the responsibility for ensuring safety rests on the fine-tuner and not on the original developer. In addition, the emergency shutdown requirement only applies to models within the control of the original developer.

It also creates a public cloud computing cluster, CalCompute, which the signatories of the letter say they support. The purpose of CalCompute is to ensure open source developers and academic researchers have access to the compute power they need to continue driving safety and innovation at the frontier of generative AI.

Impact on California

The signatories also claim that “there is real risk that companies will decide to incorporate in other jurisdictions or simply not release models in California.” Others have made this claim before, but it is entirely unsubstantiated by the history of Silicon Valley.

California has by far the most robust AI innovation ecosystem in the world, sitting at the intersection of people, capital, research, and more. No one should take that for granted, but there is no evidence that a light-touch regulatory regime aimed at improving safety would somehow undermine the strength and attractiveness of that ecosystem.

In fact, AnnaLee Saxenian, a UC Berkeley professor and respected expert on regional economies and Silicon Valley in particular, has explained why, referring directly to SB 1047: “the Silicon Valley ecosystem isn’t going away because of reasonable regulation that carries a small burden of compliance. A balanced regulatory regime for AI developers in California will reinforce Silicon Valley’s advantage over the rest of the world.”

One key thing to understand about SB 1047 is that a company cannot simply avoid this legislation by moving its headquarters to another jurisdiction—SB 1047 will apply to all companies doing business in California. The only way to avoid this regulation would be to cease doing business in the state altogether. For reference, California has the fifth largest economy in the world. That is not a sensible business calculation, especially considering the relatively low estimated cost of complying with this bill. And let us not forget, SB 1047 would only cover the most advanced models, developed by billion- and trillion-dollar companies.

A Comprehensive Approach to Risk

Finally, the signatories of the letter create a false dichotomy between addressing short-term and long-term AI risk. What we really need is a “both/and” approach to risk mitigation.

For example, the letter calls out the need for legislation to address immediate AI risks, like deepfakes and misinformation. On this, we agree. We do need regulation to address these immediate risks, and the signatories are correct that SB 1047 does not address them. Encode Justice operates on a philosophy of addressing both current and future harms. We continue to work on issues such as regulating non-consensual pornographic deepfakes, remedying civil liberty violations, and setting standards on the use of AI in warfare.

SB 1047 is not—and should not be—the only AI regulation. While there is a need for legislation to address immediate harms, we also need to safeguard against critical, society-wide risks—exactly the risks SB 1047 would help to mitigate.

To be clear: critical harms are no longer purely hypothetical, as the signatories suggest. Leading researchers say there are warning signs that the next generation of models may have new capacity to cause societal-level harms. Systems have already provided practical assistance with acquiring biological weapons. Another recent study found that GPT-4 could be successfully modified into a system with rudimentary web-hacking capabilities. The White House has grown alarmed over the current possibility of novel biological weapons. And these are just the models currently available. As models become more and more capable, we should expect these threats to only increase in scope and scale.

Scientists, engineers, and business leaders at the cutting edge of this technology have repeatedly warned policymakers that failure to take appropriate precautions to prevent irresponsible AI development could have severe consequences for public safety and national security. Two of the most cited AI scientists, Yoshua Bengio and Geoffrey Hinton, support SB 1047 and have urged Gov. Newsom to sign it. With all due respect to the signatories, we should first and foremost be listening to the experts who laid the very foundation for the field.

The public also supports this bill. Recent polling has found that 77% of likely California voters support SB 1047, with strong bipartisan backing. This aligns with broader public concern around critical AI harms: 86% of voters nationally believe AI could accidentally cause a catastrophic event. With broad support for the bill—especially among California Democrats—your opposition stands in stark contrast to what your constituents actually want. Your job as elected officials rests on respecting what voters want.

We need a fact-based debate around SB 1047. But this recent letter has taken much of the same misinformation that has made logical debate difficult and amplified it at the Congressional level. At Encode Justice, we are proud to co-sponsor SB 1047—and our door is always open for lawmakers looking to learn more about where this first-in-the-nation effort to regulate AI actually stands.

Sincerely,

Sunny Gandhi
VP, Political Affairs
Encode Justice