July 1, 2024

Y Combinator
580 20th St
San Francisco, CA 94107

Andreessen Horowitz
2865 Sand Hill Rd #101
Menlo Park, CA 94025

RE: Response to inaccurate, inflammatory statements by Y Combinator & a16z regarding Senate Bill 1047

To the Leadership of Y Combinator and a16z:

I write in response to a series of recent statements (letters, podcasts, social media posts, a website) by your organizations about Senate Bill 1047 — statements you have promulgated to startup founders and the public at large. I deeply appreciate your firms’ engagement on the best way to regulate artificial intelligence (AI) for safety, while promoting robust innovation — the dual goals of SB 1047. Both Y Combinator and a16z are respected, talented organizations that occupy important spaces in the tech innovation ecosystem. I eagerly await detailed proposals from both of your organizations — which, to date, I have not received — for how you think the bill might be improved.

While I always appreciate engagement and constructive feedback on any bill I author, I also ask that those engaging be accurate in their characterization of those bills. Unfortunately, various statements by YC and a16z about SB 1047 are inaccurate, including some highly inflammatory distortions, e.g., that SB 1047 will send model developers to prison for failing to anticipate harms from their AI systems (false), that SB 1047 will effectively ban open source releases (false), and that SB 1047 will cause technology companies and startups to relocate to other states because only developers located in California will be covered (false).

Your technical expertise and practical experience developing AI is a welcome and necessary input, and I have demonstrated over and over my desire for constructive feedback and willingness to change SB 1047 in response to that feedback. But I must correct the record for startup founders and others who wish to offer thoughts on the legislation. As we go through this process, it is critically important that the debate about the bill be rooted in fact — i.e., what the bill actually does — not fiction or exaggeration.

As you know, I introduced SB 1047 in February of this year, after having released a detailed outline of the bill in September 2023 for the purpose of soliciting early feedback. SB 1047 is designed to promote safe and robust innovation and deployment of frontier artificial intelligence models. The bill is a focused, narrow, and light-touch approach to AI safety. It does not require model developers to get permission or a license from the government to train or release a model. Nor does it ban models above a certain size.

Rather, SB 1047 requires developers of the largest models (costing more than $100 million to train, i.e., only large labs, not startups) to conduct a safety evaluation of the model before releasing it, in addition to other basic safety precautions (e.g., being able to shut down a model that is still in one’s possession). The large labs — the ones that would be covered by SB 1047 — have already publicly committed to perform this safety testing.

It is worth emphasizing that AI labs have already agreed to take safety measures in accordance with the White House Voluntary Commitments. SB 1047 clarifies what steps need to be taken by such developers and makes these and other important safety requirements a matter of law rather than merely voluntary commitments that may shift and change as companies change leadership and face new competitive pressures.

SB 1047 is the product of hundreds of conversations my team and I have had with a broad range of experts, both supporters and critics, including startup founders, large tech companies, academics, open source advocates, and others.

San Francisco’s technology ecosystem leads the world, and both YC and a16z have played major roles in the creation of historically significant companies that provide extraordinary benefits to California and the rest of the world. I’m proud to represent this amazingly creative and innovative city. And I’m especially proud that San Francisco is leading the way on AI innovation. AI has so much potential to make the world a better place, and innovation is key to achieving that goal.

YC was among the very first stakeholders to which I reached out in 2023 for feedback on what would become the bill in print today. My goal then was the same as it is now: To produce and pass legislation that promotes safety among frontier model developers and fosters innovation in equal measure.

We are still awaiting YC’s detailed feedback on how SB 1047 might be improved to better promote the next generation of AI startups. I invite YC to offer a detailed proposal to improve the bill or otherwise address its aims, as protecting the startup ecosystem is a top priority for me. In fact, my commitment to avoid unintentionally sweeping startups into the bill has already motivated several substantive amendments, including a requirement that a developer must spend at least $100 million training an AI model, or use massive amounts of compute to significantly fine-tune a model, in order to face any obligations under SB 1047.

While a16z’s engagement began later in the legislative process, I deeply appreciate many of the substantive points the firm and its founders have raised. I welcome good ideas from all sources and hope to see a substantive and detailed proposal from a16z as well, since we still have time to amend the bill to best reflect our shared goal of promoting responsible AI development.

We all benefit from objective and fact-based debate. I always welcome disagreements over policy: such disagreements are fundamental to crafting the strongest possible legislation that is reflective of multiple perspectives. However, disagreements over policy won’t lead to productive outcomes unless all parties approach the process showing respect for facts. For this reason, I find it necessary to correct several of the inaccuracies conveyed in YC’s recent opposition letter to SB 1047 and a16z’s recent statements:

1. False claim that SB 1047 will send model developers to jail for failing to anticipate misuse

YC’s letter makes the categorically false — and, frankly, irresponsible — claim that, “creating a penalty of perjury would mean that AI software developers could go to jail simply for failing to anticipate misuse of their software.” That is absolutely untrue. It’s a scare tactic designed to convey to founders that this bill will land them in jail if something goes awry with a model they build. Putting aside that the bill doesn’t apply to startups, perjury requires knowingly making a false statement under oath — an intentional lie, whether on a driver’s license application, a tax return, or many other statements to the government. Good faith mistakes are not perjury. Harms that result from a model are not perjury. Incorrect predictions about a model’s performance are not perjury. To suggest otherwise does nothing other than generate fear among developers about criminal exposure that simply does not exist in the bill.

Similarly, a16z’s statement that SB 1047 would “impose civil and in some cases criminal liability on model developers” is misleading in suggesting that startups are even covered by the bill — they are not — and once again states that developers will be subject to “criminal liability,” even though that liability is limited to people who intentionally lie to the government.

2. False claim that SB 1047 creates new liability for startups

YC and a16z both stress that SB 1047 creates potential liability for model developers. Putting aside that startups aren’t covered by the bill — only models that cost over $100 million to train are covered — YC and a16z fail to mention that model developers (large or small) can *currently* be sued under existing, longstanding tort law if their model causes or contributes to harm. That is the case not just in California but in most, if not all, states. That *existing* liability risk applies to *all* models, not just the huge ones covered by SB 1047. And that *existing* liability risk stems from any harmed party being able to sue (far broader than SB 1047). This is *existing* liability law, creating *existing* liability risk for all developers, which applies with or without SB 1047.

The liability created by SB 1047 is profoundly smaller and narrower than existing law and provides much more clarity than existing law: it allows only the Attorney General to file suit, and only if a developer of a covered model (more than $100 million to train) fails to perform a safety evaluation or take steps to mitigate catastrophic risk, and a catastrophe then occurs.

Telling startups that they’re subject to some sort of new, significant risk of liability is simply inaccurate and not based in reality.

3. False claim that SB 1047 will undermine innovation in California and cause companies to leave or start up elsewhere

YC’s opposition letter states “Non-Californian companies will be free from this burden as written, thus creating a massive incentive to move this innovation out of California.” A16z has made similar claims. These claims are simply inaccurate, given that SB 1047 is not limited to developers who build models in California; rather, it applies to any developer doing business in California, regardless of where they’re located.

For many years, any time California has regulated an industry, including technology (e.g., California’s data privacy law), to protect health and safety, some have insisted that the regulation will end innovation and drive companies out of our state. It never works out that way; instead, California continues to grow as a powerful center of gravity in the tech sector and other sectors. California continues to lead on innovation despite claims that its robust data privacy protections, climate protections, and other regulations would change that. Indeed, after some in the tech sector proclaimed that San Francisco’s tech scene was over and that Miami and Austin were the new epicenters, the opposite proved to be true, and San Francisco quickly came roaring back. That happened even as California robustly regulated industry for public health and safety.

San Francisco and Silicon Valley continue to produce a deep and unique critical mass of technology innovation. Requiring large labs to conduct safety testing — something they’ve already committed to do — will not in any way undermine that critical mass or cause companies to locate elsewhere.

In addition, an AI lab cannot simply relocate outside of California and avoid SB 1047’s safety requirements, because compliance with SB 1047 is not triggered by where a company is headquartered. Rather, the bill applies when a model developer is doing business in California, regardless of where the developer is headquartered — the same way that California’s data privacy laws work. So unless a lab is going to stop doing business in California — the fifth largest economy in the world and the global epicenter of the tech sector — the lab is covered if it creates a large model that costs more than $100 million to train. This is a key reason why the claim that companies will flee California or start elsewhere due to this law is not reality.

To put it more simply: When it comes to producing the forefront of technology innovation, Miami and Austin are not a thing. San Francisco and Silicon Valley, by contrast, are a big thing, and that’s not going to change due to a requirement for the largest mega-labs to perform safety testing that they’ve already committed to doing.

4. Inaccurate claims about specific provisions in the bill

Claim that SB 1047 will somehow cover non-frontier AI models

YC claims that as written, “a ‘frontier model’ designation could plausibly apply to existing software, like Google's search algorithm or any social media recommendation algorithm,” and that, “courts could easily interpret SB 1047 to apply to even pedestrian software.”

To clarify, the term “frontier model” is not a term used in SB 1047. The text of SB 1047 defines an “artificial intelligence model” as “an engineered or machine-based system that varies in its level of autonomy and that can, for explicit or implicit objectives, infer from the input it receives how to generate outputs that can influence physical or virtual environments.” This definition follows a similar definition in President Biden’s Executive Order and has been agreed upon by both Houses of the Legislature in close consultation with officials from the European Union to ensure harmonization with the EU AI Act’s definition. With this definition of an AI model — coupled with the very clear 10^26 FLOPs and $100 million training thresholds, which target only the largest and most powerful models — a court would not “easily interpret SB 1047 to apply to even pedestrian software.” Not even close.

Claim that SB 1047 is vague in some respects

YC’s letter suggests that terms used in SB 1047 will be nothing but “fodder for judges” to interpret loosely and in a way that negatively impacts the tech industry. We’ve worked very hard to craft the bill’s language as tightly and clearly as possible. As with any legislation, language can always be improved. It’s for that exact reason that, for months now, I’ve sought constructive feedback from YC and a16z. If you or anyone else believes that particular language in the bill isn’t clear enough, by all means tell me and suggest alternative language. I continue to have an open door.

Claim that the triggering threshold for models is arbitrary

YC’s opposition letter states that establishing 10^26 FLOPs as a regulatory threshold is problematic because technology is still evolving and this metric may not adequately capture the capabilities or risks associated with future models. I can understand this concern and appreciate you raising it.

The 10^26 FLOPs threshold is a clear way to exclude from safety testing requirements many models that we know, based on current evidence, lack the ability to cause critical harm. Current publicly released models, none of which were trained using 10^26 FLOPs, have been tested for highly hazardous capabilities and would not be covered by the bill. By setting this threshold, which is also used in the White House Executive Order, we can reduce regulatory burden by excluding models trained at or below current levels of training compute.

To make sure SB 1047 remains focused on only the largest and most powerful models, the bill also specifies that a “covered model” is a model that is trained on 10^26 FLOPs or more AND that costs at least $100 million (adjusted for inflation) to train.

To this end, recent amendments to the bill give the Frontier Model Division the authority to change the 10^26 FLOPs threshold after 2027, taking into account the latest science and input from academic researchers, the industry, and members of the open source community. A more flexible compute threshold will allow SB 1047 to adapt as we learn more about AI safety and risk.

Claim that the “kill switch” and safety evaluation requirements “could function as a de facto ban on open-source AI development”

I strongly support open source AI development. Open source plays a critical role in democratizing AI innovation and allowing innovators to create the next great improvement for humanity. SB 1047 does nothing to stifle the power of open source model development, and the bill has been written and further amended to offer protections for open source.

The phrase “kill switch” is evocative but misleading. SB 1047 includes an emergency shutdown provision that applies only to models within the control of the developer. This requirement does not extend to open source models over which the developer has no control. This was always my intent, and we believe the bill has never required the original developer to retain shutdown capabilities over derivative models no longer in their control. However, we amended the bill recently to make crystal clear that the shutdown requirement does not apply once a model has left a developer’s possession.

SB 1047 also explicitly recognizes the important role that open-source models play in the economy and AI ecosystem. To support the flourishing open-source ecosystem, SB 1047 creates a new advisory council to advocate for and support safe and secure open-source AI development. Moreover, recent amendments mandate that the board overseeing the Frontier Model Division include a seat for a member of the open-source community and that the Division’s regulatory processes solicit input from members of the open-source community.

California’s technology sector would not be what it is without a robust open source community, and my goal has always been to support open source. I continue to welcome feedback on how best to achieve that objective.

Conclusion

Neither YC nor a16z has provided a realistic alternative for addressing the risks identified by SB 1047 — risks that leaders in the field like the godfathers of AI, Yoshua Bengio and Geoffrey Hinton, have urged policymakers to address. (Bengio and Hinton have specifically endorsed SB 1047.) Neither firm has indicated how, if not through SB 1047’s approach, we should reduce the risk of a large AI model causing catastrophic harm.

YC’s letter offers the summary thought that a more “balanced approach” is needed, one that “protects society from potential harm while fostering an environment conducive to technological advancement that is not more burdensome than other technologies have previously enjoyed.” This is precisely what SB 1047 does and is intended to do: foster both innovation and safety. The bill protects and encourages innovation by reducing the risk of critical harms to society, harms that would also jeopardize public trust in emerging technology. A collapse in public trust could constrain or perhaps even negate the industry’s license to experiment, which would be a very bad result for humanity.

I remain open to working together toward the goals YC, a16z, and their founders have outlined regarding innovation and responsible development in California. But it is my firm belief that the time to act on smart, pro-innovation regulation is now.

As always, please do not hesitate to reach out to me directly.

Sincerely,

Scott Wiener
Senator, 11th District