What does SB 1047 do and why is it needed?

Highlights from SB 1047

  • Covered models

    SB 1047 applies only to AI models larger than any in existence today, which would cost over $100 million to train:
    • Models trained using more than 10^26 FLOP (floating-point operations) of computing power, or other models with similar capabilities
    • The vast majority of startups are not covered by the bill

    The bill addresses only extreme risks from these models:
    • Cyberattacks causing over $500 million in damage
    • Autonomous criminal activity causing over $500 million in damage
    • Creation of a chemical, biological, radiological, or nuclear weapon using AI

  • Requirements for developers

    Under SB 1047, developers test for risks and adopt precautions for models assessed to be risky:
    • Before training: developers adopt cybersecurity precautions, implement the ability to shut down the model, and report safety protocols.
    • Before deploying: developers implement reasonable safeguards to prevent societal-scale catastrophes.
    • After deploying: developers monitor and report safety incidents and continued compliance.
    • Developers of derivative or low-risk models face only simple reporting requirements.

  • Enforcement

    The provisions in SB 1047 are enforced in the following ways:
    • Whistleblower protections are provided to employees at frontier labs to ensure that information on compliance is readily available.
    • The Attorney General can bring civil suits against developers who cause catastrophic harm or threaten public safety by neglecting these requirements.

  • CalCompute

    SB 1047 creates a new CalCompute research cluster to support academic research on AI and the startup ecosystem, inspired by federal work on the National Artificial Intelligence Research Resource (NAIRR) Pilot.

  • Open-source advisory council

    SB 1047 establishes a new advisory council to advocate for and support open-source AI development in California.

  • Transparent pricing

    SB 1047 requires cloud computing providers and frontier model developers to provide fair and transparent pricing, so that price discrimination does not impede competition in California.

FAQs

  • Why is SB 1047 needed?

    California has become a vibrant hub for artificial intelligence. Universities, startups, and technology companies are using AI to accelerate drug discovery, coordinate wildfire responses, optimize energy consumption, uncover rare minerals that produce clean energy, and enhance creativity. Artificial intelligence has enormous potential to benefit our state and the world. California must act now to ensure that it remains at the forefront of dynamic innovation in AI development.

    At the same time, scientists, engineers, and business leaders at the cutting edge of this technology, including the three most cited machine learning researchers of all time, have repeatedly warned policymakers that failure to take appropriate precautions to prevent irresponsible AI development could have severe consequences for public safety and national security. California must ensure that the small handful of companies developing extremely powerful AI models — including companies explicitly aiming to develop “artificial general intelligence” — take reasonable care to prevent their models from causing very serious harms as they continue to produce models of greater and greater power.

  • What does SB 1047 do?

    The bill has two main components:

    Promoting responsible AI development: The bill defines a set of hazardous impacts the largest AI models could have, from cyberattacks to the development of biological weapons. It requires developers of these AI models to conduct self-assessments to ensure these outcomes will be prevented and empowers the Attorney General to take action against developers whose technology causes catastrophic harm or threatens public safety.

    Supporting AI competition and innovation: The bill also promotes ongoing academic research on AI by creating CalCompute, a new state research cluster to support the AI startup ecosystem. The legislation creates a new open-source advisory council that will be tasked with advocating for and supporting safe and secure open-source AI development. The bill also promotes competition by requiring large-scale AI developers to provide fair and transparent pricing.

  • What types of AI models are covered by SB 1047?

    SB 1047 sets out clear standards for developers of AI models trained using more than 10^26 floating-point operations of computing power (and for other models with similar capabilities). These models, which would cost over $100 million to train, would be substantially more powerful than any model that exists today. (A rough illustration of the compute threshold appears at the end of this answer.)

    Specifically, SB 1047 clarifies that developers of these models must invest in basic precautions such as pre-deployment safety testing, red-teaming, cybersecurity, safeguards to prevent the misuse of dangerous capabilities, and post-deployment monitoring. Furthermore, developers of covered models must disclose the precautionary measures they have taken to the California Department of Technology. If the developer of an extremely powerful model causes severe harm to Californians by behaving irresponsibly, or if the developer’s negligence poses an imminent threat to public safety, SB 1047 empowers the Attorney General of California to take appropriate enforcement action.

    SB 1047 also creates whistleblower protections for employees of frontier laboratories, and requires companies that provide cloud compute for frontier model training to institute “know your customer” policies to help prevent the dangerous misuse of AI systems.
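
    For intuition about the 10^26 FLOP threshold, here is a minimal Python sketch. It leans on the widely used rule of thumb that training a dense transformer takes roughly 6 x parameters x training tokens floating-point operations; that formula and the example model sizes are illustrative assumptions for this sketch, not anything specified by SB 1047, which sets only the threshold itself.

      # Illustrative sketch only: SB 1047 specifies the 1e26 FLOP threshold;
      # the 6 * params * tokens estimate and the model sizes below are assumptions.

      THRESHOLD_FLOP = 1e26  # SB 1047's covered-model compute threshold

      def estimated_training_flop(n_params: float, n_tokens: float) -> float:
          """Rough training-compute estimate for a dense transformer."""
          return 6.0 * n_params * n_tokens

      def exceeds_threshold(n_params: float, n_tokens: float) -> bool:
          """True if estimated training compute meets or exceeds the threshold."""
          return estimated_training_flop(n_params, n_tokens) >= THRESHOLD_FLOP

      # A hypothetical 2-trillion-parameter model trained on 10 trillion tokens:
      # 6 * 2e12 * 1e13 = 1.2e26 FLOP, just above the threshold.
      print(exceeds_threshold(2e12, 1e13))   # True

      # A hypothetical 70-billion-parameter model trained on 2 trillion tokens:
      # 6 * 7e10 * 2e12 = 8.4e23 FLOP, more than 100x below the threshold.
      print(exceeds_threshold(7e10, 2e12))   # False

    Raw compute is only a proxy: as noted above, models with similar capabilities can also be covered, and the vast majority of models trained today fall far below this line.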

  • Which harms does SB 1047 address?

    SB 1047 is focused on models capable of causing extraordinary harms: the creation of weapons of mass destruction, or $500 million in damage caused through cyberattacks or autonomously executed criminal activity.

    These are extreme capabilities that models currently do not possess. It’s possible that models trained in the next couple of years will have these capabilities, and so developers need to start taking reasonable, narrowly targeted precautions when training the most advanced models.

  • How does SB 1047 promote innovation in California?

    SB 1047 helps ensure California remains the world leader in AI innovation by establishing a process to create a public cloud-computing cluster that will conduct research into the safe and secure deployment of large-scale artificial intelligence (AI) models. The cluster will allow smaller startups, researchers, and community groups to participate in the development of large-scale AI systems, helping to align them with the needs of California communities.

    Additionally, to support the flourishing open-source ecosystem, SB 1047 creates a new advisory council to advocate for and support safe and secure open-source AI development.

    Finally, to ensure that smaller startup developers can compete on equal footing with larger players, SB 1047 requires cloud-computing companies and frontier model developers to provide transparent pricing and avoid price discrimination.

  • Will startups be able to comply with SB 1047?

    Yes. SB 1047’s requirements apply only to an extremely small set of AI developers: those building the largest, most cost-intensive models, which today would cost in excess of $100 million to train. The vast majority of AI startups, and all AI application and use-case developers, would have few or no new duties under SB 1047.

  • Is compliance feasible? Are these requirements in line with industry practice?

    Yes. Similar safety testing and disclosure are already being done by many leading developers, and the voluntary commitments to the White House and President Biden’s Executive Order call for similar actions. Indeed, many frontier AI developers have already put many of these precautions in place, such as capabilities testing and the development of safety and security protocols.

  • Can developers open-source models under SB 1047?

    A developer can open-source any AI model covered by this bill so long as they conduct safety tests and reasonably determine that it does not have specific, highly hazardous capabilities. The author is actively working with developers to ensure these tests are minimally burdensome and continues to welcome input on how innovation can be fostered safely.

  • What does the bill require of developers?

    The bill requires developers of the largest AI models, which would cost well over $100 million to train today, to conduct self-assessments to protect against potential risks and to adopt a set of defined precautions. These steps include:

    Before training a model self-assessed to be risky: developers must adopt cybersecurity precautions, implement the ability to shut down the model, follow guidance from the National Institute of Standards and Technology and other standard-setting organizations, and report safety protocols.

    Before deploying a model self-assessed to be risky: developers must implement reasonable safeguards to prevent societal-scale catastrophes.

    After deploying a model self-assessed to be risky: developers must monitor and report safety incidents and continued compliance.

  • Do developers need approval from a government agency before training or releasing a model?

    No. Developers self-assess whether their models qualify for a “limited duty exemption” and need not wait for approval from any government agency.