California Senate Bill 1047, the “Safe and Secure Innovation for Frontier Artificial Intelligence Models” Act

The rapid growth of artificial intelligence (AI) has prompted governments around the world to grapple with how best to regulate the technology and strike a balance between safety and innovation. As AI systems become increasingly integrated into industries ranging from healthcare and finance to law enforcement and education, the need for regulatory frameworks that ensure safety, fairness, and accountability is urgent.

Globally, approaches to AI regulation vary significantly. In the United States, federal efforts to regulate AI have been limited, with most initiatives focusing on voluntary guidelines and ethical principles rather than binding legislation. The U.S. government has largely relied on frameworks like the National Institute of Standards and Technology’s (NIST) AI Risk Management Framework, which encourages companies to adopt best practices for AI development without mandating compliance. At the state level, however, initiatives such as California's Senate Bill 1047 (SB 1047) represent more aggressive attempts to regulate AI, especially powerful, large-scale AI models. The bill introduces stringent requirements for the safety testing and accountability of advanced AI systems, sparking debate over its potential impact on innovation and the tech industry.

In contrast, the European Union has taken a more comprehensive and proactive approach with the introduction of the EU AI Act, which is set to become the world’s first significant legislation regulating AI. The EU AI Act categorizes AI systems based on their risk levels, from minimal to unacceptable, and imposes varying degrees of regulation accordingly. High-risk AI systems, such as those used in critical infrastructure, education, and employment, are subject to stringent requirements, including transparency, data quality, and human oversight. The act also outright bans certain AI practices deemed too dangerous, such as social scoring by governments.
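
To make that risk-based structure concrete, the short Python sketch below encodes the act's four tiers. The tier names follow public summaries of the EU AI Act, but the example systems and the simple deployment check are illustrative assumptions, not the act's legal text.

```python
from enum import Enum

class RiskTier(Enum):
    """Simplified, illustrative model of the EU AI Act's risk tiers."""
    MINIMAL = "minimal"            # e.g., spam filters: largely unregulated
    LIMITED = "limited"            # e.g., chatbots: transparency obligations
    HIGH = "high"                  # e.g., hiring or education tools: strict
                                   # transparency, data-quality, and
                                   # human-oversight requirements
    UNACCEPTABLE = "unacceptable"  # e.g., government social scoring: banned

def may_deploy(tier: RiskTier) -> bool:
    """Unacceptable-risk systems are prohibited outright; every other tier
    may be deployed subject to its obligations."""
    return tier is not RiskTier.UNACCEPTABLE

print(may_deploy(RiskTier.HIGH))          # True, with strict requirements
print(may_deploy(RiskTier.UNACCEPTABLE))  # False: banned practice
```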

The EU’s approach reflects a precautionary principle, aiming to protect citizens from the potential harms of AI while fostering innovation within a well-defined legal framework. This contrasts with the more fragmented regulatory landscape in the U.S., where a patchwork of state laws and federal guidelines creates a complex environment for companies operating across multiple jurisdictions.

In Asia, countries like China are also moving quickly to regulate AI, though with a different focus. China’s regulations are often centered on state control and national security, emphasizing the government’s role in overseeing AI development and use. For instance, China’s new AI guidelines require companies to undergo security reviews before deploying AI systems and to ensure that these systems align with the country’s core socialist values.

These global variations in AI regulation highlight the challenge of balancing innovation with the need for oversight. As AI continues to evolve, so too will the regulatory approaches of governments around the world, with significant implications for the future of technology and society. The ongoing debates in places like California, the EU, and China underscore the complexity of crafting regulations that protect the public without stifling technological progress.

California Senate Bill 1047

California Senate Bill 1047 (SB 1047), titled the "Safe and Secure Innovation for Frontier Artificial Intelligence Models Act," is aimed at regulating advanced AI models in the state. The bill primarily targets "frontier AI models," which are defined as highly advanced AI systems requiring significant computational power and financial resources to develop. Here’s a summary of its key provisions:

  • Scope and Definitions:

    • The bill applies to AI models trained with computing power exceeding 10²⁶ floating-point operations and with a development cost of over $100 million, covering models significantly more powerful than current AI systems (the threshold logic is sketched in the short code example after this list).

  • Safety and Accountability Requirements:

    • Developers of covered AI models must implement rigorous safety testing and independent third-party evaluations to assess and mitigate potential harms.

    • Detailed reports on the design, functionality, and intended use of these AI systems must be submitted to a newly established regulatory body.

  • Employee Protections:

    • The bill introduces protections for employees, allowing them to report safety concerns related to AI models without fear of retaliation. These concerns can be reported directly to the California Attorney General or the Labor Commissioner.

  • Regulatory Oversight:

    • SB 1047 establishes the Frontier Model Division within the California Government Operations Agency to oversee compliance, provide guidance, and update regulatory thresholds as AI technology evolves. This division will be responsible for revising the definition of "covered models" annually.

  • Provisions for Open-Source and Smaller Developers:

    • The bill sets a threshold to protect smaller developers, specifically those fine-tuning open-source models. Only models fine-tuned at a cost exceeding $10 million are subject to the bill’s requirements, thereby excluding most small-scale operations.

  • Industry Impact and Opposition:

    • While the bill has garnered support from AI safety advocates, it has faced opposition from major tech companies and industry groups, who argue that it could stifle innovation, particularly in the open-source community. The concern is that developers could be held liable for unintended harms caused by modifications to their models by third parties.
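
To make the coverage thresholds above concrete, here is a minimal Python sketch of the bill's two tests as summarized in this post. All constant and function names are hypothetical; the bill itself defines these thresholds in legal text, not code.

```python
# Illustrative sketch of SB 1047's "covered model" thresholds as summarized
# above; names and structure are assumptions, not language from the bill.

TRAINING_COMPUTE_THRESHOLD_FLOPS = 1e26    # > 10^26 floating-point operations
TRAINING_COST_THRESHOLD_USD = 100_000_000  # > $100 million to develop
FINE_TUNE_COST_THRESHOLD_USD = 10_000_000  # > $10 million to fine-tune

def is_covered_model(training_flops: float, development_cost_usd: float) -> bool:
    """A newly trained model is covered when it exceeds both the compute
    threshold and the development-cost threshold."""
    return (training_flops > TRAINING_COMPUTE_THRESHOLD_FLOPS
            and development_cost_usd > TRAINING_COST_THRESHOLD_USD)

def is_covered_fine_tune(fine_tune_cost_usd: float) -> bool:
    """Fine-tuning an existing model (e.g., an open-source release) falls
    under the bill only when the fine-tuning cost exceeds $10 million."""
    return fine_tune_cost_usd > FINE_TUNE_COST_THRESHOLD_USD

# Example: a model trained with 5 x 10^25 FLOPs for $40 million is not covered,
# while a $12 million fine-tune of an open-source model would be.
print(is_covered_model(5e25, 40_000_000))  # False
print(is_covered_fine_tune(12_000_000))    # True
```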

SB 1047 takes a proactive approach to AI regulation, balancing innovation with public safety, and sets a precedent that could influence AI legislation in other states and at the federal level.

Tech Companies Object

Tech companies like OpenAI and Meta have raised several key objections to SB 1047, focusing on the potential negative impacts on innovation, particularly within the open-source AI community. Here are the main concerns:

  • Impact on Open-Source Development:

    • Liability Concerns: Tech companies worry that the bill could hold developers legally responsible for any harmful outcomes resulting from modifications made to open-source AI models by third parties. Since open-source models are often adjusted and fine-tuned by external developers, companies like Meta argue that this could discourage the release of such models, stifling innovation and collaboration within the open-source community.

  • Stifling Innovation:

    • Regulatory Burdens: The bill introduces stringent regulatory requirements, including safety testing, third-party evaluations, and detailed reporting obligations. Companies like OpenAI and Meta are concerned that these requirements could impose significant financial and operational burdens, particularly on smaller developers or startups, thereby stifling innovation and slowing down the development of advanced AI technologies.

    • Thresholds for Regulation: The bill's thresholds for what constitutes a "covered model" (e.g., models requiring over $100 million to develop) have raised concerns about future revisions that could lower these thresholds, potentially broadening the scope of regulation to include more companies over time.

  • Conflict with Federal Regulations:

    • Regulatory Overlap: There is concern that state-level regulations like SB 1047 could conflict with or duplicate future federal regulations, leading to a fragmented regulatory landscape. This could create additional compliance challenges for companies operating across multiple states or at a national level.

  • Potential Chilling Effect:

    • Unintended Consequences: Critics, including those from OpenAI and Meta, argue that the bill could have a chilling effect on AI development by creating legal and regulatory risks that may discourage companies from pursuing certain types of research or from releasing innovative AI models to the public.

These objections highlight the tension between the need for AI safety and the desire to maintain an environment conducive to innovation and technological advancement.

Concerns from the U.S. Congress

Several U.S. Congressional Democrats, including Representatives Ro Khanna, Zoe Lofgren, and Nancy Pelosi, expressed significant concerns about SB 1047 in a letter to Governor Gavin Newsom, urging him to veto the bill. Their objections align with those raised by major tech companies like OpenAI and Meta and focus on the following key points:

  • Potential to Stifle Innovation: The Congressional representatives argue that SB 1047 could create an uncertain legal environment that might drive AI developers out of California. This concern is especially acute for companies involved in developing open-source AI models, which could be negatively impacted by the bill’s liability provisions. The letter suggests that this could lead to a chilling effect on innovation, as companies might hesitate to release or fine-tune AI models due to the fear of being held legally responsible for unintended harmful outcomes.

  • Economic Risks with Limited Public Safety Benefits: The letter points out that the bill might impose unnecessary risks on California's economy with minimal public safety benefits. The representatives believe that the bill focuses too much on extreme, hypothetical risks associated with advanced AI models, while not adequately addressing more immediate concerns such as the spread of deepfakes and disinformation.

  • Preference for Federal Regulation: The Congressional Democrats advocate for a federal approach to AI regulation rather than state-level legislation, arguing that AI is a national issue requiring consistent, nationwide regulation. They express concern that California's bill could conflict with future federal laws and create a fragmented regulatory environment.

  • Impact on Open-Source AI: There is a specific concern that SB 1047 could harm the development of open-source AI models. OpenAI and Meta, in particular, have argued that the bill could force companies to either limit their involvement with open-source projects or exit the state altogether, due to the legal risks associated with the misuse of these models by third parties.

These objections reflect a broader concern that while AI needs regulation, the approach taken by SB 1047 might be too heavy-handed and could hinder the growth of AI innovation in California, a critical hub for technology development.

Michael Fauscette

Michael is an experienced high-tech leader, board chairman, software industry analyst, and podcast host. He is a thought leader and published author on emerging trends in business software, artificial intelligence (AI), generative AI, digital-first and customer experience strategies, and technology. As a senior market researcher and leader, Michael has deep experience in business software market research, starting new tech businesses, and go-to-market models in large and small software companies.

Currently Michael is the Founder, CEO, and Chief Analyst at Arion Research, a global cloud advisory firm; an advisor to G2; Board Chairman at LocatorX; and a board member and fractional chief strategy officer for SpotLogic. Formerly the chief research officer at G2, he was responsible for helping software and services buyers use the crowdsourced insights, data, and community in the G2 marketplace. Prior to joining G2, Mr. Fauscette led IDC’s worldwide enterprise software application research group for almost ten years. He also held executive roles with seven software vendors, including Autodesk, Inc. and PeopleSoft, Inc., and five technology startups.

Follow me @ www.twitter.com/mfauscette

www.linkedin.com/mfauscette

https://arionresearch.com