The Artificial Intelligence (AI) industry has soared over the past decade, with an expected market size of $1,339 billion by 2030. Silicon Valley, located in the San Francisco Bay Area, is the epicenter of technological research and innovation, making California the principal state for AI development and the one most in need of AI safety regulation. Yet many are wary of the rapid pace of AI development: approximately 75 percent of AI consumers are concerned about misinformation, and 77 percent worry about job loss. Although convenient and entertaining, unregulated AI can pose serious threats to society as a whole.
On September 29, 2024, California Governor Gavin Newsom vetoed the highly contentious Senate Bill-1047 (SB-1047), also known as the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act. The bill sought to enforce safety and security measures around AI development.
Requirements for Developers
Senate Bill-1047 requires developers to follow a number of regulatory standards. For instance, developers must demonstrate the ability to promptly enact a full shutdown and have a written safety and security protocol before training an AI model. The bill also requires full transparency of developers’ safety and security protocols, including any revisions and updates made. Developers must also acquire annual third-party audits and state certifications. These protocols, audits, and certifications must be readily accessible to the Attorney General.
SB-1047 also provides whistleblower protections for employees who report concerns that a developer's AI model poses an unreasonable risk of critical harm. This is crucial for holding firms accountable.
The Attorney General could then bring civil action against violators of the bill, with fines of up to $10 million.
Opposition
Opponents of SB-1047 include those who believe the bill will hinder AI innovation, such as former House Speaker Nancy Pelosi and San Francisco Mayor London Breed. Lobbyists from tech giants like Google and Meta, along with some scholars, have also advocated for technological freedom.
Certain academics also criticize the bill's effects on AI development. For instance, Zoey Jiang, a professor of business technologies at Carnegie Mellon University, believes the bill will hinder smaller tech startups' ability to innovate in AI because they lack the data and computational power necessary to update their current models.
Although this is a possibility, AI development is charting unknown territory, with complexities and risks that could be catastrophic if left unregulated. Proactive measures need to be taken to ensure the safety of consumers, and Senate Bill-1047 is a step toward that goal.
Support
Over 77 percent of California voters support SB-1047. The support is bipartisan, with 68 percent of Democrats, 53 percent of Republicans, and 58 percent of independent voters agreeing with the bill's proponents. The bill has also drawn endorsements from tech figures such as Tesla CEO Elon Musk, who has stated, "I have been an advocate for AI regulation, just as we regulate any technology that is a product that is a potential risk to the public."
Implications
With AI's increasing presence in social media, politics, and other social domains, Gen Z's future is particularly at risk. For instance, research finds that Gen Z is more vulnerable to "deepfakes" (AI-generated imagery) because of the significant time its members spend in the digital ecosystem. Over time, without regulation, AI could immensely compromise the information integrity of media platforms. This is just one of the risks that unregulated AI poses to society.
We need to establish clear standards and safety protocols in legislation to protect consumers as well as our digital future.
Acknowledgment: The opinions expressed in this article are those of the individual author.