In summary
A group of academic luminaries formed by Gov. Gavin Newsom last year is earning praise for its AI policy recommendations.
A group of artificial intelligence luminaries convened by Gov. Gavin Newsom issued what is expected to be an influential set of recommendations Tuesday, pushing state lawmakers to bring greater transparency into how AI models are made and operated.
The proposed steps will lead to advances in both innovation and public trust, the academic experts write in their draft report, helping the state balance experimentation with guardrails protecting against AI harms.
Newsom formed the group last fall as he vetoed a prominent bill to regulate AI, arguing the measure would curtail innovation.
Known formally as the Joint California Policy Working Group on AI Frontier Models, the group suggested that lawmakers:
- Encourage companies building advanced AI models to disclose risks and vulnerabilities to developers making their own versions of the models.
- Evaluate advanced AI models using an independent, outside party.
- Consider enacting rules to protect whistleblowers.
- Evaluate the possible need for a system to inform the government when private companies develop AI with dangerous capabilities.
Scott Wiener, the Democratic senator behind the bill Newsom vetoed, praised the report and said it could influence a scaled-down version of his measure called Senate Bill 53.
“The recommendations in this report strike a thoughtful balance between the need for safeguards and the need to support innovation,” he wrote in a statement shared with CalMatters. “My office is considering which recommendations could be incorporated into SB 53, and I invite all relevant stakeholders to engage with us in that process.”
The draft report doesn’t argue for or against any particular piece of legislation currently under consideration, but it could hold heavy influence over the 30 bills to regulate artificial intelligence now before the Legislature. Those measures include roughly half a dozen bills to address how AI raises the cost of goods and others aiming to mitigate how the technology affects the environment, public health, and rising energy rates. Another bill would require businesses to disclose when AI is used to make important decisions about people’s lives. Business groups lobbied heavily against such legislation last session.
The draft report highlighted AI rules on the books in places like Brazil, China, and the European Union. It stated that California’s rules will play a unique and powerful role because of its position as the home of many leading AI companies and research institutions.
“Without proper safeguards…powerful AI could induce severe and, in some cases, potentially irreversible harms,” the draft report reads. “Just as California’s technology leads innovation, its governance can also set a trailblazing example with worldwide impact.”
Members of the public have until April 8 to comment and share feedback before the recommendations are expected to be finalized this summer.

Authors of the report include Mariano-Florentino Cuéllar, president of the Carnegie Endowment for International Peace; Jennifer Tour Chayes, dean of the UC Berkeley College of Computing, Data Science, and Society; and Fei-Fei Li, former chief AI scientist at Google Cloud and creator of a pioneering AI project called ImageNet. Li is often called a godmother of AI, and her views have been sought out by members of Congress and the Biden administration.
The group focused on “frontier models,” the most cutting-edge forms of artificial intelligence, such as OpenAI’s ChatGPT, which dates from late 2022, and R1, a newer model from Chinese company DeepSeek. California-based companies including Anthropic, Google, and xAI are also developing advanced general-purpose AI systems.
Frontier models promise improved efficiency, such as when they help teachers grade writing assignments, but they also carry risks. They can be used by scammers, enable the spread of disinformation, and perpetuate bias. Hype and fear surrounding frontier models have led members of the public to consider whether AI could play a role in human extinction.
The draft report is one of several documents produced by the state of California in recent years, including one about the benefits and risks of generative AI in late 2023 and another about the impact of generative AI on vulnerable communities in late 2024. Neither report was mentioned by the working group.
A representative of tech business interests praised the report. Megan Stokes, state policy director for the Computer & Communications Industry Association, said the working group took great care to survey existing laws that protect Californians from potential AI harms and to review existing regulatory authorities, helping to ensure that new legislation is not duplicative. Stokes’ group opposes a bill that would require developers to disclose their use of a creator’s copyrighted material before training an AI model. Copyright infringement is a current risk acknowledged by the working group.
Jonathan Mehta Stein, chair of the advocacy group California Initiative for Technology and Democracy, said that while the working group’s draft report contains policy recommendations, it primarily urges that California wait and see, leaving lawmakers with little direction on the best policies to pursue. That conclusion risks stunting the momentum of current legislation aimed at tackling known, documented harms, he added. His group, which cosponsored three bills last year to protect voters from AI, wants the working group to add more actionable legislative recommendations to its final report.
“If California wants to lead on AI governance and on building a digital democracy that works for everyone, it must act and act now,” Stein said in a written statement. “California sitting on our hands because industry is uncomfortable with regulation does not mean industry will be free of regulation. Regulation is coming. Inaction by California simply means other states will pass regulations and set the terms of AI governance, and California will cede its leadership.”
Given the rate at which the technology is changing, the draft report is right to point out that the window to regulate AI may be closing quickly, said Koji Flynn-Do, co-founder of the Secure AI Project, a group established in December 2024 that previously supported the Wiener AI bill that Newsom vetoed. He said it is heartening to see the report address safety and security protocols and testing to mitigate risks, alongside a letter by employees of frontier AI companies calling for whistleblower protections.
“Some people will say that it goes too far, some people say that it doesn’t go far enough, and I think there’s something for people on both sides,” he said.
The draft report “seems like progress to me,” said Daniel Kokotajlo, who also endorsed the AI safety bill proposed by Wiener last year. He is a signatory of a letter written by current and former employees of companies building frontier models that calls for whistleblower protections and an adverse event reporting system. The righttowarn.ai letter is cited by the working group in the draft report.
“I want to see more specific proposals, like these companies should do this and these regulations should be passed, but it’s still progress to be talking about these things at all.”