Welcome to the CAVEAT Weekly Newsletter, where we break down some of the major developments and happenings occurring worldwide when discussing cybersecurity, privacy, digital surveillance, and technology policy.
At 1,600 words, this briefing is about a 7-minute read.
At a Glance.
- Governor Newsom vetoes California AI bill.
- EU begins drafting the AI Act’s code of practice.
Governor Newsom vetoes major California AI bill.
The News.
On Sunday, California Governor Gavin Newsom vetoed SB 1047, otherwise known as the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act. Governor Newsom stated he vetoed the bill because its central focus on large-scale models could “give the public a false sense of security about controlling” artificial intelligence (AI). He continued, warning that “smaller, specialized models may emerge [that are] equally or even more dangerous than the models targeted by SB 1047,” and closed his statement by saying that “while well-intentioned, SB 1047 does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data.” While this veto represents a setback for efforts to regulate AI technologies, Governor Newsom emphasized that he still recognizes the importance of passing legislation to address AI and minimize its potentially harmful effects on society.
The bill’s author, State Senator Scott Wiener, wrote that “the veto is a missed opportunity for California to once again lead on innovative tech regulation - just as we did around data privacy and net neutrality - and we are all less safe as a result.”
The Knowledge.
Originally passed in August by the California State Legislature, SB 1047 sat with Governor Newsom for a month before he vetoed it a day before its September 30th deadline. For context, SB 1047 would have required major AI models to undergo stronger safety testing before public release and would have held developers liable for any severe harm caused by their models. Notably, this liability would have applied only to models costing more than $100 million to train, a threshold no model has yet met.
However, since the bill was introduced and passed by the State Legislature, it has drawn mixed opinions from key stakeholders, developers, and political figures over whether it would establish critical guardrails or stifle AI development. Several notable AI developers, such as OpenAI, Google, and Meta, expressed concerns that the legislation would unfairly target developers rather than hold those who abuse AI accountable. Many of these critics also argued that regulations of this magnitude should be defined at the federal level rather than the state level. Countering the bill's critics, developers like Anthropic, as well as former employees of leading AI companies, maintained that the bill's benefits would have outweighed its risks. Nonetheless, with this veto, it is unlikely that any major AI legislation will be passed in the US over the coming months.
The Impact.
With Governor Newsom’s veto, the effort to pass comprehensive AI legislation has taken a major step back. Though highly contentious, SB 1047 was the only major piece of AI legislation at the state or federal level with a notable chance of passage. Despite this setback, California has passed several smaller AI laws over the past few weeks that more clearly define who is accountable for AI abuses and how citizens are to be protected.
AI developers and key stakeholders should expect momentum to pass major AI legislation at both the state and federal levels to grow. Whatever new legislation is passed will likely incorporate much of the feedback raised in response to SB 1047. Lastly, people should continue to monitor proposed legislation to understand what restrictions and requirements will likely be imposed on developers and what new standards will be instituted to better protect AI users and notable figures from AI abuse.
EU picks experts to steer AI compliance rules.
The News.
On Monday, the European Union (EU) selected several AI experts who will outline the various measures that businesses will use to comply with the incoming AI Act regulations. This announcement comes as the European Commission plans to hold its first plenary meeting of expert working groups that will be focused on creating the AI Act’s “code of practice.” This code of practice “aims to facilitate the proper application of the AI Act’s rules for general-purpose AI models, including transparency and copyright-related rules, systemic risk taxonomy, risk assessment, and mitigation measures.” Notably, this code of practice will not be legally binding, but it will provide firms with a checklist that they can use to demonstrate their compliance with EU regulators.
After their Monday meeting, the working groups will meet three more times before their final meeting in April 2025, where they will present the code of practice to the European Commission for approval.
The Knowledge.
While the working groups have yet to release details on what their code of practice will explicitly outline, the code will be used to help companies ensure their compliance with the EU’s AI Act.
For context, the EU’s AI Act is the first major global AI regulation. The Act centers on:
- Minimizing the major risks associated with AI systems.
- Increasing transparency requirements for AI developers.
- Supporting greater innovation to develop and train AI models before their release.
Although the EU Council approved the law in May 2024, the Act will go into effect progressively over the next two years. A core aspect of the Act involved establishing this code of practice to help AI developers better understand and comply with its regulations before they fully take effect in 2026. Aside from the code of practice arriving in April 2025, another notable provision taking effect later that year will mandate that general-purpose AI systems comply with transparency requirements.
The Impact.
As the EU’s AI Act continues its rollout over the next two years, AI developers and businesses planning to use AI models within the EU should closely monitor the progress of these working groups to understand what the final form of this code of practice will entail. While, as previously mentioned, this code will not be legally binding, it will function as a key standard that regulators use to determine whether a business is complying with the AI Act.
While these regulations will not impact AI developers or EU citizens for several months, these steps are seen as key actions that will not only better secure AI systems but also go a long way toward protecting citizens across the EU.
Highlighting Key Conversations.
In this week’s Caveat Podcast, our team sat down with Dmitri Alperovitch, author and Chairman of the Silverado Policy Accelerator, to discuss his book, “World on the Brink: How America Can Beat China in the Race for the Twenty-First Century.” Throughout this conversation, Ben Yelin talked with Dmitri as he laid out his argument for why Xi Jinping is preparing to conquer Taiwan in the coming years and what the fallout would be if that came to pass. During this episode, you will hear Alperovitch explain how we can play to our strengths, manage our weaknesses, and leverage our global position over the coming years.
Like what you read and curious about the conversation? Head over to the Caveat Podcast for the full scoop and additional compelling insights. Our Caveat Podcast is a weekly show where we discuss topics related to surveillance, digital privacy, cybersecurity law, and policy. Got a question you'd like us to answer on our show? You can send your audio file to caveat@thecyberwire.com. Hope to hear from you.
Other Noteworthy Stories.
US reaches $31.5 million settlement with T-Mobile over data breaches.
What: T-Mobile and the Federal Communications Commission (FCC) have reached a $31.5 million settlement to resolve the FCC’s probe into the company’s numerous, significant data breaches.
Why: On Monday, T-Mobile agreed to pay a $15.75 million civil penalty and to invest an additional $15.75 million over two years to strengthen its cybersecurity program. For context, this probe was initially launched after T-Mobile suffered several data breaches from 2021 to 2023 impacting millions of current and former customers. With this announcement, the FCC stated that T-Mobile will work to address its “foundational security flaws, work to improve cyber hygiene, and adopt robust modern architectures, like zero trust and phishing-resistant multi-factor authentication.” T-Mobile also released a statement saying that the company takes “[their] responsibility to protect our customers’ information very seriously” and adding that it has “made significant investments in strengthening and advancing our cybersecurity program and will continue to do so.”
Judge partially dismisses FTC’s antitrust lawsuit against Amazon.
What: A federal judge has dismissed a portion of the Federal Trade Commission’s (FTC) antitrust lawsuit against Amazon, potentially removing key components of the case.
Why: On Monday, District Judge John Chun ruled to dismiss a portion of the lawsuit; however, the details of what was dismissed remain under seal. The ruling comes after Amazon asked the judge last December to dismiss the FTC’s lawsuit, stating that its alleged anticompetitive behaviors are “common retail practices that presumptively benefit consumers.”
For greater context, this lawsuit emerged last September when the FTC and seventeen state attorneys general sued Amazon, accusing the company of engaging in anticompetitive practices that raised prices for shoppers and extracted excessively high fees from sellers. The agency additionally alleged that Amazon used tactics that deterred retailers from offering lower prices and made it more expensive for sellers to offer their products on other platforms.