Man, this bill was obviously written by someone who knows nothing about the technology (as is par for the course with this sort of thing).
I decided to pick apart a few of the bill's points via ChatGPT.
Here's a link to the conversation if anyone would like it.
I'd like to draw your attention to a few specific lines:
(d) “Generative artificial intelligence” or “GenAI” means an artificial intelligence system that can generate derived synthetic content, including text, images, video, and audio, that emulates the structure and characteristics of the system’s training data.
The phrase "derived synthetic content" in this line seems a bit... inaccurate.
What dictates what "synthetic" content is? A picture, a piece of text, or an audio clip either exists or it does not.
As far as I'm aware, there's no middle ground between "synthetic" and "organic" when it comes to media.
And if someone did draw that distinction, anything generated via a computer would be "synthetic" in nature.
And some input from ChatGPT:
- The word synthetic here is philosophically and technically vague. Almost all digital content is "synthetic" in the sense of being constructed by tools.
- There's no legal or engineering standard that distinguishes between "organic" and "synthetic" content. A digital painting made by a human in Photoshop and an image from a generative model are both "synthetic" under any reasonable lens.
- This phrase seems meant to sound ominous or futuristic without actually providing clarity.
And this line:
(a) “Artificial intelligence” or “AI” means an engineered or machine-based system that varies in its level of autonomy and that can, for explicit or implicit objectives, infer from the input it receives how to generate outputs that can influence physical or virtual environments.
A "machine-based system ... that can infer from the input it receives how to generate outputs...." pretty much just sounds like ANY piece of software. Take photoshop, for example. it can "infer" how to alter an image based on "input" via how it is coded.
And ChatGPT's take on it:
- That broad definition ("machine-based system that can infer from input how to generate output...") is so vague that it encompasses everything from Excel macros to recommendation engines.
- This creates serious overreach. For example, would procedural generation in video games count? What about autocomplete in email?
- The language reflects a non-technical or overly cautious legal mindset trying to cover all bases and ending up in a definitional swamp.
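To make that overreach concrete, here's a minimal sketch in Python. The function name and pixel data are made up purely for illustration; the point is that a few lines of ordinary, deterministic code, read literally, receive "input", "infer" how to generate "outputs", and influence a "virtual environment" (an image buffer). No machine learning is involved anywhere.

```python
# A deliberately mundane program that, read literally, fits the bill's
# definition of AI: it receives input and "infers" how to generate outputs
# that influence a virtual environment (here, a grid of pixel values).

def adjust_brightness(pixels: list[list[int]], amount: int) -> list[list[int]]:
    """From the input it receives, generate an output: the same image, brightened."""
    return [[min(255, max(0, p + amount)) for p in row] for p in pixels]

image = [[10, 20], [200, 250]]       # the "input it receives"
print(adjust_brightness(image, 40))  # [[50, 60], [240, 255]]
```

If a brightness slider qualifies under the definition, the definition isn't doing the work a definition is supposed to do.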
And here are a few more closing thoughts from ChatGPT:
You asked whether it's really well-intentioned. That's a fair and important challenge. A few things to consider:
- Optics over effectiveness: Politicians often write tech legislation more to appear proactive than to actually solve problems. It wins headlines and appeals to concerned creatives—even if it’s unworkable.
- Pressure from copyright lobbies: Large media orgs and artist unions are pushing hard to make AI companies liable for scraping public data. The bill may be less about fairness and more about enabling lawsuits and licensing regimes.
- Chilling effect: Whether intentional or not, the bill advances the interests of legacy content holders by making it harder for open-source or indie developers to compete with well-funded incumbents who can afford to license or litigate.
This bill is:
- Technically unrealistic
- Legally questionable (due to federal copyright preemption)
- Potentially innovation-stifling, especially for small devs
- And conceptually confused, using vague or incorrect language that muddies rather than clarifies the regulatory goal
If it passes and is enforced, California could see an exodus of AI development, or a chilling of open, collaborative research in favor of tightly controlled, well-funded corporate models.