California bill (AB 412) would effectively ban open-source generative AI
What is deemed to be the difference between an AI learning from previous works versus a human learning to write or draw from studying previous works?
Neural networks lack subjectivity. They don't feel; they have no wants or needs, no frustrations, experiences, or traumas. Training doesn't change the way the models 'perceive' new information.

On the other hand, human experiences change the way we connect with a work. Sometimes biology can change the way we produce: women see more colors than men, Monet was a tetrachromat, some people have perfect pitch.

Age, circumstances, health, mood, even the time of day affect the way we interact with media. We aren't procedurally copying and storing information. Everything we produce is a reflection of our experiences, wants, and needs.

AI does none of that. Even animals do some of it, and they have been deemed unworthy of holding copyright.
 

AI is just copying numerical figures in a database and then modifying them. It sees and knows nothing. AI people want us to marvel at this amazing new thing, though those with science and engineering degrees see that it is just a math trick. The one CS person in my group has had no math classes, while for me the math grew more rigorous every year, from statics to thermodynamics. If some billionaire oligarch is peddling a product that is actively detrimental, such as AI "art", it is perfectly normal and good to reject it.
 

To clarify: I'm neither pro nor anti-AI. It's too early (for me) to tell how I should feel about it. But, I am intrigued to explore the ethics of a technology that has had such a big impact in a short amount of time, as well as one which borders upon issues related to transhumanism and what exactly it means to think & learn.

If AI reaches a point where it has some capacity (even a limited one) to think and feel, I wonder how that might change the conversation.

In regards to the proposed law, I am curious if it extends to using AI for things such as astrophysics or medical research. Could a law meant to "protect" people be a reason why a pharmaceutical corporation sues a smaller competitor to block a cancer treatment from the public? I haven't yet read the law; this is simply an initial thought based upon how I have seen some other laws get twisted into being used differently than intended.

In regards to the larger AI conversation, it's not always easy to parse where "AI" begins and ends in comparison to digital tools in general. In the music industry, there are programs used to construct music and to autotune voices. Is it deemed more ethical to use digital tools to create the illusion that a human being is a skilled musician than it is to completely fabricate a digital artist? I'm not sure.
 


I don't really understand why you would bother running thought experiments about a bill and the impact it might have without reading it.
It's pretty short, take a look at it.
Thank you for linking to the actual proposed law!

So basically it's saying that if you train an AI on copyrighted materials, you have to list the copyrighted materials you used. That seems like a reasonable thing to do.
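To make the disclosure requirement concrete, here is a minimal sketch of the kind of record-keeping it would imply: a developer logs each copyrighted work used in a training run and publishes the list. All names, fields, and example entries here are illustrative assumptions, not taken from the bill text.

```python
# Hypothetical sketch of a training-data disclosure manifest.
# The field names and example works are assumptions for illustration only.
from dataclasses import dataclass, asdict
import json

@dataclass
class TrainingMaterial:
    title: str          # name of the copyrighted work
    rights_holder: str  # who holds the copyright
    source: str         # where the work was obtained

def build_disclosure(materials):
    """Serialize the list of copyrighted training materials to JSON."""
    return json.dumps([asdict(m) for m in materials], indent=2)

materials = [
    TrainingMaterial("Example Novel", "Jane Author", "https://example.com/novel"),
    TrainingMaterial("Example Painting", "John Painter", "https://example.com/art"),
]
disclosure = build_disclosure(materials)
print(disclosure)
```

The point is only that "list what you trained on" is a bookkeeping task, not a technical barrier; the hard part in practice would be tracking provenance for web-scale datasets, which the sketch above glosses over.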
 

This is about generative AI. AI for research works differently and is trained on raw data (pure "facts"), which isn't copyrightable. With this on the books, no big company can stop a small competitor any more than they already can through the very broken patent system.

Proponents of gen AI claim it is only a tool, but it isn't one in the same way other tools are (even analytical AI), because its black-box nature makes it extremely hard to control. The skill ceiling is too low. Other tools still need a human in the control chair, and that human needs a degree of skill. Also, those tools aren't made by copying finished works.

Like I keep repeating, the biggest point of opposition is the rampant piracy, i.e. the use of copyrighted materials without consent. There are other objections, like the environmental impact and the harm to users' minds, but outside of that I don't actually mind it. Purely synthetic AI voice is actually atrocious, and not a menace to real people.
 