trappedslider
What is deemed to be the difference between an AI learning from previous works versus a human learning to write or draw from studying previous works?
Machines do not learn. If people copy somebody else's work, that is called plagiarism.
Neural networks lack subjectivity. They don't feel, have wants/needs, frustrations/experiences/traumas. Training doesn't change the way the models 'perceive' new information.
Thank you for linking to the actual proposed law! I don't really understand why you would bother running thought experiments about a bill and the impact it might have without reading it.
It's pretty short, take a look at it.
"If you make it available to Californians"
So basically it's saying that if you train an AI on copyrighted materials, you have to list the copyrighted materials you used. That seems like a reasonable thing to do.
This is about generative AI. AI for research works differently and is trained on raw data (pure "facts"), which isn't copyrightable. With this on the books, no big company can stop a small competitor any more than they already can through the very broken patent system.

To clarify: I'm neither pro- nor anti-AI. It's too early (for me) to tell how I should feel about it. But I am intrigued to explore the ethics of a technology that has had such a big impact in a short amount of time, as well as one that borders on issues related to transhumanism and what exactly it means to think and learn.
If AI reaches a point where it has some capacity (even a limited one) to think and feel, I wonder how that might change the conversation.
Regarding the proposed law, I am curious whether it extends to using AI for things such as astrophysics or medical research. Could a law meant to "protect" people become a pretext for a pharmaceutical corporation to sue a smaller competitor and block a cancer treatment from reaching the public? I haven't yet read the law; this is simply an initial thought based on how I have seen some other laws get twisted into being used differently than intended.
Regarding the larger AI conversation, it's not always easy to parse where "AI" begins and ends compared with digital tools in general. In the music industry, there are programs used to construct music and to autotune voices. Is it more ethical to use digital tools to create the illusion that a human being is a skilled musician than it is to fabricate a digital artist entirely? I'm not sure.