California bill (AB 412) would effectively ban open-source generative AI


Do I need to pay an artist if I train myself on their work?

Depends on where you are, I guess.

If you're at the Louvre, you can apply for an easel and a seat to copy a painting without paying anything. If the work you copy isn't in the public domain, you can't resell your copy, but you can make one as often as you want (actually, they require you to finish within 3 months, but that's to stop people keeping their seat forever), provided the copy is 20% larger or smaller than the original to avoid any potential confusion. On the Internet, the same rule would apply: if, after your day spent with a brush and an easel, you look at a computer image from your hotel room in front of the Louvre and try to reproduce it to train yourself, you'll be making a private-use copy, which is outside the scope of IP rights.
 

I've done that!

It might be worth noting that historically, many artists did learn by paying an established artist to teach them.
 

Do I need to pay an artist if I train myself on their work?
Spoken like someone who has never tried to do that. Just a really pointless gesture towards an imagined argument that you're apparently not willing to actually make. Like, either make the argument or don't, imho.

Learning from an artist as a human takes significant interest, skill, and commitment. I'm the kind of artist who is naturally good at replicating the art and style of other artists rather than having a distinct style of my own (though I lack the sheer discipline and rigour required to be a true forger), and even for me it isn't trivial. What AI art does isn't really learning the style either - it's replicating the most superficial elements of it and spamming out copies in a "million monkeys" kind of way. It's no accident that people who like to use it this way tend to prefer art with a necessarily simple style (like animation), or where some degree of genericness is expected.

On the specific article, the EFF is both right and wrong here.

The idea that what companies like OpenAI have done is "probably" legitimate "fair use" is laughable, and the EFF's claim that it is, is frankly despicable; it undermines their fundamental mission through obvious dishonesty and hypocrisy (they're prejudging things far worse than California is). The reality is that US courts are likely to treat this as a legal loophole - neither fair use (which it definitely isn't; it simply doesn't fit the definition) nor necessarily a clear-cut copyright violation. The EFF is taking the fundamentally dishonest tack a lot of AI boosters have: that something is either fair use OR a copyright violation. That's simply not what fair use is - it's a false dichotomy that tries to pass off "we don't yet have a law for this novel harm" or "that's a legal loophole/grey area" as "it's totally okay and approved of!"

They're also wrong to suggest California is running ahead of the courts in the way they claim, because what California is actually doing here is essentially forcing AI companies to keep records, so that in eventual future court cases effective action can be taken. Companies like OpenAI intentionally avoided keeping records of what exactly they took/borrowed/stole so that they could evade legal consequences merely by claiming they "didn't know". Regardless of how courts rule, forcing companies to keep records of their copyrighted training materials is a positive step. If every court in the land rules that AI training on copyrighted materials is perfectly legal (unlikely), then this law can (and probably would) simply be repealed. It's not written in stone until the end of time!
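
To make the record-keeping point concrete: "keeping records" here can be as lightweight as a manifest logging a hash, source, and license note for each ingested document. A minimal sketch in Python - the schema, field names, and function are purely my own illustration; AB 412 doesn't prescribe any particular format:

import hashlib
import json
from datetime import datetime, timezone

def record_training_document(manifest_path, text, source_url, license_note):
    # One provenance entry per ingested document, appended as JSON Lines.
    # The content hash lets a rights-holder later verify whether their
    # specific work was in the training set.
    entry = {
        "sha256": hashlib.sha256(text.encode("utf-8")).hexdigest(),
        "source_url": source_url,
        "license_note": license_note,
        "ingested_at": datetime.now(timezone.utc).isoformat(),
    }
    with open(manifest_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

record_training_document(
    "training_manifest.jsonl",
    "Example document text...",
    "https://example.com/article",
    "unknown - flagged for review",
)

The point being: this costs almost nothing to do at ingestion time, which is exactly why "we didn't keep records" reads as a choice rather than a burden.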

The EFF is right that this will, in practice, make it much harder to break into the LLM space, but again its argumentation is disreputably dishonest. I'm genuinely saddened to see them throw themselves in front of a bullet for companies, many of which actively oppose the EFF's goals. Why? Because they pretend this is about "mom and pop" AI startups with, like, 2 people (their actual example!), which is laughable gibberish. That's not a thing. The problem isn't finding material to legally train models on; the problem is the huge energy cost of running AI server farms, which no "2-person startup" can afford anyway.

The real problem here, and the reason OpenAI etc. don't really oppose this, is that it's not retroactive. So the companies that already stole everything they could (including stuff that WAS nailed down!) are barely affected.
 