No we aren't. The images aren't one step from Pinterest, as the people claiming theft & similar keep saying. When the discussion goes "theft, stealing, unethical" > "well actually" > "nope, never happened, the sky is purple," the nuances aren't something that has room for exploration.

Hang on, are we just ignoring that 90% of uploads to Pinterest aren't done by the image owner?
Like, I can find most of the D&D 3E and Pathfinder artwork on there, and I can sure tell you it wasn't uploaded by WotC or the Pathfinder publisher. Pointing to a TOS is kind of pointless when what people know about Pinterest is that it's basically a piracy website, if what you're specifically pirating is images.
Since you bring it up though... Pinterest is only one of the sites images were pulled from, & the sites + their percentages were listed earlier. Pinterest was 8.5% of the images, if I'm reading it right. The whole dataset was publicly available Common Crawl data.
There were several lawsuits filed, yes, but filing a lawsuit & showing standing or harm are different things, & I don't think any have even progressed to the point of opening arguments (or if they have, it's in a way that is completely obscured from the media). Here is a pretty good article & a relevant quote:
“Unfortunately, I expect a flood of litigation for almost all generative AI products,” Heather Meeker, a legal expert on open source software licensing and a general partner at OSS Capital, told TechCrunch via email. “The copyright law needs to be clarified.”
Content creators such as Polish artist Greg Rutkowski, known for creating fantasy landscapes, have become the face of campaigns protesting the treatment of artists by generative AI startups. Rutkowski has complained about the fact that typing text like “Wizard with sword and a glowing orb of magic fire fights a fierce dragon Greg Rutkowski” will create an image that looks very similar to his original work — threatening his income.
Given generative AI isn’t going anywhere, what comes next? Which legal cases have merit and what court battles lie on the horizon?
Eliana Torres, an intellectual property attorney with Nixon Peabody, says that the allegations of the class action suit against Stability AI, Midjourney, and DeviantArt will be challenging to prove in court. In particular, she thinks it’ll be difficult to ascertain which images were used to train the AI systems because the art the systems generate won’t necessarily look exactly like any of the training images.
State-of-the-art image-generating systems like Stable Diffusion are what’s known as “diffusion” models. Diffusion models learn to create images from text prompts (e.g. “a sketch of a bird perched on a windowsill”) as they work their way through massive training datasets. The models are trained to “re-create” images as opposed to drawing them from scratch, starting with pure noise and refining the image over time to make it incrementally closer to the text prompt.
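To make the article's description concrete, here's a toy sketch (my own illustration, not any real model's code) of that "start from pure noise & refine it toward the prompt" loop. The `target` array stands in for whatever image the text prompt implies, & the noise estimate is faked with simple arithmetic where an actual diffusion model uses a trained neural network:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for "the image the text prompt describes" (a real model
# conditions a neural network on the prompt; nothing here is learned).
target = np.full(8, 0.5)

# Step 0: pure noise, exactly as the article describes.
x = rng.standard_normal(8)

for step in range(50):
    # A trained model would *predict* the noise to remove at this step;
    # this toy version just measures the gap to the target directly.
    predicted_noise = x - target
    # Remove a fraction of the predicted noise, nudging the sample
    # incrementally closer to the target each iteration.
    x = x - 0.1 * predicted_noise

print(np.abs(x - target).max())  # gap shrinks toward zero over the steps
```

The point of the toy loop is just the shape of the process: the model never copies an image out of a database, it iteratively denoises random values until they resemble something matching the prompt, which is why "which training images were used" is so hard to pin down from an output.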
That bold underlined bit is important because of how art can be substantially similar & still not be infringing. There is a good writeup on it, complete with an example, here that is very much worth reading. That substantially-similar bar is part of why it probably doesn't matter even if such images were used in training. In short, if any of those lawsuits produce results of some form, then NBC (Magnum P.I.) & Lucasfilm (Indiana Jones) have a rock-solid case against Disney for decades of Chip 'n Dale committing whatever technical legal wrong may have been done by the AI... well, at least assuming either of those two can meet the lofty bar of proving it's not just a meme.