Disney sues Midjourney

Building it ethically is easy. License the content if you don't own it. Pay your darned royalties, and the basic ethical consideration goes away.
This is basically it. It really is that simple. Introduce legislation which says that AIs must be trained on correctly licensed data, and voila!

I mean, sure, some people will ignore the law, as always--but that isn't a reason not to have laws against things, and having the law means there is a clear path to resolution.
 


Building a system for identifying, contacting, negotiating with and ultimately paying copyright holders would certainly not be "easy."
That sounds like their problem, not ours.

And I think that is where we'll end up. It won't be as bad as you think; new types of agencies will negotiate with studios and publishing houses and compile and license data to the AI companies. A new infrastructure will gradually emerge. Unions, client agencies, publishing houses, regulatory bodies, licensing boards, all these organisations emerged to resolve various "difficult" problems over the centuries. But the difficult licensing problem isn't that difficult--it's not rocket science, and industries will adapt to accommodate it.

An AI company will license one or more data sets from an agency, which itself will have gathered and compiled that data set by licensing the content from other companies, some of which will deal with the owned IP of entire studios, others of which will deal with individual artists. It'll all be one massive filtering system; lots of people will get rich being the middlemen, but it's very doable. And the end artist gets royalties which have filtered down through the system. All the AI company did was find the dataset it needed online and click "purchase".
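Purely to make the "filters down through the system" bit concrete, here's a toy sketch; the party names and percentage cuts are made up for illustration, not a proposal for how the actual splits would work:

```python
# Toy sketch of the "filtering" idea above (not a real pricing model).
# All party names and percentage cuts are hypothetical illustrations.

def distribute_royalty(payment, chain):
    """Pass a licence payment down a chain of (party, cut) middlemen;
    whatever remains after all the cuts reaches the end artist."""
    remaining = payment
    breakdown = []
    for party, cut in chain:
        fee = remaining * cut
        breakdown.append((party, fee))
        remaining -= fee
    breakdown.append(("artist", remaining))
    return breakdown

# Example: an AI company pays 1,000 for a dataset licence; an aggregator
# agency and a publishing house each take a cut before the artist is paid.
for party, amount in distribute_royalty(1000.0, [("aggregator agency", 0.30),
                                                 ("publishing house", 0.40)]):
    print(f"{party}: {amount:.2f}")
```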
 

If we can convince legislatures to enact such regulations. Unfortunately in the US, getting Congress to do anything related to AI has been next to impossible. And now with so much regulatory capture by the tech industry, it is even less likely.
 


As an actual example: many years ago, back before the term "generative AI" was coined, I personally did potential physics doctoral thesis research on using the technology to simulate the data that comes out of particle accelerators, to help design and tune their detectors and data processing software.
With no disrespect, when you were doing your PhD they were probably still using punchcards to run the computers - can this meaningfully be said to be the same technology? (This is an actual question, not an attempted gotcha. Well, the first part is just a needless jab at your immense age, but the second part is a genuine question.)
No ethical issues there - the training data would be the output of previous accelerators, not from humans depending on copyright to make a living off it.
There certainly are ethical ways to obtain training data for many potential applications - it’s very disappointing that it hasn’t been done in most instances.
 

I’m just gonna hope these two sides (AI use of copyrighted material versus the endless copyright holding company) just keep knifing each other in court.
 


At this point, lawsuits like Disney's will not "end AI." Based on worth, OpenAI could just buy Disney and be done with it.
Is OpenAI public? Only public companies can buy other companies through shares. And beyond that, OpenAI needs every penny to remain operational. Its costs are immense. There's not enough to divert to other things like legal costs.
 

At this point, lawsuits like Disney's will not "end AI." Based on worth, OpenAI could just buy Disney and be done with it.
Buying Disney would cost way more than any lawsuit will cost them. Plus, you can’t just turn around and buy companies. You can’t even just turn around and buy a controlling shareholding. It just doesn’t work like that.

And your company’s valuation is not a pile of cash you have available to spend. Again, that’s not how it works.

TL;DR version -- No, OpenAI cannot buy Disney. Not even close.
 

