I don't see how it's "ethical AI" when people sold photos to Adobe for its stock photo marketplace, and Adobe then used those photos for a completely different purpose, one that puts the original photographers out of work.
Yes, that is the creators' complaint. The companies' counterpoint is that it is
legal per the specific contract verbiage, so it's fine. It's stuck at that awkward intersection of "technically legal" and "ethically dubious." This isn't a defense of Adobe, mind you. It's only to point out that companies
do have the means to train these AIs with clean datasets - for some definition of "clean" - and they
do have a motive to do so in order to minimize exposure to lawsuits.
Which is just to say that, since someone has succeeded in
legally training an AI on a specific dataset to mitigate questions of fair use, royalties, copyright, etc., there's no
technical reason someone won't
ethically train an AI on better datasets: for example, only on public domain data, only with creator consent, or with a built-in royalty scheme. It really is only a matter of time.
And when they figure that out, that's a problem, because it means these fairly easily resolved IP issues in training vanish, and we're left with just the question of
using the AIs. With the training question settled, something publishers had no control over anyway, the ethical question shifts entirely onto the publisher's shoulders:
Is it
really ethical to use an "ethically trained" AI, even if that still leaves creators with diminished income, or possibly even out of a job?