While I can readily imagine an ethical system under which using AI would be wrong for people who adhere to it, I really don't understand why you can't fathom a legal system under which AI could function.
1. Have a law that explicitly excludes "AI training" from the things an IP holder can allow or disallow, adding it to the already long list of exceptions (or, depending on how the law is written in the country under consideration, simply don't include it among the limited set of rights exclusively granted to the author). Several countries have already done this, so it's easy to see how.
2. As an editor, ensure that the AI-generated images you select for your end product aren't close enough to an existing artwork to infringe on that particular piece's copyright -- the same check editors already perform when accepting a commissioned work.
With regard to whether one should pay for this -- that is, pay WotC to ask the LLM on one's behalf -- I'd say I wouldn't pay, since I can do it myself. But plenty of people are happy to send money to WotC for taking the trouble of hiring an author, hiring an illustrator, and printing the book, all steps they could conceivably do themselves. So there are people willing to pay for the convenience of not doing the work themselves and of having someone else do the quality control on the end result. I don't think people would pay the same price as they do now if there were an easy way to do this at home, but I'm not sure 100% of people would stop buying AI products either. Some people are, right now, willingly buying AI-made novels on Amazon, despite already being able to generate them at home.