That is unreasonable, given that the AI can reproduce copyrighted material even when the user didn't ask it to. The user cannot reasonably be aware of every piece of copyrighted content in existence, so they cannot always recognise when the AI has done so.
It is not "unreasonable". While I agree that no user can be aware of every bit of copyrighted material, the article uses prompts generic enough that such a user should, in all likelihood, realise the result infringes on a copyright.
Barring that, it is simple enough for a user who has mistakenly used an infringing AI image to remove it when it is brought to their attention. If monetary gain has already occurred, that user assumes responsibility for paying any penalties that might be levied.
I'm sure AI models will have more and more safeguards against this sort of thing in the future, but for the present, a user who agrees to use a potentially flawed system is accepting the responsibilities and pitfalls that come with it.
Think of it like this: if a company releases a beta OS (which likely still has flaws) and you agree to install it knowing those risks, the responsibility is now yours.
In that light, I do believe all AI models should carry a "this system could create material that infringes copyright" clause.