Show why that’s a necessity in this scenario. Else it’s just wishful thinking.
Hardly. I don't wish for this at all. It is just a consequence of what you're insisting upon.
Do not start from the position that middle-men are immune - that part is wishful thinking. Historically, they aren't immune.
Start from the position that a publisher can sue anyone involved in the violations, because, historically, that has been the way of things. Over time, those suits led to carve-outs so that ISPs, hard-drive makers, and the like are not liable for the actions of their users. But those carve-outs have been much rarer for applications like Napster or Kazaa.
Also, do not start from the idea that generative AIs are even "middle-men". The issue at hand isn't really exact duplication of existing works, as with a Xerox machine. Generative AI goes beyond that, creating entirely new works with enough points of similarity to be infringing. That's a more active role than a Xerox machine plays.
So, when an infringing work comes out, your AI-maker is getting sued. They will lose, because they demonstrably do put out works with enough points of similarity to be infringing. The generative AI is guilty. Sorry.
In the past, your ISPs argued that they did not know, and could not control, what data moved over their wires. Your AI-maker doesn't have that argument. They can control what data is in the system, and they do know what requests are made. So, no carve-out for that.
Generative AI makers have tried to control what kinds of requests they allow, and they may offer that up here. But those controls generally suck and are easy for users to circumvent, so more infringing content will be created, and we will go through this loop again, this time with those controls off the table.
From there - you don't want the AI company to limit their data to only stuff they've licensed, and you don't want enforcement to fall on the generative AI company? The rights holders will then... rightfully... insist that data on who is making what requests be handed over to aid enforcement. Use of generative AI will then be restricted to authenticated users whose activity is tracked and periodically handed over to auditors for review.