Some thoughts:
1) It isn't really mimicking humans, in that it isn't learning the way humans learn and isn't thinking the way humans think. People tend to get pretty upset if you imply that it is.
2) But it is mimicking in a loose sense: it takes in lots of data and then outputs something new. Just at a very rough level.
3) It's an open question how much training data is retained in the models. Things that are ubiquitous (Shakespeare, Mario) can be reproduced exactly. Things that occur only in rare contexts (e.g., a phrase that follows a particular set of words just once in the training set) can also be reproduced verbatim.
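One rough way to probe the memorization question in (3) is to compare a model's output against a known training snippet and measure the longest verbatim run they share. The sketch below is a toy illustration, not a real memorization audit: the "training" and "output" strings are made up, and real studies prompt an actual model with a prefix and check its continuation. It just shows the kind of string comparison involved, using Python's standard-library difflib.

```python
from difflib import SequenceMatcher

def longest_verbatim_overlap(training_text: str, output_text: str) -> str:
    """Return the longest substring the output shares verbatim with the training text."""
    matcher = SequenceMatcher(None, training_text, output_text, autojunk=False)
    m = matcher.find_longest_match(0, len(training_text), 0, len(output_text))
    return training_text[m.a : m.a + m.size]

# Toy example: a "ubiquitous" line gets reproduced exactly; novel text does not.
training = "To be, or not to be, that is the question."
memorized_output = "The model wrote: To be, or not to be, that is the question."
novel_output = "The model wrote a fresh sentence about the weather."

print(len(longest_verbatim_overlap(training, memorized_output)))  # long verbatim run
print(len(longest_verbatim_overlap(training, novel_output)))      # short incidental overlap
```

A long shared run suggests regurgitation; a short one is just the incidental overlap any two English strings have. Real extraction studies use this same basic idea at scale, with thresholds on the length of the verbatim match.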
4) Because of (3), some argue the models are not outputting new things, just copying things from the training data. On this view, the models effectively contain copyrighted material.
5) However, they also transform and modify the output substantially. The explicit goal of training is not to copy, and direct copying is considered a defect. So what they are doing is distinct from, say, downloading a bunch of PDFs and letting the user peruse them.
6) Patches and workarounds for (4) might suppress the copying without eliminating the copyrighted data from the model. So even if the model isn't outputting copyrighted material, that material is still there, buried, and it is necessary for the model to produce what it produces. The model relies on copyrighted works even when it does not reproduce them.
7) Some people are upset at the idea of training on copyrighted material at all, even if the generative program could never reproduce it. They view it as different from, say, a human reading it. Part of this may depend on how the data is accessed: a NYTimes subscription may entitle a human to read and learn from the articles, but not a program.