The models use statistical analysis to produce text, which is a very different method from the one the human brain uses.
Given that we haven't a clue how the human brain works, it amazes me that you would confidently declare that. How on earth do you know what method the human brain uses? Go ahead and win a Nobel Prize, and plenty of other acclaim, by revealing such secrets of the mind.
Let me take a simpler example: Markov chains. They're early predictive text: given this word, what words come next in a corpus, and with what probability for each? A chain can condition on multiple preceding words as well, and it can string together sentences much like your autocorrect can (see the sketch below). But even if you trained it on every book you've ever read, it would not be putting something together the way a human author does.
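For concreteness, here is a minimal sketch of such a word-level Markov chain in Python. The function names and the toy corpus are mine, purely for illustration: the model is just a table mapping each preceding context to the words observed to follow it, and generation is repeated sampling from that table.

    import random
    from collections import defaultdict

    def build_model(corpus, order=1):
        # Map each run of `order` preceding words to every word observed to follow it.
        words = corpus.split()
        model = defaultdict(list)
        for i in range(len(words) - order):
            model[tuple(words[i:i + order])].append(words[i + order])
        return model

    def generate(model, order=1, length=20):
        # Start from a random context, then repeatedly sample the next word.
        # Duplicates in the follower lists make the sampling frequency-weighted,
        # which is exactly "the chances for each" mentioned above.
        context = random.choice(list(model.keys()))
        out = list(context)
        for _ in range(length):
            followers = model.get(tuple(out[-order:]))
            if not followers:
                break
            out.append(random.choice(followers))
        return " ".join(out)

    corpus = "the cat sat on the mat and the dog sat on the mat and the cat ran"
    model = build_model(corpus, order=1)
    print(generate(model, order=1))

Raising `order` conditions on longer word runs and makes the output more coherent, at the cost of more often parroting the corpus verbatim.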
That's not clear to me at all. When I was a younger, naive software engineer, I always imagined that one day we'd get these Turing-grade AIs, I'd interact with them, and I'd be forced to conclude that they were intelligent because I couldn't distinguish them from a human. But that's not what has happened at all. Instead, it's been obvious from the start that the current generation of AIs are about as sentient as bricks. The really strange thing is that the more I interact with them, the more I realize that interactions with humans have the same flaws and patterns. The more I interact with AI, the less obviously sentient or intelligent, in the sense I had assumed, humans seem to become. It's not at all clear how humans produce speech, or why, and it could be that underneath there is just some predictive text rendered in biological form. I've had to overturn all my preconceptions about how intelligence worked and how language worked. The sense/reference model is no longer big enough or complete enough to describe what is going on.
There are elements and algorithms that humans have and AIs currently lack, or that haven't yet been integrated together in interesting ways, sure, but that's coming fast.
I was watching Deep Blue play Kasparov live about 25 years ago, and in the final game Deep Blue began playing an unusual sequence while Kasparov had a pawn advanced to the seventh rank. The commentators, experts in chess, were saying on the broadcast, "Well, this is typical of computer play. The AI is unable to reason about the impact of a promoted pawn on the board, or else it has foreseen Kasparov's win and is stalling. Computers will never be able to defeat humans at chess because they lack true imagination and true creativity. You need a human spirit to truly understand chess." (I'm not making this up. I may forget the exact words, but this is the sort of stuff they were saying.) And in the middle of this rant, Kasparov suddenly resigned. The commentators were dumbfounded. "Why has Kasparov resigned?" Several seconds passed, and then one of these experts said, "Because... it's mate in two?!?!" In two, mind you. Two moves! It wasn't just that imagination and creativity and actually understanding chess suddenly turned out to be algorithms and predictive ability; I had fully expected that. What I really discovered then was that humans weren't very good at chess at all, because the chess world was watching this and it took all of them until the last moment to even see what the computer was doing. Maybe Kasparov had seen it earlier, maybe not. But the chess world was by and large oblivious. I'd witnessed my first Turing-grade AI, and I realized that being indistinguishable from a human is strictly domain-dependent.
The exact text or the exact form of an image isn't what's being stored in the neural networks generated by reading the text or looking at the images. We don't know exactly what is being stored, but we do know it isn't a copy, or a compression, or anything like that. So if an AI mind stores something it learns from reading a text or scanning an image, how is that fundamentally different from me, with my meat brain, storing something I learn from reading a text or scanning an image? If you digitize my mental process so that it can be done faster, does it become a copyright violation just because you now find it more threatening? And if an image wouldn't be a copyright violation when produced by a human mind, how does it become one when produced by an artificial mind?
There is a fundamental axiomatic assumption among the zealots that this process is inherently theft, but I think that assumption is unwarranted and not really supportable. If I read a book and retain some impression of it in my mind, the copy in my mind isn't a copyright violation. It only becomes one if I reproduce the book in some fashion that would violate copyright, and neither the storage mechanism of these AIs nor the way they produce images inherently does that. So no theft has occurred. And if someone trains an AI on what is publicly available on the net, that is not an ethical violation as far as I can see. The whole point of intellectual property protection is to encourage innovation, not to stop it. The writers of this software have done perhaps the most innovative thing with human language since language was invented. It's not theft.