Yep. This is the fundamental problem with all such "predictive text" structures (indeed, with almost all neural networks). Their fundamental nature, as I've said elsewhere, is purely syntactic, not semantic. You can do an awful lot with a very powerful syntactic model; human syntax encodes a great deal of information, especially in fixed-word-order languages like English. But you cannot do several really basic things, because syntax, no matter how advanced, cannot encapsulate the meaning and significance of the things that appear in the structure.

It's interesting spotting the AI-ness of it. The words are well structured and make good grammatical sense, but the reasoning doesn't go very deep. Especially when it's given nouns as prompts, it doesn't dive into the implications of the things those nouns describe. It's more an exercise in ticking boxes and filling in gaps in the boilerplate.
And if you could get an AI that actually operates on semantic content, not just syntactic content... you'd have something shockingly close to a true general intelligence, maybe even an actual general intelligence. One that would be, in at least some ways, genuinely equivalent to a human being.