I don’t see a lot of difference between ‘seems coherent’ and ‘is coherent’.
And you won't see a lot of difference between "seems like a friend" and "is a friend" until after you find out they've been skimming from your wallet. The difference is in the details.
A passage can seem coherent when each sentence is grammatically sound and reads well enough on its own. But the sentences fail to be actually coherent when strung together if one does not logically follow from another, information is missing, repeated material is not internally consistent, or topics change without explanation.
To me, the ‘what is thought’ question is too philosophical.
An example here might be compelling.
I do not have an actual example document at hand, and this discussion is not important enough to me to go and build one myself just to support my position.
I have, however, read narratives created by ChatGPT, for example. You may start with Greg, Sam, and Beth as characters, but on page two Sally appears with no note of who she is or how she got there. Greg's hair color changes several times over the course of several pages, and in one paragraph we are told the characters are driving along in Beth's VW, yet a page later they are standing in an ice cream parlor with no transition.
The AI can't form a narrative in which event A causally leads to B, which leads to C, because when it is forming paragraph 17 it is not referring to any prior paragraphs for content, context, or continuity; an LLM doesn't construct its output based on content, context, or continuity. It doesn't have a concept of causality, of an "event" in a narrative that has "impact and consequences" later in the narrative. It doesn't have the concept of a character as a person who needs consistency of personality, behavior, desires, etc.
The LLM is effectively only doing short-range pattern matching of words and punctuation. The semantic content isn't relevant.
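If it helps to make "short-range pattern matching" concrete, here's a toy sketch (not how a real LLM works internally; a real model conditions on a much longer learned context, but this is the simplest possible short-range pattern matcher, and the failure mode is the same in kind). The training snippet and names are just made up for illustration.

```python
import random
from collections import defaultdict

# Toy illustration: an order-2 word-level Markov chain. Each next word is
# chosen by looking ONLY at the previous two words; nothing earlier in the
# text exists as far as the model is concerned.

def train(text, order=2):
    """Map each tuple of `order` consecutive words to the words seen after it."""
    words = text.split()
    table = defaultdict(list)
    for i in range(len(words) - order):
        table[tuple(words[i:i + order])].append(words[i + order])
    return table

def generate(table, length=25, seed=None):
    """Emit `length` words, each chosen using only the last two words of output."""
    random.seed(seed)
    state = random.choice(list(table.keys()))
    out = list(state)
    for _ in range(length):
        choices = table.get(state)
        if not choices:  # dead end: no continuation seen in training
            state = random.choice(list(table.keys()))
            choices = table[state]
        out.append(random.choice(choices))
        state = tuple(out[-len(state):])
    return " ".join(out)

# Hypothetical training snippet, just to have something to chain on.
sample = ("Greg drove Beth's VW to the ice cream parlor . "
          "Beth drove Greg to the parlor in her VW . "
          "Sam walked to the ice cream parlor with Beth .")

if __name__ == "__main__":
    print(generate(train(sample), length=25, seed=1))
```

Each two-words-to-next-word step is locally plausible, but nothing in the table tracks whether Greg is currently in the car or the parlor; any longer-range consistency is accidental. That's the gap between locally well-formed and actually coherent.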
Neither can people. When put in this context, I’m not sure I see the implicit problem.
You don't see the problem???
Well, let me ask you - how many people marry their laptops? Pretty much none, right? And the ones that try to do so would be looked at as... strange, right? Ergo, the person-to-person relationship is not the same as the person-to-machine relationship. Therein lies the problem.
What people expect from other people and what they expect from machines are not the same. We know that our fellow humans can be unreliable, but we tend to expect our machines to be reliable at what they do. That's pretty much the entire point of having a machine do stuff for you rather than another human.
I find the questions of when it can be trusted to do so, and why it doesn’t always do so, far more interesting. Some of that may even be controllable in the near future.
I've already laid that out in broad strokes - the LLM does not have abstract thought or an understanding of the content of the words it is putting out.