FrogReaver
As long as I get to be the frog
@Gorgon Zee brought forth the main points of my would-be reply much more eloquently than I could, so I'll leave it at that.

And you won't see a lot of difference between "seems like a friend" and "is a friend" until after you find out they've been skimming from your wallet. The difference is in the details.
A passage can seem coherent when each sentence is grammatically sound and reads well enough on its own. But the sentences fail to be actually coherent when strung together if one does not logically follow from the next, information is missing, repetition is not internally consistent, or topics change without explanation.
I do not have an actual example document at hand, and this discussion is not important enough to me to go and build one myself to support my position.
I have, however, read narratives created by ChatGPT. You may start with Greg, Sam, and Beth as characters, but on page two Sally appears with no note of who she is or how she got there. Greg's hair color changes several times over the course of a few pages, and in one paragraph we are told the characters are driving along in Beth's VW, yet a page later they are standing in an ice cream parlor with no transition.
The AI can't form a narrative in which event A causally leads to B, which leads to C. When it is forming paragraph 17, it is not referring back to any prior paragraphs for content, context, or continuity, because an LLM doesn't construct its output on that basis. It has no concept of causality, of an "event" in a narrative that has impact and consequences later in the narrative. It has no concept of a character as a person who needs consistency of personality, behavior, desires, and so on.
The LLM is effectively only doing short-range pattern matching of words and punctuation. The semantic content isn't relevant.
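To illustrate what "short-range pattern matching with no semantic tracking" looks like in the simplest possible form, here is a toy bigram sketch of my own (made-up corpus text, vastly cruder than any real LLM, offered only as a caricature of the idea): each next word is sampled purely from counts of what followed the previous word, so the output can look locally plausible while tracking no characters, events, or causes.

```python
import random
from collections import defaultdict

def train_bigram(text):
    """Record, for each word, every word that followed it in the training text."""
    words = text.split()
    follows = defaultdict(list)
    for prev, nxt in zip(words, words[1:]):
        follows[prev].append(nxt)
    return follows

def generate(follows, start, length=20):
    """Pick each next word by sampling a follower of the current word only.
    Nothing further back than the immediately preceding word affects the choice."""
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

# Made-up toy corpus: each generated sentence fragment follows word-by-word
# patterns, but there is no memory of who did what a sentence earlier.
corpus = ("Greg drove Beth's VW to the ice cream parlor . "
          "Beth drove Greg to the store . Sam drove to the parlor .")
model = train_bigram(corpus)
print(generate(model, "Greg"))
```

Run it a few times and you get grammatical-looking snippets with no continuity between them, which is the kind of failure being described above, just at a much smaller scale.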
You don't see the problem???
Well, let me ask you - how many people marry their laptops? Pretty much none, right? And the ones who do try would be looked at as... strange, right? Ergo, the person-to-person relationship is not the same as the person-to-machine relationship. Therein lies the problem.
What people expect from other people and what they expect from machines are not the same. We know that our fellow humans can be unreliable, but we tend to expect our machines to be reliable at what they do. That's pretty much the entire point of having a machine do stuff for you rather than having another human do it.
I've already laid that out in broad strokes - the LLM does not have abstract thought or an understanding of the content of the words it is putting out.
One thing I would push back on for both of you is that without a strong theory of human cognition/thought, it's hard to really say whether machines think/understand or not. It's been an open question in computer science for a long time. Probably the best-known proposal for answering it has been the Turing test, but even at best that would just show an amazing similarity between machine responses and human responses, something ChatGPT already amazes us with. Anyway, back to the main point: we really don't understand thought well enough to properly define it in the first place. And there's always the risk of treating human thought as the only type of thought.
I mean, what would a thinking machine look like? What could it do to prove it thinks? What can you do to prove you think?
What does the human brain do to understand language? Can we even explain that? Etc.
I think we are a long way from definitive statements on almost any of this.