I've stepped back from this thread for a bit, but I want to come back because I think something is being lost on both sides of the current debate. I'll attempt to sketch out a middle-ground position...
First:
LLMs do not function in a way resembling humans. There are some vague apparent similarities to how people without strong language skills will cobble together words they don't understand, but the same techniques can be done using blocks of wood.
This is 100% true. The right way to understand an LLM, imo, is not as "thought". What it is doing is not analogous to how humans think. LLMs can produce things that look like what humans would have created, but they don't go through the same process. They can't grasp universals; they don't have will.
And there is a big danger here because humans really like anthropomorphizing things. Obviously people have already done this with chatbots. But they also do it with stuffed toys or pets. (I remember a high school science teacher:
"The atom is happy when it has a full valence shell... you know when I say happy I don't actually mean happy, right?")
So when something produces a convincing simulacrum of human-written text, it is easy for humans to ascribe humanity to it.
My guess is that AI that can think for itself will exist within 10 years (though the general public might not be told about it for quite some time after). Sentience will take another 10-15 years as that ability to think for itself is refined and massively augmented while the hardware (and, consequently, power requirements) are miniaturized to manageable sizes. Creativity will follow shortly after that.
I don't think this will be the case. (At least not with LLMs; I'll reserve judgement on hypothetical future technology.)
Regarding creativity: I think creativity is better described as a process than an outcome. LLMs (or Stable Diffusion, or whatever they are using for the art; I don't have much knowledge of those) aren't capable of that process, even if what they create looks like the result of human creativity.
At the same time, as someone mentioned earlier, meaning is created both by the act of creation and by the act of observation. We can find beauty where it was not intended, or in scenarios where no human intelligence was involved. Think of images of Saturn, for example. (If the fact that humans had to build a camera bothers you, think of a waterfall.)
I think the outputs of AI can certainly appear, and can certainly be, beautiful, even if the AI itself is not intelligent. No, I don't have a specific example in mind. Maybe they aren't there yet. Maybe they never will be. But if they fall short, that's more a technological limitation than a fundamental limitation rooted in how the outputs are generated.
Y'all said the same thing about cryptocurrency, NFTs, THE BLOCKCHAIN!, etc.
I also want to reject this counterargument. The current hype around AI is almost certainly overblown. At the same time, these models are already very useful technologies, in ways that crypto and NFTs are not.