tomBitonti
Having spent a few years teaching elementary music and a grades 5-6 classroom...
Yeah, the process we use in the classroom looks a lot like the one used to describe AI training: repeatedly show correlated materials, and sooner or later those correlations sink in. Reward successful output of the expected material given the input.
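That show-then-reward loop can be sketched as a toy association learner. This is purely illustrative (the stimuli, responses, and update rule are made up for the example, not taken from any actual classroom method or AI training code): present a stimulus, let the learner respond, and strengthen or weaken the association depending on whether the response was rewarded.

```python
import random

def train(pairs, epochs=200, lr=0.1, seed=0):
    """Reward-driven association learning on (stimulus, response) pairs."""
    rng = random.Random(seed)
    weights = {}  # learned association strength per (stimulus, response)
    responses = sorted({r for _, r in pairs})
    for _ in range(epochs):
        stimulus, correct = rng.choice(pairs)
        # Respond with the strongest learned association for this stimulus.
        guess = max(responses, key=lambda r: weights.get((stimulus, r), 0.0))
        # Reward a correct response, penalize a wrong one.
        reward = 1.0 if guess == correct else -1.0
        key = (stimulus, guess)
        weights[key] = weights.get(key, 0.0) + lr * reward
    return weights

# Hypothetical solfege drill: syllable -> note name.
pairs = [("do", "C"), ("re", "D"), ("mi", "E")]
w = train(pairs)
recall = {s: max("CDE", key=lambda r: w.get((s, r), 0.0)) for s, _ in pairs}
print(recall)
```

After enough repetitions the rewarded correlations dominate and the learner recalls each pairing correctly, which is the whole point of the analogy: nothing here "understands" solfege, it just accumulates rewarded correlations.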
It's also interesting to note that the same training mode used with small multilayer AI nets works with ex-vivo neural organoids learning to play video games...
Which makes LLMs look like they may be even closer to how the brain works than many think.
The fundamental issue: Many humans believe humans are a unique clade with a unique place in the universe. This is known by several names... but the one I prefer is "Humanocentrism"... and "Exceptionalism of Humanity" has been used by a few researchers recently...
The evidence coming out of animal research increasingly shows that humans differ in ability from other apes only as a matter of degree, not of fundamental structure, and that many mammals are much closer to us than most people are comfortable thinking about. Many birds in the 1.5-5 kg range are a lot smarter than people want to think, and humanocentrists in the review process block a lot of papers from publication simply by dismissively citing "clever Hans" and "pareidolia."
It is much easier for many people's worldview to see humans as exceptions than to accept that we're just really smart animals, different only in degree.¹ And for a great many, that view is religious in origin.
To be clear: arguing that people and computers "think" differently is not the same as arguing that computers can or cannot achieve consciousness, either in practice or in principle. I am arguing the first (computers think differently), not the second. For the record, I believe computers can achieve consciousness in principle, though I don't think they are that close to it yet.
TomB