I suggest that you look into some of the recent experimental research being done with LLM AIs and novel problem solving. They are not simply parroting, as most of us, including me, initially characterized them. They are running into limitations and evolving their own solutions by repurposing their design in completely unexpected, unprogrammed ways. As in the example above.

Yeah, as someone trained in academic philosophy, I'd say the capabilities of LLMs have actually reinforced the uniqueness of human rationality, not challenged it in any way.
Trying to talk a chatbot into actually reaching valid logical conclusions about anything shows its limitations really quickly.
Evidence now suggests that they are, in fact, creating their own reality models in order to solve tasks that would seem to be beyond their designed parameters.
As for the uniqueness of human rationality... eh. What does "uniqueness" really mean? Every human mind is a unique thing in the universe, sure, and some of the processes that lead to human problem solving may be fairly exclusive to human brains. But that doesn't mean these are the only ways to achieve the same, or a similar, outcome. LLM AIs are demonstrably able to achieve outcomes that were until very recently thought to be uniquely human, including writing scripts and screenplays that are, or soon will be, competitive with most human products.
This is not as high a bar as once thought. Which, if you've read a typical sitcom script, is probably not a total shocker. Either way, as long as we have Jenna Ortega to step in and make some fixes, the finished product will be all right.