That's the troubling thing about generative AI: it turns out that a lot of tasks we thought required an understanding of "true human psychology" or some other uniquely human trait can be executed very competently by a machine.
There are two things I am confident of.
First, that humans have absolutely no understanding of what intelligence is or what is hard. All through history, until really the past few years, we assumed that what was intelligent was what required rare skill in a human, usually with lots and lots of training and study to develop - often training and study that was beyond the average human. So, for example, we assumed playing chess well was hard and a mark of intelligence, or that being able to paint an object was hard and a mark of intelligence. Early in the history of AI development we even had people teaching computers how to play chess on the assumption that once the software grew complex enough to play chess well, it would hit some threshold where general intelligence would just naturally emerge.
It turns out that the things we assume require intelligence are things that we are simply morons at. Multiplying two big numbers in your head is no proof of intelligence, because it's computationally very inexpensive; it's just that humans are generally morons at remembering numbers, and when we see someone who is barely more functional than the normal moron level humans reach, we go, "Wow, that's amazing." Statistics? Humans are so bad at statistics that it took them 3000 years after inventing math to even conceive of the field, and they remain so unable to understand statistics that they misuse them all the time.
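To put a rough number on "computationally very inexpensive," here is a minimal sketch in Python (standard library only; the 1,000-digit size is just an arbitrary choice for illustration) timing a multiplication no human could attempt mentally:

```python
import random
import time

# Two random 1,000-digit integers - far beyond anything a human could
# hold in working memory, let alone multiply mentally.
a = random.randrange(10**999, 10**1000)
b = random.randrange(10**999, 10**1000)

start = time.perf_counter()
product = a * b  # Python's built-in arbitrary-precision multiplication
elapsed = time.perf_counter() - start

print(f"product has {len(str(product))} digits")
print(f"computed in {elapsed * 1_000_000:.1f} microseconds")
```

On ordinary hardware this finishes in a handful of microseconds, which is the sense in which the "hard" human feat is trivial for a machine.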
The things that humans are almost universally good at, the things that nearly every human can do a good job at, we don't think of as intelligence, and because the skill is universal we don't prize it. But it turns out that those nearly universal abilities are often much more impressive than the stuff we are bad at and have to find unusual members of the population to train into performing.
What this means from an AI researcher's perspective is that the "easy jobs" we want to automate away because we don't find them rewarding are the least likely to be automated away, while the "hard jobs" that so many people assumed would be beyond AI's ability are very often the easiest to automate, because it isn't hard to create an AI better than someone who is really just slightly better than the average human moron. We actually did this decades ago, when the job of "computer" was replaced by mechanical and electronic computers to such an extent that when we use the word "computer" we don't even think of a highly skilled and reasonably well-paid professional - we think of a machine. And more of that is coming.
And the second thing I'm absolutely confident of is that humans think they are super special for no good reason, and that whenever you hear a human say "a computer will never be able to do that, because it will never truly understand something the way a human can," there is a good chance we're already past that point. For me, this happened almost thirty years ago while I was watching Deep Blue's rematch against Kasparov. There were multiple points in the play - most famously 36.axb5! axb5 37.Be4! - where Deep Blue effectively passed the Turing test, because everyone, including Kasparov, assumed that that kind of deep understanding of what chess is about was beyond the capacity of a "soulless machine." Maybe the even bigger moment for me, though, was game 5 and the endgame that changed the world, where Deep Blue secured a draw with a line of play that not only caused commentators to suggest Deep Blue was receiving human input, but whose point the live commentators did not even see until Kasparov himself conceded; they couldn't see more than two or three moves ahead and just assumed Deep Blue was confused. Literally right up until the moment Kasparov conceded, you had people saying computers would never beat a human because they lacked some uniquely human trait.
The biggest danger of AI is not that it is going to act like a human. The biggest danger is that within just a few years it will be wittier, funnier, more knowledgeable, and more engaging than anyone you know. The real danger isn't Terminator, where AI decides to fight us for mates and material possessions and power like it's another ape. The real danger of AI is WALL-E: that we create an AI that cares for us and pampers us and does everything for us.