I do not think it is just AI telling people things, but some believe it to be THE answer and not just a tool to help look at all the sides. It will get worse once kids lean on it and grow with it, same as cell phones and social media. We are not teaching people how to think, and it is easier to just ask something to skip to the end with a TL;DR.
We were already not teaching people how to think - that is the key problem, and part of why AI has been such a big hit.
Across the West (and in the East too), we've seen a decline in the value placed on critical thinking and analysis skills, and it was never especially high even at its peak, probably in the early 1970s. And that decline isn't just in educational curricula; it runs across the culture as a whole.

News, particularly, has found that people would rather be told how to think about an issue than be encouraged to think about it critically. Hell, people will literally change channels or switch newspapers to find one that tells them how to think (cf. The Times in the UK, possibly the most contemptuous "this is how you must think" paper in Britain, and I'm including the Mail in that!). Worse, completely fake and dishonest "balance" has led even well-meaning news channels into pushing absolute drivel as if it were on par with science or logic. Laws and political systems are treated as inconveniences to emotion-based goals, and anyone who tries to offer a reasoned opinion or show the slightest nuance or complexity gets short shrift, because emotional content, often completely irrational and obviously false, gets more views, and most interviewers seem to prefer worthless soundbites over substantive interviews and comments.
Frankly, it's shocking we've lasted this long when much of our own media is basically trying to make us as stupid and emotional as possible, and refuses to hold politicians, corporate spokespeople, and the like to account, because doing so would require some effort and would get fewer views than just letting them spew nonsense.
AI obviously has the potential to compound this issue, but I'm not sure it's as big a change as you're suggesting. I think it's even possible that future generations, having grown up with it, will turn out to use it more responsibly than current ones do. Not that there won't be problems in the meantime.