While it's a vivid cautionary example, don't the same concerns and cautions here apply broadly to internet search tools like Google?
And the end result of a search is websites, which all present things in a way people are inclined to trust... especially websites with an agenda. But people have learnt that "I saw it on the Internet" is a warning flag, ever since the quote "Abraham Lincoln said never to trust anything on the Internet" appeared on the Internet...
Are LLMs more worrisome because they involve less work sorting through results, and thus fewer obvious points for the user to read critically and assess the sources?
They are new. And outside of academia, a lot of people might not bother to check sources and will trust their search engine like gospel. That's reasonable for questions where the overwhelming majority of sources will be right (if you ask for the proportion of oxygen in the air, there is little chance you'll get a wrong answer), which reinforces that initial trust.

People need some time to adjust to any new technology. They adjusted to state-controlled information (by listening to two different national radio stations), they adjusted to privately-controlled information (by assigning trust values to different media outlets), they adjusted to Wikipedia, and they'll need time to adjust to AI and learn that, to use the tool correctly, they need to check the sources for anything important.