I find the earnestness with which the journalists in this video are astonished comical, because I assumed everybody already had some (usually less elaborate and extensive) version of this experience with AI one to three years ago, but I suppose a television audience will have lots of people who have no firsthand experience. The double-down gaslight is a less common AI output when pressed than correcting itself to agree with you (whether or not you're actually correct), but use these things enough and it will happen to you.
The usefulness of large language models is that they are workhorses with near-instantaneous results, not that they are reliable, truthful, or knowledgeable. They are assistants that you need to monitor carefully and verify wherever it might actually matter. Never use AI as a source for information about anything consequential. In terms of presenting factual information, it's great (or adequate, which for people under severe time crunch equals great) for rough-drafting writing about a topic you already know and can edit its output on, and it's particularly useful in that it will probably remember something important on the topic that you would have forgotten to mention. It can be very useful in helping you find topics you need to research further. But if you are asking it questions like it's some sort of oracle, and believing its answers, you are using it wrong (even if the folks hawking it encourage you to ask it questions like it's some sort of oracle).