It's a video in which, with a lot of emphasis and storytelling, the reporter relates how he discovered that AI hallucinations are a thing, then proceeds to demonstrate them to his co-host, who seems just as astonished to be discovering the obvious. It builds to the conclusion that AI will change our lives and workplaces but that we need to be careful, and it's accompanied by a commentary saying: "AI slop does exist, this is one of the examples".
This isn't a thread about the evil of AI in general: it ends with the reporter implying he'll keep using it (possibly more carefully than before, and maybe he'll educate himself on how to use the tool to reduce the occurrence of AI errors -- errors he obviously had no trouble detecting and correcting despite his professed total ignorance of the topic). A video about the evil of AI would end with a warning like "do not use it". I agree that the storytelling certainly frames the information as "look, AI is baaaaad" in a click-baity way, but that's reporting nowadays.
The accompanying commentary, "here is a single example of AI slop", proves only that AI hallucinations exist, which probably no one denies. It doesn't make this a negative thread on AI; it's a discussion about AI hallucinations, in which an opinion like "sure they exist, but with adequate precautions they're uncommon enough that AI still offers significant value for specific uses" sounds perfectly on topic.
If I posted an account of using a skill challenge in my D&D game where it went badly, concluding that skill challenges are certainly a popular part of the rules but aren't great all the time, wouldn't you consider someone replying "my experience with skill challenges is overall better than yours, and despite the flaw you ran into, it's nonetheless a good mechanic" to be on topic?