ChatGPT lies then gaslights reporter with fake transcript

But... that's the point, isn't it? That's exactly what the video shows: if someone cannot trust the results, you aren't great at search! If you return things that don't exist, that's being BAD at search.

Especially when you are unreliable AND several times more costly in energy use and computing power than regular search. That's pretty much the opposite of "great" now, isn't it?

It is important to note that LLMs don't actually "search". Where a traditional search engine is a combination of an exhaustively built and maintained catalog and a lookup over it, an LLM is basically a very complicated text predictor. If the pieces of information you want happened to be given sufficient weight when the model was trained, you'll get your information. If not, you'll get whatever *did* happen to have the weight, with no regard whatsoever to whether the content is true, which is where "hallucinations" come from.
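To make the "whatever did happen to have the weight" point concrete, here is a toy sketch. The tokens, weights, and the fictional place name are all invented for illustration; a real LLM is a neural network, not a lookup table, but the failure mode is the same: it emits the highest-weight continuation with no fact-checking step.

```python
# Toy "language model": a table of next-token weights standing in for
# whatever a (hypothetical) training run produced. Everything in this
# table is invented for illustration.
WEIGHTS = {
    ("the", "capital"): {"of": 0.9, "city": 0.1},
    ("capital", "of"): {"france": 0.7, "freedonia": 0.3},
    ("of", "france"): {"is": 1.0},
    ("france", "is"): {"paris": 0.95, "lyon": 0.05},
    # "Freedonia" is fictional, but the model has weights for it anyway,
    # so it will confidently continue the sentence regardless.
    ("of", "freedonia"): {"is": 1.0},
    ("freedonia", "is"): {"fredville": 1.0},
}

def predict(context, n=4):
    """Greedily append the highest-weight next token, up to n times.

    There is no notion of truth here, only of which token tends to
    follow which -- hence "hallucinations" when the weights point at
    something that doesn't exist.
    """
    out = list(context)
    for _ in range(n):
        options = WEIGHTS.get((out[-2], out[-1]))
        if not options:
            break  # no learned continuation for this pair
        out.append(max(options, key=options.get))
    return out

print(predict(["the", "capital"]))    # a true continuation, by luck
print(predict(["of", "freedonia"]))   # a confident "hallucination"
```

The model answers both prompts with exactly the same mechanism; whether the output is true never enters into it.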
It's great at certain types of searches. For example, translating plain language into a search. Example: today, a student wanted to know how to convert one type of audio file to another on her MacBook, and Google was giving her confusing results, so I told her to try Chat, which gave her step-by-step instructions.
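The post doesn't say which tool ChatGPT actually suggested; a typical answer for this task points at ffmpeg (which has to be installed separately, e.g. via Homebrew). A minimal sketch of building that kind of command, with the filenames invented as examples:

```python
import shlex

def ffmpeg_convert_cmd(src: str, dst: str) -> list[str]:
    """Build an ffmpeg command that converts src to dst.

    ffmpeg infers the output format from dst's file extension, so a
    conversion like .m4a -> .wav needs no extra flags.
    """
    return ["ffmpeg", "-i", src, dst]

cmd = ffmpeg_convert_cmd("recording.m4a", "recording.wav")
print(shlex.join(cmd))  # paste into Terminal, or pass cmd to subprocess.run
```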

In testing it on academic searches on humanities topics, I haven't found that it produces hallucinatory references often, but it does tend to go to the same sources over and over (very iconic, heavily cited ones), even on quite disparate prompts.