I was reading a paywalled CNN article (on someone else's machine) that exposed ELSA, the FDA's own AI tool, as being just as prone to hallucinations and misrepresentation as general-purpose LLMs. Some of the employees interviewed said ELSA was not actually saving them time, because they had to be extra diligent in vetting its output.
