Gorgon Zee
Hero
This is exactly what my org’s stance is at the moment. Any LLM use requires manual intervention or approval, and the user of the LLM is responsible for the content, just as if they had not used AI to generate it. My prior for how to sanction LLM use is to apply the same standards you would apply without LLM use. It is the responsibility of the user to use it responsibly.
AI companies are pushing autonomous agents heavily at the moment — agents that take action without human intervention. The guidelines I’ve developed classify these as high risk and require review by a committee at the Vice President level. So far no one has seriously asked for that ability, but I can see us allowing it in cases where the consequences of failure are minimal. An example might be flagging a bill for a procedure with a certain billing code: if that is wrong, the worst case is that it gets bounced back by the insurers or we end up paying more than we should. So I can see us allowing it in the future for some things, but it raises the risk of using LLMs considerably.