Judge decides case based on AI-hallucinated case law

My prior for how to sanction LLM use is to apply the same standards you would without LLM use. It is the responsibility of the user to use it responsibly.
This is exactly what my org’s stance is at the moment. Any LLM use requires a manual intervention or approval, and the user of the LLM is responsible for the content, just as if they had not used AI to generate it.

AI companies are pushing autonomous agents heavily at the moment — agents that take action without human intervention. The guidelines I’ve developed state that these are high risk and require review by a committee at the Vice President level. So far we’ve not seen anyone seriously want that ability, but I can see us allowing it for cases where the consequences of failure are minimal. An example might be flagging a bill for a procedure with a certain code. If that is wrong, worst case is it gets bounced back from the insurers or we end up paying more than we should. So I can see us allowing it in the future for some things, but it ups the risk of using LLMs considerably.
 


How are you dealing with any privacy concerns? In my org we are bound by FIPPA and anyone who fed private data to a public LLM would get roasted alive, for the legal exposure we would face.
 

I saw that! But the $10k was just it reminding her that she had $10k sitting in an account. I suppose that qualifies as help, but not in the way people are probably hoping. :P


"But other ChatGPT answers were much more fruitful for Allan, including one idea to search for money she may have forgotten about in apps she had on her phone.

"My husband was actually like, 'Oh, didn't we have a brokerage account?'" Allan recalled.

"There's $10,200 sitting in this account that is available. Like I could literally cry right now," she said in a TikTok video."
 

Here’s the thing, though: nearly every safety law, public health measure, OSHA regulation, professional conduct standard, or licensing requirement gets some kind of pushback. Some frame them as an insult to their intellect and autonomy (where none was meant), or as government overreach. Others merely ignore them and do as they please.*
It doesn't help that a lot of them get proposed because someone (or a lot of someones) was doing something stupid. Arizona's "stupid motorist law" is an example.
 


I'd say it's an example of the opposite. Instead of imposing a regulation on everyone because of the stupidity of a few (for example, cars are dangerous, let's ban cars), it's the kind of law that holds them responsible (you want to be a moron with a car, sure, go for it, but then bear the consequences). A good reason for the first approach would be when the reckless conduct of the morons endangers other people (for example, if we allow people to play with fire in a forest, it might burn a house down later), which is different from protecting people from themselves.
 


I agree, but it was AI vs. telehealth. I wonder if AI vs. in-person doctor visits would yield different numbers.

This is a valid observation, though (unfortunately?) telehealth is probably the way of the future in developed countries: an aging population with greater health needs, and a shrinking younger population from which to train doctors.

The difference in perceived safety between in-person visits and telehealth is apparently quite small from the patient's point of view, but I haven't found (in a cursory search) a study focusing on diagnosis:


This article (Telehealth, in-person diagnoses match up nearly 90% of the time) seems satisfied with an 86% match in diagnoses between in-person and online visits, which I fail to understand (that seems abysmally low to be satisfied with).
 
