Judge decides case based on AI-hallucinated case law

How are you dealing with any privacy concerns? In my org we are bound by FIPPA, and anyone who fed private data to a public LLM would get roasted alive, given the legal exposure we would face.
Only approved tools can be used for data that is private, sensitive or restricted (includes PHI/PII, financial, etc. -- basically anything not public). Approval is by a committee including me, our VP for compliance and our VP for security. In general, to be approved a tool must:
  • Secure our data subject to HIPAA and other requirements.
  • Not use our data to train a model. Or if they do, secure that model so we are the only ones with access to it.
  • We typically (always?) require a contract that defines these requirements.
The second point is often a sticking point. Using our data to train their models is very valuable for another company -- it builds their IP. However, my current understanding of the law is that you cannot actually do this without the consent of the people whose data you are sharing: if we use their data to improve care of OUR patients, that's OK -- but using their data in ways that will not benefit them requires consent.

I am not a lawyer: The above is my understanding of the law and I believe that others may have a different / more lenient view. So do not take the above as fact, but simply as the interpretation that I use in my daily work.
 



I heard a story on the radio that young people were turning to AI to write Dear John letters, responses to text messages where the other person was emotionally hurt, etc.
“Dear John,

It’s been a wonderful 6 years we’ve had together, and I wouldn’t trade them for the world. But I’m struggling with the demands your job is placing on you- the many weeks of travel you’re required to make are straining our relationship. I can’t stand the loneliness I feel while you’re away selling blahaj in Mordor, or bronies in Lankhmar.

And I’ve found comfort in the arms of our neighbor, Hastur.

So this is it.

Love,

Jane”
 

Once lost, I don't think you can regain public trust within a reasonable time. Sure, you can show that the measures you supported are effective, but why would they trust your figures if they don't consider you trustworthy? If people don't trust you about vaccines, they won't trust you showing that vaccines are useful (and will quote a random website saying that vaccines contain a 5G mind-controlling chip or something).

If they didn't get to their position by careful consideration of reliable data, careful consideration of reliable data is unlikely to get them out of that position either.
 

This is exactly what my org’s stance is at the moment. Any LLM use requires a manual intervention or approval, and the user of the LLM is responsible for the content, just as if they had not used AI to generate it.

My organization (which has HIPAA concerns) is more strict. We DO NOT use publicly available generative AI on work. Period. Not at all. Forbidden. But then, I can't connect my work laptop to Gmail, other external e-mail or file-sharing sites (like, say, Dropbox), or any social media site, including EN World, either. I cannot even use a thumb drive to transfer files to and from my laptop under most circumstances.

There's an internal genAI, set up with no access to the outside internet, that can be used for work, but since it only has access to internal sources, it is notably limited.

We are encouraged to play with genAI on our own, to enhance our own understanding for possible future implementations.
 

This article (Telehealth, in-person diagnoses match up nearly 90% of the time) seems satisfied with an 86% match in diagnoses between in-person and online visits, which I fail to understand (it seems abysmally low to be satisfied with).
So, I have been doing exactly this sort of analysis for a few months now. I learned early on that there are about 70,000 diagnoses, and when you do a diagnosis, you usually specify a single main diagnosis, and any number of secondary ones. So "a match" is actually hard to specify.
  • From a doctor's POV, if the main and (first) secondary diagnoses are switched, that's often pretty irrelevant, especially if they are inter-related.
  • There are a lot of fine differences between diagnosis codes. For example, if you are pregnant and have high blood pressure, you could have a diagnosis of "hypertension" or "Pre-existing hypertension complicating pregnancy, childbirth and the puerperium", or "Pre-existing hypertensive heart and chronic kidney disease complicating pregnancy, childbirth and the puerperium". Or is it a "cough" or an "acute cough"?
With these factors, it's not surprising that 90% is actually a pretty good number. I've tried simple matching of the primary diagnosis, which gives correlations in the 80-90% range for various combinations of doctor, billing and LLM diagnoses. I've created a measure that accounts for both secondary diagnoses and family similarities in diagnoses (using a weighted Jaccard measure), and that improves the numbers into the 85%-95% range. Reading the cited paper, I'm pretty sure they just use the simple measure.
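For the curious, here is a minimal sketch of what a weighted Jaccard measure along those lines could look like. The specific weights, the partial credit for codes that only share a 3-character ICD-10 family, and the example codes are illustrative placeholders, not the actual measure described above:

```python
# Hypothetical sketch of a weighted Jaccard similarity between two diagnosis
# lists (e.g., doctor vs. LLM). The primary diagnosis is weighted more heavily
# than secondaries, exact code matches count fully, and codes that only agree
# at the ICD-10 "family" level (same 3-character category) get partial credit.

def weighted_jaccard(dx_a, dx_b, primary_weight=2.0, family_credit=0.5):
    """dx_a, dx_b: lists of ICD-10 codes, primary diagnosis first."""
    def weights(dx):
        # Primary diagnosis counts double; every secondary counts 1.0.
        return {code: (primary_weight if i == 0 else 1.0)
                for i, code in enumerate(dx)}

    wa, wb = weights(dx_a), weights(dx_b)

    def overlap(src, other):
        # Full weight for exact matches, partial credit for family matches.
        total = 0.0
        for code, w in src.items():
            if code in other:
                total += w
            elif any(code[:3] == c[:3] for c in other):
                total += w * family_credit
        return total

    # Average the two directions, since weights differ per side.
    intersection = 0.5 * (overlap(wa, wb) + overlap(wb, wa))
    union = sum(wa.values()) + sum(wb.values()) - intersection
    return intersection / union if union else 1.0


# Example: both pick up hypertension and a cough, but the primary/secondary
# order is swapped and the cough is coded at different levels of detail.
doctor = ["I10", "R05.9"]   # hypertension (primary), cough unspecified
llm    = ["R05.1", "I10"]   # acute cough (primary), hypertension
print(round(weighted_jaccard(doctor, llm), 2))  # 0.6 with these placeholder weights
```

A simple "primary diagnosis matches exactly" check would score this pair 0, which is the kind of gap between the naive measure and a weighted one described above.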

So when you see a paper that says "diagnoses match 90% of the time", it's not like "one doctor thinks it's Lupus and another an infection", it's more like "one doctor thinks it's hypertension and heart disease affecting the pregnancy and another thinks it's just the hypertension, but the patient also has heart disease".
 

The difference in safety between in-person visits and telehealth is apparently quite low as far as patient perception goes, but I haven't found (in a cursory search) a study focusing on diagnosis.

I wouldn't expect to see such yet. Broadly, diagnosis calls for information that cannot be gained by talking to you. Until we have a telehealth system that can do blood and urine analysis and do medical imaging, you won't see much telehealth diagnosis.

Telehealth is useful for continuing care once diagnosis has been made - you can often adjust and update continuing care instructions based on the patient's subjective reports.
 

Sure. I don't know exactly how telehealth systems work, but if they are a drop-in replacement for a general practitioner appointment, the GP just sends you to a laboratory for most exams (like blood and urine samples), gets the results and then discusses them with you. It is possible that some doctors (or some medical systems) have the GP do those tests, but that hasn't been my experience. The only tests a GP has ever done on me are listening with a stethoscope and measuring blood pressure. Still, requiring a lab appointment defeats the point of telehealth, but if it can treat the 90% of "little illnesses" like a cold or something, it might become helpful.
 

Still, requiring a lab appointment defeats the point of telehealth, but if it can treat the 90% of "little illnesses" like a cold or something, it might become helpful.
Ideally, what telehealth does is project training and knowledge into spaces far removed from it. If the top practitioners can consult with locals, that usually leads to better overall healthcare outcomes.

(Amusingly, some of the best results are seen in veterinary practice, where a vet in Wisconsin can contact a renowned specialist in herpetology over an exotic snake’s venom, or a zoo can find out why their binturong isn’t eating properly from an expert from their native habitat.)

But telehealth only goes so far. A lot of diagnoses are based on what a health care professional can see, hear, or smell, and computer tech can’t deliver that right now. Lab tests are more for confirmation than diagnosis in those cases.

For example, my Mom’s first appointment with a certain specialist resulted in a diagnosis within seconds of her initial visit, based on his experience seeing similar symptoms over decades. Certain afflictions cause distinctive respiratory effects, like Whooping Cough. Fruity breath can be an early indicator of diabetes, and many other conditions can have distinctive odors, like GI bleeds, a major urinary tract infection, and an infection in which necrosis or gangrene has set in.
 

If you call Health Connect Ontario, in Canada, the phone is answered by an RN. They give basic health advice and referral to a medical professional.
 
