(+) A.I. in general

I think his error was to use a general LLM (trained on a large variety of sources so it can sound statistically correct on most topics) rather than a dedicated model for law. Diagnostic AIs work better than an AI-less human, as evidenced earlier, but that doesn't mean you can chat with ChatGPT about how you feel and get a diagnosis.

Also (and this is a big one) medical assistance AIs are generally not just LLMs trained on medical texts.

"A.I." is a broad category. Generative AI is only corner of the overall space.
 


I think his error was to use a general LLM (trained on a large variety of sources so it can sound statistically correct on most topics) rather than a dedicated model for law. Diagnostic AIs work better than an AI-less human, as evidenced earlier, but that doesn't mean you can chat with ChatGPT about how you feel and get a diagnosis.

Generally-trained models are at the intern level at best: you need to re-read everything they produce and not rely on them for anything critical. But specialized AI models can do much better.

At some point general models will have enough data to get there, but I am not sure they'll be runnable on a home computer in the short run.

I've experimented with the "legal" AIs pushed by the two biggest players in the industry.

"Not good..." Is a massive understatement. They may get there, but right now? Not even close.
 

Also (and this is a big one) medical assistance AIs are generally not just LLMs trained on medical texts.
"A.I." is a broad category. Generative AI is only corner of the overall space.
Very much so. I am involved in a number of Generative AI projects in Healthcare (this is one of them), but most of the systems labeled as "AI" use rule-based models, neural nets, or other models that I now refer to as "traditional ML".

Part of my work is to review incoming technology for increased risk due to AI, and I nearly always have to contact the vendor directly to find out if their "AI powered models" actually use any real AI. For a while I was trying only to use the term "AI" for models that replicate or simulate human thinking, but that boat seems to have sailed. From now on, it's just a generic term for what used to be called a predictive model, a machine learning model, or just "a model".
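To make the distinction concrete, here's a minimal sketch of the kind of "traditional ML" predictive model that often ships under an "AI powered" label. The features, labels, and data are all made up for illustration; this isn't any real vendor's product or any of my projects.

```python
# Minimal sketch of a "traditional ML" predictive model, the sort of thing
# that frequently gets marketed as "AI powered". All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical features: [age (scaled), lab_value, prior_admissions]
X = rng.normal(size=(200, 3))
# Hypothetical label: 1 = readmitted within 30 days, 0 = not
y = (X @ np.array([0.8, 1.5, 0.3]) + rng.normal(scale=0.5, size=200) > 0).astype(int)

model = LogisticRegression().fit(X, y)      # "train the AI"
print(model.predict_proba(X[:5])[:, 1])     # risk scores for five patients
```

Nothing in there replicates or simulates thinking; it just fits weights to data, which is exactly why I end up asking vendors what's actually under the hood.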
 

Yes, there has been a tendency recently to label a lot of things as AI, including things that aren't even remotely close to AI, like expert systems using rule-based methodology. I don't know why (I mean, if they were speaking to investors, it would be to get more money from them, so why not), but it happens in presentations where there is very little to gain. Unless they think labelling something as AI helps sell something old as something new, maybe?
 

Yes, there has been a tendency recently to label a lot of things as AI, including things that aren't even remotely close to AI, like expert systems using rule-based methodology. I don't know why (I mean, if they were speaking to investors, it would be to get more money from them, so why not), but it happens in presentations where there is very little to gain. Unless they think labelling something as AI helps sell something old as something new, maybe?
AI is the new buzz term companies are hoping to profit from. My thought? If true AI were in all the things the marketers are telling us it's in? That would be kind of terrifying.

Of course, the truly terrifying thing for me currently is the sheer number of tasks people seem WILLING to push onto AI - when it is currently just so bad at most of them!
 

I'm dreading the new Amazon update to my Kindle Scribe. Apparently they now have A.I. with the Scribe and it's going to update the old Scribes to be able to use it.

Everyone seems so excited about it, but I dread it. All I can see is something that summarizes notes if you take them (I don't - I already have a tablet and a PC with a touch screen if I really want that stuff; I use the Kindle for reading, and the Scribe's bigger screen makes enlarging words easier and nicer on these old eyes) and that may use more battery power.

I want my battery to last longer...not have an energy sucking A.I. use it up faster. I hope they have a way to turn it off.
 

Yes, there has been a tendency recently to label a lot of things as AI, including things that aren't even remotely close to AI, like expert systems using rule-based methodology. I don't know why (I mean, if they were speaking to investors, it would be to get more money from them, so why not), but it happens in presentations where there is very little to gain. Unless they think labelling something as AI helps sell something old as something new, maybe?
Well, as far as I remember, I was taught that expert systems are a form of AI. Perhaps the oldest or simplest. I'm not going to say when, but it was not precisely yesterday.
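For anyone who never ran into one: here's a toy sketch of what a rule-based expert system does (forward chaining over hand-written rules). The rules and facts are invented for illustration; the real systems of that era had thousands of rules, but the mechanism was basically this.

```python
# Toy rule-based "expert system": classic symbolic AI, no learning involved.
# Rules and facts are invented for illustration only.
RULES = [
    ({"has_fever", "has_cough"}, "suspect_flu"),
    ({"suspect_flu", "short_of_breath"}, "recommend_doctor_visit"),
]

def forward_chain(facts):
    """Repeatedly fire any rule whose conditions are all satisfied."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"has_fever", "has_cough", "short_of_breath"}))
# adds 'suspect_flu' and then 'recommend_doctor_visit' to the fact base
```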
 

I just noticed that the latest version of iOS has an AI that tries to ID your photos when you call up their data. So far, it usually IDs my Mom and myself correctly, but our dogs - all border collies - not so much.

One gets misidentified as an Aussie 85% of the time, with most of the rest being border collie or Shetland sheepdog. To be fair, though, she’s a blue merle. That’s an uncommon color in BCs, much more commonly found in Aussies.

Her former housemate is getting 50/50 BC & Sheltie.

EXCEPT there’s one picture of the two of them standing close to each other at a 90deg angle…which the AI thinks is an Old English Sheepdog.😜

And THEIR predecessor? My favorite picture of her, the AI doesn’t even ID her as a dog…or anything else.🤦🏾‍♂️
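For what it's worth, the general shape of such a photo classifier looks something like the sketch below. This is obviously not Apple's actual on-device model, just a stock ImageNet-pretrained network with a placeholder file name, but it shows the idea: the model can only pick from the breeds (and coat colours) it saw in training, which would go a long way toward explaining why a blue merle keeps landing on "Aussie".

```python
# Illustrative sketch only: a generic image classifier, not Apple's model.
import torch
from PIL import Image
from torchvision.models import resnet50, ResNet50_Weights

weights = ResNet50_Weights.DEFAULT          # stock ImageNet-1k weights
model = resnet50(weights=weights).eval()
preprocess = weights.transforms()           # resize/crop/normalize the model expects

# "dog_photo.jpg" is a placeholder; point it at any local image.
img = preprocess(Image.open("dog_photo.jpg").convert("RGB")).unsqueeze(0)

with torch.no_grad():
    probs = model(img).softmax(dim=1)[0]

# Print the three most likely labels (ImageNet includes many dog breeds).
top = probs.topk(3)
for score, idx in zip(top.values, top.indices):
    print(f"{weights.meta['categories'][idx]}: {score:.1%}")
```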
 

TBH, I am not sure I am seeing a dog in this picture. With context, I guess you're taking a picture of your favorite animal, and I rule out any kind of new one-eared species just appearing on Earth, so I get that it's a dog, but if you told me it was a dhole or a raccoon dog, I'd probably trust you.

For those who might be lukewarm about AI image generation because of copyright issues, and who don't think the TDM exception found in European or Singaporean law is ethical enough, you'll be pleased to know that a model is currently being trained on 100% public domain images, either because copyright has lapsed or because the copyright owner voluntarily put the image under a licence allowing training. The training runs are still underway, but the results are already decent compared to the state-of-the-art model for image generation, Flux.

[Three attached sample images]
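Since the whole selling point of that project is the training data rather than the architecture, the interesting part is the licence filtering: only images whose copyright has lapsed or whose owner opted in get through. A rough sketch of that kind of filter is below; the field names, licence strings, and URLs are hypothetical, not the actual pipeline of the model mentioned above.

```python
# Hypothetical licence filter for assembling a copyright-clean training set.
ALLOWED_LICENSES = {"public-domain", "cc0", "cc-by"}  # explicit opt-in for training

def keep_for_training(record):
    """Keep an image record only if its licence permits training."""
    return record.get("license", "").lower() in ALLOWED_LICENSES

dataset = [
    {"url": "https://example.org/a.jpg", "license": "CC0"},
    {"url": "https://example.org/b.jpg", "license": "all-rights-reserved"},
]
training_set = [r for r in dataset if keep_for_training(r)]
print(training_set)  # only the CC0 image survives the filter
```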
 

