The NuTSR thread attests to this many, many times over. This is the "Geek Talk & Media" sub-forum.
1) genAI is geek stuff.
2) We have law geeks here, too.
No, dude, you do not, in fact, have to take crystal meth to do your job.
It feels to me like the people most qualified to do quality control would be the people who are actually qualified to do the work that the AI is purporting to replace!

It's not just law that's hallucinating. It's medical staff, scientists, engineers - you know, those fields that take years of study and seconds to sabotage with one prompt.
My whole company is at a crossroads on this stuff. Quality control on AI output has become the new demand. Only problem is quality control requires years of real world experience, and how are today’s graduates going to get that experience if we have AI do the grunt work? It’s a conundrum with no clear path forward.
Yeah. As a doctor I've had plenty of time for Dr Google (patients looking up symptoms and conditions on Google), because I certainly don't blame them for seeking more information or clarification. But now a small but significant chunk of my job involves explaining to patients that Dr Google hallucinates 20-40% of the time.
You only use something that is 80% effective in cases where being wrong 20% of the time doesn't hurt you (pulling the numbers out of my butt, but I believe I've heard similar in articles). In law, being wrong can result in incarceration or, at the very least, just plain losing. In medicine, being wrong can mean the ultimate price is paid. In finance, ruin.

A technology that multiplies one's output, even if it sometimes fails, will be terribly appealing to most, especially in countries where lawyers are expensive. When I read that people are bullied by companies' legal claims because they can't afford a lawyer, or renounce their rights because they can't afford to enter the judicial system to get them enforced, I am all in favour of any tool that diminishes the cost of a lawyer... Sure, the quirks need to be ironed out, as with any tool, and that process can happen as the tool is being adopted. Many people died in car crashes before we had seat belts and airbags, yet many people ditched their riding horses long before airbags became widespread.