Judge decides case based on AI-hallucinated case law



In the UK they are encouraging the use of AI in quasi-judicial decision making, and have just realised that it may work for both sides of an issue.


This is what happens when the self-absorbed are in charge. The issue isn't the use of LLMs as such, but rather the lack of empathy and care on the part of both users and designers.

However, as the hammer example demonstrates, people are quite good at understanding the intention of a constructed object, and using it appropriately. What is less common is the understanding of how an object can be misused or abused. That requires a kind of creative and counter-factual thought that it seems many people do not engage in.

I argue that LLMs are designed to suppress that kind of thought, because it is grounded in what actually is but has not been said before, and so is unlikely to emerge from the algorithm. The algorithm may produce texts that have not been said before, perhaps even many such texts, but by design it is unable to filter them by checking against what is.
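To make that concrete, here is a deliberately toy sketch (Python, invented for illustration; it is not how any real model is implemented, only the essence of the objection): the next word is chosen purely from co-occurrence statistics over prior text, and nothing in the loop ever consults the world.

```python
import random

# Toy "language model": pick the next word only by how often it followed
# the previous word in some training text. There is no step anywhere that
# checks the generated string against reality.
training_text = "the court cited the case the court dismissed the case".split()

# Count which word follows which in the training text.
follow_counts = {}
for prev, nxt in zip(training_text, training_text[1:]):
    follow_counts.setdefault(prev, []).append(nxt)

def generate(start, length=6):
    """Sample a continuation purely from co-occurrence statistics."""
    words = [start]
    for _ in range(length):
        options = follow_counts.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))  # plausibility only; no truth check
    return " ".join(words)

print(generate("the"))  # fluent-looking output, with no guarantee it refers to anything real
```

Real LLMs replace the word counts with a neural network and far more context, but the point stands: the scoring is over text given text, never over text given the world.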

This aspect of speech is neglected in the English language. There are languages that explicitly code evidentiary certainty in their grammar, but as LLMs currently stand, those grammatical constructions are meaningless in the texts they produce. The algorithm simply cannot do this, because it has no knowledge, nor any capacity to possess such knowledge.

When we understand this, it is easier to see that everything an LLM produces is meaningless outside of its own text, despite appearing to refer to things.

EDIT: typos
 

I wonder which jurisdictions are encouraging the use of AI for decision-making.

It also seems to me that the Objector mentioned in the Guardian article is running into the same issues with hallucinated cases. In England and Wales, any KC caught using a system like ChatGPT rather than doing their own research is liable to an investigation by the supervising bodies.
 



The poster spoke of the UK, not specifically the UK's courts, in which case he might be referring to the passing of the Data Use and Access Act 2025, which is touted as facilitating the adoption of automated decision-making (ADM).


But I might easily have missed a push for it by jurisdictions, though I don't think it has reached the "decision making" phase. In France, administrative courts are considering (meaning, beta-testing with interest, not committing to it with certainty) using AI to screen court applications for incomplete filings without a judge reading them in full, though without rejecting the application outright: the system flags "this one seems to be missing X" or "this one does not concern an administrative court" to increase productivity, not to replace the judge's assessment. The same goes for using dedicated AI to search databases of administrative precedents and judicial doctrine articles more quickly than with a search engine: the goal isn't to write decisions but to inform judges of relevant cases and extract them from law databases for further use.
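For what it's worth, the flag-but-don't-decide pattern described above is straightforward to express. Here is a hypothetical sketch in Python; every name in it (Filing, screen_filing, REQUIRED_DOCUMENTS) is invented for illustration and does not describe any real court system's software.

```python
from dataclasses import dataclass

# Hypothetical document checklist for an administrative-court application.
REQUIRED_DOCUMENTS = {"application form", "contested decision", "statement of grounds"}

@dataclass
class Filing:
    documents: set                        # names of documents actually attached
    concerns_administrative_court: bool   # does the matter belong here at all?

def screen_filing(filing):
    """Return human-readable flags for a judge to review; never a rejection."""
    flags = []
    missing = REQUIRED_DOCUMENTS - filing.documents
    if missing:
        flags.append("seems to be missing: " + ", ".join(sorted(missing)))
    if not filing.concerns_administrative_court:
        flags.append("may not concern an administrative court")
    return flags  # the decision to reject or proceed stays with the judge

print(screen_filing(Filing({"application form"}, True)))
```

The point of the design is that the output is only ever advisory text attached to the file, never a disposition of the case.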
 

I think the automated decision-making aspect is more about things like being refused a loan application, not anything to do with decisions made by judges.
 


Sure, I also don't see exactly what the "quasi-judicial decision-making" the OP mentioned refers to, except that such wording might exclude decisions made by judges.
 

