MOST folks don’t. Most.
And notice how folks don't mistake a hammer for an all-purpose tool?
I wonder which of the jurisdictions are encouraging the use of AI for decision making.
In the UK they are encouraging the use of AI in quasi-judicial decision making, and have just realised that it may work for both sides of an issue.
AI-powered nimbyism could grind UK planning system to a halt, experts warn
Tools that help people scan applications and find grounds for objection have potential to hit government’s housebuilding plans
www.theguardian.com
This is what happens when the self-absorbed are in charge. The issue isn't the use of LLMs as such, but rather the lack of empathy and care on the part of both users and designers.
However, as the hammer example demonstrates, people are quite good at understanding the intention behind a constructed object and using it appropriately. What is less common is an understanding of how an object can be misused or abused. That requires a kind of creative, counterfactual thought that, it seems, many people do not engage in.
I argue that LLMs are designed to suppress that kind of thought, because such thought is connected to what actually is, yet has not been said before, and so is unlikely to emerge from the algorithm. The algorithm may produce texts that have not been said before, perhaps even many such texts, but by design it is unable to filter them by checking against what is.
This aspect of speech is neglected in the English language. There are languages that explicitly encode evidential certainty in their grammar, but as LLMs currently stand, those grammatical constructions are meaningless in the texts they produce. The algorithm simply cannot do this, because it has no knowledge, nor any capacity to possess such knowledge.
When we understand this, it is easier to understand that all of what an LLM produces is meaningless outside of its own text, despite appearing to refer to things.
EDIT: typos
You wouldn't know them, they go to a different school.
I wonder which of the jurisdictions are encouraging the use of AI for decision making.
I think the Automated decision-making aspect is more for things like being refused a loan application or something like that, not anything to do with decisions made by judges.
The poster spoke of the UK, not specifically the UK's courts. In that case, he might be referring to the passing of the Data Use and Access Act 2025, which is touted to facilitate the adoption of automated decision-making (ADM).
The Data Use and Access Act 2025 (DUAA) - what does it mean for organisations?
This summarises the changes the DUAA makes to data protection law that may affect you if you’re an organisation using personal information.
ico.org.uk
But I might easily have missed a push for it by jurisdictions, though I don't think it's reached the "decision making" phase. In France, administrative courts are considering (meaning, beta-testing) using AI to screen court applications for incomplete filings without a judge reading them in full, but without rejecting the application outright: the system flags "this one seems to be missing X" or "this one does not concern an administrative court" to increase productivity, not to replace the judge's assessment. The same goes for using a dedicated AI to search databases of administrative precedents and articles of judicial doctrine faster than a search engine would: the goal isn't to write decisions, but to inform the judges about relevant cases and extract them from law databases for further use.