In the UK, the government has been encouraging the use of AI in quasi-judicial decision-making, and has just realised that it may work for both sides of an issue.
"Tools that help people scan applications and find grounds for objection have potential to hit government's housebuilding plans" (www.theguardian.com)
This is what happens when the self-absorbed are in charge. The issue isn't the use of LLMs as such, but rather the lack of empathy and care on the part of both users and designers.
However, as the hammer example demonstrates, people are quite good at understanding the intention behind a constructed object and using it appropriately. What is less common is an understanding of how an object can be misused or abused. That requires a kind of creative, counterfactual thought that many people, it seems, do not engage in.
I argue that LLMs are designed to suppress that kind of thought: it is grounded in what actually is, yet it has not been said before, and so is unlikely to emerge from the algorithm. The algorithm may produce texts that have not been said before, perhaps even many such texts, but by design it cannot filter them by checking them against what is.
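To make this concrete, here is a minimal sketch of what autoregressive generation amounts to (in Python, using a made-up toy probability table rather than any real model or API): the next token is sampled from a distribution learned from prior text, and nowhere in the loop is there a step that checks the emitted sentence against the world.

```python
import random

# Toy next-token table standing in for a trained model's learned
# distribution (hypothetical numbers, purely for illustration).
NEXT_TOKEN_PROBS = {
    "the cat sat on the": {"mat": 0.6, "moon": 0.3, "evidence": 0.1},
}

def sample_next_token(context: str) -> str:
    """Sample the next token from the learned distribution.

    Note what is absent: no lookup of facts, no test of whether the
    resulting sentence corresponds to anything that is actually the
    case. Only the statistics of prior text matter.
    """
    probs = NEXT_TOKEN_PROBS.get(context, {"<unk>": 1.0})
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

def generate(prompt: str, n_tokens: int = 1) -> str:
    text = prompt
    for _ in range(n_tokens):
        text += " " + sample_next_token(text)
    return text

print(generate("the cat sat on the"))
```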
This aspect of speech is neglected in the English language. There are languages, Turkish and Quechua among them, whose grammar explicitly encodes evidentiality: the speaker's source of knowledge and degree of certainty. But as LLMs currently stand, those grammatical constructions are meaningless in the texts they produce. The algorithm simply cannot honour them, because it has no knowledge, nor any capacity to possess such knowledge.
Once we understand this, it is easier to see that everything an LLM produces is meaningless outside of its own text, despite appearing to refer to things.
EDIT: typos