trappedslider
Reminded me of this article: "MIT researchers use large language models to flag problems in complex systems." Not here, thankfully.
No, those LLMs aren't actually thinking. They're not minds. They just predict the next word from a probability distribution over their vocabulary. Nothing more.
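To make that "word based on the percentages" point concrete, here's a toy sketch of next-token sampling. The vocabulary and probabilities are invented for illustration, not taken from any real model:

```python
import random

# Toy illustration of next-token sampling: a model assigns a probability
# to each candidate token, then one token is drawn at random according
# to those weights. These tokens and probabilities are made up.
next_token_probs = {
    "systems": 0.42,
    "problems": 0.31,
    "models": 0.18,
    "banana": 0.09,
}

tokens = list(next_token_probs)
weights = list(next_token_probs.values())

# random.choices draws one token, weighted by the model's probabilities.
chosen = random.choices(tokens, weights=weights, k=1)[0]
print(chosen)
```

Run it a few times and you'll usually see "systems", occasionally "banana": weighted dice, not deliberation.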