Search results

  1. The Firebird

    Judge decides case based on AI-hallucinated case law

    In the same post you call the person "dangerously stupid". What do you think condescending means? There isn't a clear line--on this side we believe in rationality, reason, and science, and over there they don't. There is a continuum. And when people who are somewhere between your views and the...
  2. The Firebird

    Judge decides case based on AI-hallucinated case law

    I'm imagining a case where it has been detected and we all know it's wrong, but the legal environment mandates that the LLMs lead to this false information. It can be both. If someone refuses advice about treatments that will help them because they don't trust you, it is satisfying to say "that...
  3. The Firebird

    Judge decides case based on AI-hallucinated case law

    So, there's a narrow response to this and a broader one. Narrow first. Suppose there is a blanket ban on medical information (what I think you are suggesting). Does this encompass asking about supplements? How to make a healthy dinner? Can you replace butter with olive oil? Or generate a workout...
  4. The Firebird

    Judge decides case based on AI-hallucinated case law

    You could perhaps ban it for practitioners with the threat of legal consequences. But how would that work for private citizens? And with search and LLMs converging, will practitioners be able to use one but not the other? Google already gives AI summaries for many questions. Illegal?
  5. The Firebird

    Judge decides case based on AI-hallucinated case law

    Well, yeah. This is exactly what I had in mind when I warned about giving government the power to decide what LLMs can comment on.
  6. The Firebird

    Judge decides case based on AI-hallucinated case law

    I think the risk with LLMs is that they actively reinforce what the user brings to them. If the user wants to have their own views and biases flattered (we all do) and the LLM can give them factual information which does so, it gives people a greater ability than before to lock themselves into a...
  7. The Firebird

    Judge decides case based on AI-hallucinated case law

    Then perhaps we should call it there. I respect your posts and your thinking about this. But the distance between our positions would probably require a long thread, and one close to politics, to appreciate. So I will avoid it.
  8. The Firebird

    Judge decides case based on AI-hallucinated case law

    Agree. Not aware of any statistics about usage. But I think the same argument applies--people rely on these tools to provide factually accurate information. It may be product recommendations, learning how to change a tire, getting context on social media posts. This requires a level of fidelity...
  9. The Firebird

    Judge decides case based on AI-hallucinated case law

    I disagree. At least in medicine, bias can be studied using only output. E.g., do patients with characteristic X have outcome Y more or less frequently than expected? With the LLMs, statistical analysis should be easier because the data will be cleaner (easier to read via computer). So if...
  10. The Firebird

    Judge decides case based on AI-hallucinated case law

    Agree that some news orgs have not done their diligence on some stories. I disagree that this hasn't had an impact. I think the reasonable read on this scenario is to say "people will not trust anyone based on authority", not "people are going to blindly trust LLMs".
  11. The Firebird

    Judge decides case based on AI-hallucinated case law

    Aye. I'll add the replication crisis in scientific publishing, not to mention the amount of outright fraud.
  12. The Firebird

    Judge decides case based on AI-hallucinated case law

    I don't think I have much to add that hasn't been stated already. Just in brief: Agree w/r/t your comments on news media. I do not believe LLMs are categorically different. Disagree. There were some good posts previously about adoption of/skepticism toward new technology. I'll bet MechaHitler helps in...
  13. The Firebird

    Judge decides case based on AI-hallucinated case law

    That was not what I said.
  14. The Firebird

    Judge decides case based on AI-hallucinated case law

    "Recent case" refers to the Grok meltdown, not the legal case.
  15. The Firebird

    Judge decides case based on AI-hallucinated case law

    This doesn't follow. Grok being a black box did not stop us from detecting its bias. If the output is biased, we can observe it. If it's not observable, then it's not an issue. (But of course it will be biased, because literally everything is biased.) And this is too cute. It relies on the...
  16. The Firebird

    Judge decides case based on AI-hallucinated case law

    Or major media organizations. I think the search for a "neutral arbiter of information" is something of a farce.
  17. The Firebird

    Judge decides case based on AI-hallucinated case law

    I think this is reasonable so long as the assumptions hold.
  18. The Firebird

    Judge decides case based on AI-hallucinated case law

    Do you have a particular metric in mind? Not sure this is something we can quantify. Generally I think it is ok for books to print things that are false, for internet sites to post things that are false, for people to tell each other things that are false. I don't see a categorical difference...
  19. The Firebird

    Judge decides case based on AI-hallucinated case law

    The existence of hallucinations is not surprising at this point ... nor is the fact that people are uncritically trusting it. We saw the same with adoption of the internet. I don't see why it rises to a level where something must be done about the technology, in this case.