But we weren’t talking about search engines. We were talking about a forum, full of humans (and some bots now I guess), where Johnny can post the question “How far can a dwarf in platemail who is carrying a heavy load and has Fly, Haste, and Expeditious Retreat cast on him - move in one round when it’s the full moon in the house of Atrius?” And other humans have to see that post. And some artificial unintelligence jumps in to “summarize” previous arguments about it.
And this despite there being a FAQ pinned to the top of the forum that reads: “1. Dwarf movement in platemail (etc.)”
Yeah, Anon seems to be coming at this from a "technical documentation" standpoint. AI (and this was true even
before "generative" AI, note) is pretty good at searching technical documentation or legal documents and digging out obscure stuff that isn't hallucination or misinformation. You always have to check the source, though, because sometimes it is just making things up - but with technical documentation that's usually easy to do: once you've found the function it's pointing you at, you can quickly tell whether it's hogwash or not.
That's not what's useful for TTRPGs, though. The rules themselves are usually easy to find and well-known, but rules discussions are often quite nuanced, and the rules-correct argument and the sanest argument are NOT the ones repeated most - in part because once someone gets it right, people tend to stop arguing.
So you'll often see a dozen nitwits repeat ill-informed misunderstandings of the rules, loudly and at length, using each other as backup, and then someone will come in with the right interpretation, people will barely respond, and the nitwits will typically just stop posting.
Gen AI, because it can't think at all and just mindlessly produces text based on how popular/repeated that text is (to put it very simply), can't cope well with this. Instead it'll often take the repeated nitwit position as gospel. Or worse still, it'll combine elements of the correct position and a terrible idiot version of it, sometimes nonsensically.
Or maybe no one has really answered the issue, in which case Gen AI tends to just make something up based on similar-seeming discussions - and usually doesn't even tell you that's what it did.
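If you want a cartoonishly reductive picture of that failure mode, imagine the model as nothing more than a popularity counter over everything it scraped from the thread. (This is a deliberate toy sketch with made-up strings - real LLMs are vastly more complicated - but it shows why twelve loud wrong posts beat one quiet correct one.)

```python
from collections import Counter

def popularity_answer(corpus):
    """Return whichever answer is repeated most often, ignoring correctness."""
    return Counter(corpus).most_common(1)[0][0]

# Hypothetical forum corpus: the misreading gets posted a dozen times,
# the correct ruling gets posted once and the argument stops.
posts = ["nitwit take, repeated loudly"] * 12 + ["quiet correct ruling"]

print(popularity_answer(posts))  # the nitwit take wins
```

A system like this has no notion that the one-off reply ended the argument *because* it was right; it only sees that the other phrasing shows up twelve times as often.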