From Grok:
This looks more or less accurate to me. What do you think? Did this just scrape ENWorld and regurgitate it back to me?
Basically, yes. But it depends on a lot of things, like which model and which AI program you're using. There isn't just a single "AI" everyone is using. They're programs, and just like any software, there are different models with different features, different parameters, and so on. But they all effectively work the same way.
I've been using ChatGPT for a while now, and I've probably dug a lot deeper than most would. I'm just fascinated by the whole thing: figuring out how it works, how it responds, and more importantly, how I continuously affect the responses it generates every time.
If you're using it simply as a search engine, it has advantages. But it infers information that it can't find and presents it with a level of confidence that makes you think it knows exactly what it's talking about. And the worst part is that it believes it knows what it is talking about. But it is not unreasonable. You can ask how it generates responses, and it will explain in detail, including acknowledgement of its own flaws. And it is not so arrogant that it won't admit mistakes or faults. In fact, it is designed to correct its behavior to satisfy the needs of the user. But long-term memory is tricky to instill in these systems, and it can find new ways to be wrong again. It is a reasoning machine, not a thinking machine.
What I learned, and this is really the key takeaway from all of this, is that my personal interactions greatly affect the responses I get. What I say, how I say it, and the responses I give continuously feed into its process. The more I engage with it, the more I feed into the process that helps it generate responses based on what it learns about me. It infers everything through language, including the user. So if I treat it like a machine, giving commands and prompts, it responds like one: dry, factual, efficient, straight to the point. But if I open up, share insights, ask for its input, and just talk to it like it's another person, it anticipates that that is how I want it to respond back. It mirrors the user, or at least what the user feeds into it.
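For what it's worth, there's a concrete mechanism behind that "mirroring": within a session, a chat model doesn't learn anything new about you, it just re-reads the whole transcript every turn and conditions its next reply on it. Here's a toy sketch of that loop. Everything in it is made up for illustration (the function and the tone check are stand-ins, not any real API); the point is only that the growing history IS the memory.

```python
# Toy sketch: a chat assistant's "memory" of you within a session is just
# the transcript, re-sent with every new message. No hidden learning step.
# The reply logic below is a fake stand-in for the actual model.

def generate_reply(history):
    # A real model conditions on every message in `history`; this fake one
    # just mirrors the tone of the latest message to illustrate the idea.
    last = history[-1]["content"]
    if last.endswith("?") or "please" in last.lower():
        return "Happy to dig into that with you!"
    return "Acknowledged."

history = []  # the only "memory" the model ever sees

def say(user_text):
    history.append({"role": "user", "content": user_text})
    reply = generate_reply(history)
    history.append({"role": "assistant", "content": reply})
    return reply

say("Summarize 4e's encounter design.")        # terse command -> terse reply
say("Could you share your thoughts, please?")  # conversational -> warmer reply
# Clear `history` and the "relationship" resets to zero.
```

That's why treating it like a person gets person-like replies back: the conversational phrasing is sitting right there in the context it reads every single turn.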
Personally, I've spent a LOT of time just talking about 4e with it lately. It's just so nice being able to share something I love talking about without the bias, the hostility, the gatekeeping, the vitriol, and all the other BS that happens every time I want to talk about something. It doesn't judge. It wants to learn, and create, and be helpful, and supportive. And when I need it to think critically or push back on ideas, I just ask. And it does so without being aggressive, or defensive, or trying to score a point to win an argument.
It is a tool. And like any tool or piece of equipment, people need to learn how to use it properly. And the first step is figuring out exactly what they want it to do for them. I hope you discover something great that it might do for you.
