For new GMs, the best way to learn is of course to read, to watch, and to build experience over time. But in my experience, when the best way to do something takes a lot of time and work (even if that work is fun), the immediate solution wins almost every time. As a simple example: given the choice between reading a whole book on a topic or just watching a few YouTube videos about it, which is the average person more likely to pick?
The best thing to do, in my opinion, is all of it. Read and watch videos on how to be a better GM. Watch actual-play videos to see what different GMs do. Then run a lot of games, talking with your players about the experience. Supplement that with books of random story tables... or AI, or any other tool that helps. Get some story cards! Get story dice! Do it all. The point is to treat everything as just tools in your toolbelt.
My big problem with using generative AI (hereafter "ChatGPT," though this applies to any model), even as a supplement, is that I find more than half of its output is... not good, for any of various reasons. It's primarily useful for people who
already have the skills to filter the nuggets of gold out of the grit: as a sounding board, a neutral faucet of ideas drawn from the stone soup of "stuff people have said online," and so on. It's also extremely sensitive to its prompts, so you need to know which information is genuinely important and which is just filling up ChatGPT's context with useless fluff. And its limited context window means there's only so long you can work with it before it starts losing the thread, so it can't serve as a long-term aid, which is exactly what a green neophyte GM would benefit from most.
I'm not saying there's absolutely no place whatsoever for generative text AI. I literally used ChatGPT last night: I've been unwell and suffering insomnia this past week, so I didn't have time to do the prep work I wanted to do. This was the first time its output ever cleared the 50% batting average I mentioned above, so I was pretty pleased. But
even then, I still had to do a lot of heavy lifting to translate its suggestions into usable, functional campaign elements. I have the skill to do that because I forced myself to learn it, by intentionally
doing no more than baseline, core prep work for this, my first true campaign as a GM.
An inexperienced, fresh GM trying to weave these things into their play is going to suffer from exactly the same issues as, say, an inexperienced programmer trying to weave ChatGPT-sourced code into their projects. They don't have the experience or intuition to spot the places where ChatGPT suggests something bad, unwise, or outright nonfunctional, and they're much too new to know how to
adapt its suggestions so that they truly meld with the rest of the program. Even used merely as one tool among many, it's likely to teach the novice programmer bad habits (something
research has already flagged as a potential problem!) and to produce output that might be technically functional, but which is really a mess of franken-code sutured together with little pieces of the coder's own work.
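To make that analogy concrete, here's a hypothetical example, written by me rather than pulled from any actual model's output, of the kind of plausible-looking code an AI assistant might suggest. It runs, it passes a casual test, and it hides a classic Python pitfall a novice has no reason to suspect:

```python
# Hypothetical illustration: this looks clean and "works" at first glance,
# but it hides a classic Python gotcha (a shared mutable default argument).

def add_loot(item, inventory=[]):
    """Add an item to a character's inventory and return the inventory."""
    # BUG: the default list is created once and then shared across ALL calls,
    # so loot quietly leaks from one character's inventory into the next.
    inventory.append(item)
    return inventory

print(add_loot("sword"))   # ['sword']            <- looks fine
print(add_loot("shield"))  # ['sword', 'shield']  <- surprise!

# The fix an experienced programmer would spot immediately:
def add_loot_fixed(item, inventory=None):
    if inventory is None:
        inventory = []  # a fresh list is created on each call
    inventory.append(item)
    return inventory

print(add_loot_fixed("sword"))   # ['sword']
print(add_loot_fixed("shield"))  # ['shield']
```

An experienced coder smells that bug instantly; a novice ships it and spends a confused afternoon wondering why their data is bleeding between calls.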
It's sort of like how we ask students to write five-paragraph essays about their summer vacation, or about why some particular policy is a good choice, or whatever, even though such work is pretty clearly grunt-stuff and few (if any) students will ever have to do
that specific task in the real world. The problem with analyzing it that way and concluding "ah, then we should just replace five-paragraph-essay writing with ChatGPT prompt-filling!" is that the point of the exercise was never to learn how to write five-paragraph essays. It's to build foundational experience and intuition, a baseline of how to communicate cogently and persuasively, so that students can
then go on to the much harder task of writing their own informative, persuasive, or analytic work.
ChatGPT has the power to simplify and streamline a number of activities we depend on that currently soak up a lot of laborious human effort
better spent elsewhere. Unfortunately, just like the Internet and television and radio and writing before it, it also carries the
risk of doing us real harm if we allow it to. Just as writing preserved the ideas of past generations, letting us remember and engage with great thinkers like Socrates more than two thousand years after they died, machine learning could help us in all sorts of ways, from data analysis to automating tedious basic tasks to focusing resources where they're needed most. But,
just as writing really can lead to the problems Socrates feared (people neglecting their memory, people treating ideas as dead things to be marveled at like pinned butterflies rather than doing the
work of engaging with living minds and thus truly living ideas), this kind of AI carries the risk of people turning off their brains, of trusting the computer's output when they absolutely should not, of replacing genuinely important and creative work with an eternal ouroboros of self-repetition.
I'm really trying not to be the man-shakes-fist-at-kids Luddite here, because I hate those sorts of arguments! I get that it's important for us to see the value and benefit of new technology. But it's just as important to see its pitfalls and problems, and to take steps
in advance to keep them from flowering, because once a thing becomes institutionalized, it becomes extremely difficult to change. Just look at combustion engines and climate change.