Where an AI may be worse than a random generator is in compliance with rules. AIs are good at approximations but bad at producing exact numbers. So if you ask an AI to stat a monster, it will pull numbers out of thin air to produce something that resembles what people say about that type of monster, including Type, CR, attacks, and so on. This works pretty well with 5E's freeform monster generation. It works much less well in a more tightly bounded system like the point-buy in Champions.
This is a very important point. I've used ChatGPT for some game prep, and it cannot consistently follow the rules of games it has access to, even with further definition from me over many prompts and exercises. By trying to pull from so many different sources -- many of which are pretty good sources, btw -- AI utterly fails at the "art" of building stats, and often (though not always) fails even at the "science" portion of it.
As for random generators: for just generating content it's pretty okay, but often repetitive. If you're just looking for lists of stuff, or specific attributes/traits (in the non-mechanical sense: adjectives, personality, backstory, appearance, sensory info) of a person or place, it'll probably serve you well, and it can even come up with components of story material. But it'll repeat info, or repeat similar info: ask for a list of names and it'll often give you names with very similar constructions, such as every D&D NPC name ending in the famous "two words mashed together" construction like Lightborn, Hammerforge, Mistshroud, and the like. Over and over, ad nauseam. And often even after you ask it not to do that.
They'll get better at that part, which is nice. However, they'll still fail miserably at the moral (and perhaps ethical) side of things for a while longer, so I think purpose-built random generators remain much, much better on so many levels for RPGs that, when in doubt, you might as well just stick with those.