A bit of both I think. Especially when it comes to local installs of AI art tools.
Bing and ChatGPT have content filters in place to moderate what they deliver and to keep results within bounds.
These content filters only matter when you prompt for something that would trigger them; they don't refuse to produce an image outright. Their not-so-secret sauce is that they rewrite the prompt you type into the flowery prose the image-making model was trained on, adding a lot of detail where there is none. Among those details are often descriptions of clothing.
In a local install, you can get the same results if you actually describe what the person is wearing in your prompt. If you just prompt "a woman wearing plate armour", the model defaults to whatever it associates with a woman in plate armour, which may be the raw base material. Describing only some elements of clothing can go either way. I once mentioned a steampunk inventor wearing overalls, and the end result was a character... wearing only overalls. That isn't what I had in mind, but it was exactly what I asked for, so I can't blame the AI. If I ask with a more complete description, I tend to get images like those posted above to illustrate ChatGPT generations.
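The idea above can be sketched in code: hosted services expand a terse prompt into detailed prose before it reaches the image model, and locally you can do the same by hand. This is a minimal, hypothetical illustration of that prompt-expansion step; the function name and descriptor lists are my own, not how Bing or ChatGPT actually rewrite prompts.

```python
def expand_prompt(subject, clothing, extras=None):
    """Build a detailed prompt by spelling out clothing and style details,
    instead of leaving them to the model's defaults."""
    parts = [subject]
    if clothing:
        parts.append("wearing " + ", ".join(clothing))
    if extras:
        parts.extend(extras)
    return ", ".join(parts)

# Terse prompt: the model fills the gaps with whatever it associates
# with the subject, which may not be what you want.
terse = "a woman in plate armour"

# Detailed prompt: the full outfit is described explicitly.
detailed = expand_prompt(
    "a woman in plate armour",
    clothing=["a padded gambeson underneath", "leather gloves", "a wool cloak"],
    extras=["standing in a castle courtyard", "oil painting style"],
)
print(detailed)
```

Feeding `detailed` rather than `terse` to a local model is the manual equivalent of the rewriting those services do automatically.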
Over in the Bing and ChatGPT world, they've got very strong filters, so you at least won't stray into the 'weird' unless that was your goal.
And even if it is your goal, because they label mild elements as 'weird'. Half the models out there can't make a decent rendering of Michelangelo's David.