So yesterday I wanted to make a pic for a token for an upcoming one-shot.
I fed the same prompt into ChatGPT, Microsoft Designer, and a local copy of Stable Diffusion using the 'DynaVision XL' SDXL checkpoint (essentially the base SDXL model).
Got three extremely different results:
ChatGPT: [attachment 403467]
Microsoft Designer: [attachment 403468]
Local Stable Diffusion: [attachment 403469]
The prompt I used:
"A vibrant 1970s fantasy style RPG illustration of a freckled dark brown-skinned dwarf woman, age 29, in a fullbody shot. She wears a platemail breastplate, and her black wavy hair is tied into a ponytail, adding to her fierce and majestic appearance. She holds a giant fantasy warhammer in both hands, standing confidently in a bustling underground dwarven cavern fortress. The cobblestone streets are lined with lively market stalls, filled with various goods from weapons to food. The underground town is filled with dwarves in medieval merchant attire, engaging in various activities, creating a lively and dynamic atmosphere. The lighting is moody, casting a deep tone over the scene. The dwarves are seen bartering and socializing, with the overall atmosphere being one of camaraderie and industriousness. The camera captures her full body."
I've been trying to reproduce the Microsoft Designer style in my local Stable Diffusion, and so far I haven't found a checkpoint that comes close. Designer uses DALL-E 3, which I suspect Stable Diffusion simply can't mimic.
Microsoft Designer caps you at something like 15 images per month (only a little more if you pay) and lacks the editing tools of Stable Diffusion. That last Stable Diffusion image took real work: I hand-drew a new hammer in GIMP (the open-source alternative to Photoshop), made other touch-ups, and ran multiple rounds of upscaling, going back and forth between my own hand-drawn edits and the AI tool.
ChatGPT really wants to do Ghibli style; what it gave me here is what I get when I steer it away from Ghibli. It also sat in 'thinking' mode for an hour. On the plus side, I could download the results, edit them in GIMP, Paint, or Illustrator, and hand them back to ChatGPT, though I've found it has a pretty tight daily limit too. Stable Diffusion runs offline on my own machine, so I can make hundreds of round trips a day between it and my own drawing tools (the img2img loop sketched below).
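That back-and-forth loop is basically img2img: you feed the hand-edited image back in at a low denoising strength so the model re-blends your drawn changes without repainting everything. A sketch under the same assumptions as above (diffusers, placeholder paths):

```python
# Sketch of the edit-and-regenerate loop: img2img over a hand-edited image.
# Same assumptions as the earlier snippet; file paths are placeholders.
import torch
from PIL import Image
from diffusers import StableDiffusionXLImg2ImgPipeline

pipe = StableDiffusionXLImg2ImgPipeline.from_single_file(
    "DynaVisionXL.safetensors",  # hypothetical local path
    torch_dtype=torch.float16,
).to("cuda")

# e.g. the token after redrawing the hammer in GIMP
init = Image.open("token_hand_edited.png").convert("RGB")

result = pipe(
    prompt="dwarf woman holding a giant fantasy warhammer, platemail breastplate",
    image=init,
    strength=0.35,       # low strength: keep the hand-drawn shapes, just re-blend them
    guidance_scale=7.0,
).images[0]
result.save("token_next_pass.png")
```

The strength value is the knob that matters: near 1.0 it behaves like fresh text-to-image, near 0 it barely touches your input.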