How did you get the AI to regenerate just the face? Or is this copypasting via Photoshop?
I used Stable Diffusion. It has a mode called "inpainting" where you select an area, and it adds some amount of noise to it and regenerates that region from the noise. The more noise, the more different the result (in these images, the last one used the least noise). Sometimes the result doesn't "blend" with the rest of the image, but for small modifications like fixing a face or removing an out-of-place object (like a jukebox in a medieval tavern) it works quite well.
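To illustrate the intuition (this is just a toy NumPy sketch of the noise-mixing idea, not the actual SD pipeline, which works on latents and runs a denoising model afterwards; the function name and `strength` parameter are my own):

```python
import numpy as np

def noise_masked_region(image, mask, strength, rng=None):
    """Mix noise into the masked area only; strength in [0, 1].

    strength=0 leaves the region untouched, strength=1 replaces it
    with pure noise -- so higher strength means the regenerated
    content can drift further from the original image.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    noise = rng.standard_normal(image.shape)
    # Blend original pixels and noise only where mask == 1.
    noised = (1 - strength) * image + strength * noise
    return np.where(mask[..., None] == 1, noised, image)

# Toy 4x4 RGB image with a 2x2 masked patch in the middle.
img = np.ones((4, 4, 3))
mask = np.zeros((4, 4), dtype=int)
mask[1:3, 1:3] = 1

out = noise_masked_region(img, mask, strength=0.8)
```

Everything outside the selection stays identical, which is why inpainting only touches what you select; the `strength` knob is the "quantity of noise" I mentioned above.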
I "feel" that DALL-E is best at generating something that respects your prompt, ahead of MJ and then Stable Diffusion, while raw image quality is best with SD, then DALL-E, then MJ (but MJ's current version is the oldest, and a new one will certainly be on top again). There is no scientific basis for this, just a feeling. So the best workflow for me so far is to generate a "quick and dirty" image with DALL-E and fix the small details with SD.