Except that kinda defeats the point: if I already had an image of a silver dragon the odds are high I wouldn't need another one.
It's things for which there are no example images (e.g. two teams of mostly Dwarves playing hockey, with the teams' uniforms in specific colours) where this AI art could be fun and/or useful, and I'd like to think it's smart enough to interpret the prompts I type in rather than having to rely on AI to generate those prompts for me.
From what y'all have been saying and displaying, it seems it still has a way to go on interpreting prompts, which is fair enough as this is all still in its relative infancy.
We are light years ahead of where we were when the first DALL-E came out, and even DALL-E 2 looks almost like cave drawings compared with what we are doing now. We are already seeing improvements because, unless you turn off data sharing in GPT-4, it learns from your prompts and your evaluations of the output (that is what the thumbs-up and thumbs-down icons are for) as more people work with it. The most obvious change in the past few weeks is fingers; it is doing a much, much better job with fingers... Eyes are still a work in progress.
BTW, I just saw a post from someone who is using GPT+ like I am: they are slowly rolling out a more integrated version of GPT-4 to us paid users (I have not seen this version yet) that can read uploaded PDFs and analyze them for you, and will contextually use Search with Bing, DALL-E 3, or Advanced Data Analysis based on what you are asking it to do. She showed an example of uploading a photo and creating an image based upon it, and then uploading a different image to add to it...
LINK: @luokai on Threads (her post includes example images)

I think this is going to make GPT+ a killer app that is worth the price of admission. Right now I can enable Search with Bing, DALL-E 3, or Advanced Data Analysis, but not all at the same time. Hopefully I will get this new, improved version in the coming week or two and I can report back on how it is working.
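If you don't want to wait for the rollout, you can approximate the upload-a-photo-then-build-on-it workflow from the linked post against the API today: ask the vision model to describe the photo, then hand that description (plus your tweaks) to DALL-E 3. This is only a rough sketch under my own assumptions; the model names ("gpt-4-vision-preview", "dall-e-3") are what the API exposes to me right now, not necessarily what ChatGPT uses internally, and the file name is made up.

```python
# Sketch: describe an uploaded photo with the vision model, then
# regenerate a new image from that description with DALL-E 3.
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def describe_photo(path: str) -> str:
    """Ask the vision model for a detailed description of a local photo."""
    with open(path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode("utf-8")
    resp = client.chat.completions.create(
        model="gpt-4-vision-preview",  # assumption: current vision model name
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Describe this image in enough detail to recreate it."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
            ],
        }],
        max_tokens=300,
    )
    return resp.choices[0].message.content

def regenerate(description: str, extra: str = "") -> str:
    """Feed the description plus any tweaks to DALL-E 3; return the image URL."""
    result = client.images.generate(
        model="dall-e-3",
        prompt=f"{description}\n{extra}",
        size="1024x1024",
        n=1,
    )
    return result.data[0].url

# "my_photo.jpg" is a placeholder for whatever you uploaded.
print(regenerate(describe_photo("my_photo.jpg"), "Add a second figure on the left."))
```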
I will say, I am happy with the capability I have now for images, and I am just beginning to explore the other features. One I am very interested in trying out is the integrated Vision in default GPT-4, because I am barely tapping its capabilities; for instance, I can take a photo of my refrigerator contents with my phone and have Vision catalog it and suggest recipes to use what is in there.
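The fridge idea also works as a single vision call through the API, if you'd rather script it than use the app. Again just a sketch under my assumptions: the "gpt-4-vision-preview" model name and the "fridge.jpg" file are mine, not anything official.

```python
# Sketch: one vision request that catalogs the fridge photo and suggests recipes.
import base64
from openai import OpenAI

client = OpenAI()

with open("fridge.jpg", "rb") as f:  # placeholder file name
    b64 = base64.b64encode(f.read()).decode("utf-8")

resp = client.chat.completions.create(
    model="gpt-4-vision-preview",  # assumption: current vision model name
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": ("List every food item you can identify in this photo, "
                      "then suggest three recipes that use mostly those items.")},
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
        ],
    }],
    max_tokens=600,
)
print(resp.choices[0].message.content)
```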
I am not sure which beta plugin does it yet, but you can have GPT evaluate your website or WordPress blog, and it will help you streamline the design, make it more visible to search engines with better metadata, and suggest all kinds of improvements that would otherwise require a lot of learning or a lot of money paid to a web dev.
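Plugin aside, a bare-bones version of that review is just "give GPT-4 the page's HTML and ask for a critique." Here is a minimal sketch of that; the URL is a placeholder, the crude truncation is only to keep the prompt within the context window, and none of this reflects whatever the actual plugin does.

```python
# Sketch: fetch a page's HTML and ask GPT-4 for design/metadata/SEO suggestions.
import requests
from openai import OpenAI

client = OpenAI()

# Placeholder URL; [:15000] is a crude trim to stay within the context window.
html = requests.get("https://example-blog.example", timeout=30).text[:15000]

resp = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system",
         "content": "You are a web consultant reviewing a site's HTML."},
        {"role": "user",
         "content": ("Review this page. Suggest improvements to layout, "
                     "missing or weak metadata (title, description, Open Graph), "
                     "and anything that would help search visibility:\n\n" + html)},
    ],
)
print(resp.choices[0].message.content)
```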