No telling what shenanigans they have around Dall-E.
I stopped playing with it some time ago now, but it very clearly knows exactly what IP protected terms, tropes, and characters are.
That isn't coming out of the void.
The wider community is currently sticking with SDXL and SD1.5. Honestly, if not for the whole Taylor Swift incident the word "safe" wouldn't be used at all. I do have to wonder if the people who are having issues with that aspect of it knew about Photoshop during the early to mid 00s and the plethora of fake websites/images/groups that existed even before Reddit.

EDIT: I should explain. The reasoning, from my understanding, is that the so-called "safety" they put in makes it almost impossible to do normal humans, even clothed.

Yeah, that announcement wasn't great.
But I am pretty sure a community finetune will greatly improve SD3's base model to render, erm, airbags. The community is extremely safety-conscious, after all.
Their coverage is spotty at best, though, and made harder by things that are both protected IPs and public-domain words, phrases, etc. If you put in "Marvel's Thor" it will block you, but if you drop the "Marvel" and keep "Thor" it will give you images of Marvel's Thor, because the training data was dominated by art of that character rather than the generic Norse god. You can't get it to create an image of "The Incredible Hulk" either; it will block you. But if you describe the character, it will give you images of the Incredible Hulk. You can also do really basic workarounds by having ChatGPT describe a character or an artist's style, edit that into something that fits the character limit of the prompt, and have DALL-E spit out whatever you want, as long as it isn't generally out of bounds like gore, dead bodies, porn, etc.
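The spotty blocking described above is what you'd expect if a naive phrase blocklist were run over the prompt text. A minimal Python sketch under that assumption (the phrases are hypothetical examples, and this is not OpenAI's actual filter):

# Toy sketch of a phrase blocklist; hypothetical phrases, not OpenAI's real filter.
BLOCKED_PHRASES = {"marvel's thor", "the incredible hulk"}

def is_blocked(prompt: str) -> bool:
    # Block only if an exact listed phrase appears in the prompt.
    text = prompt.lower()
    return any(phrase in text for phrase in BLOCKED_PHRASES)

print(is_blocked("Marvel's Thor holding Mjolnir"))                   # True: exact phrase match
print(is_blocked("Thor, god of thunder, holding Mjolnir"))           # False: same character slips through
print(is_blocked("a huge green-skinned man in torn purple shorts"))  # False: a plain description bypasses it

A filter like this only catches the literal phrases it was given, which would explain why describing a character or an artist's style sails right past it.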
Photoshopped stuff is generally far easier to spot, because the vast majority of the people who made fakes had almost zero skill with it. I never bothered to develop any skill with it because, as a photographer, I'm largely a documentarian, not an artist. Noise reduction, levels, and saturation are usually the only things I mess with, occasionally some skin smoothing or blemish removal, depending upon the use for the image.
Did you miss the whole "the AI considers the watermark part of the picture" discussion? BTW, I'm still waiting for you to respond in PM with your replacement, if you even have one.

Someone's conveniently forgetting how early "AI"-generated art mysteriously included watermarks from various sites like DeviantArt and Shutterstock. Whoops.
More like they trained the program on whatever they could get their hands on and have since put in filters to prevent it from generating protected content. I know this because a few weeks ago I could get it to generate an image with the words "Marvel's Incredible Hulk", no problem. At some point about a week ago, that was blocked. So too with many others. Though, as I said, their filter is spotty: you can't ask it to generate art similar to Tim Burton specifically, but you can describe Tim Burton's art and get a close enough approximation.

Right, they tried to curate, filter, or block, but quite clearly what it was trained on is what would be expected by the public.