It is not 100% your vision. It's as close as AI can get using information it already has in its database. If it doesn't exist in the database, you will never get "your" vision.
Getting 100% your vision as an illustrator is also very hard, and then there's the customer whose vision you're trying to translate into a working illustration. With generative AI, you can add things to that 'database' without retraining the AI; depending on the model, you can add in actual art, styles, etc. You can add what's missing in the 'database'...
The point of the position against AI is that there are more things in the world than venal and hollow self-interest
But funnily enough, many creators who see their livelihoods threatened speak out against it; it's no wonder that people see it as venal and hollow self-interest...
I would hesitate to call it a skill
It's a skill, just a skill you don't value. In the same way, getting higher-quality, more relevant results with a search engine is a skill, one most people don't possess. I work in IT, and finding stuff online quickly that is highly relevant and accurate is very valuable when people are paying you a lot of money per hour. Most people wouldn't value that skill highly either; there are many such 'niche' skills that are valuable in the niche and not so much outside of it.
You also don't need pen-and-paper RPGs or a computer... and many people don't. The point is that some people want it...
I'll post a reply on a forum by writing it myself, because I want to. But I don't want to write room descriptions for hundreds of Undermountain rooms, nor would I want to draw a similar number of illustrations. It's not just the work involved, but the time as well. I want certain things on the walls of my living room, and I won't be using generative AI for that (except maybe for upscaling). I'm either using large prints of iconic illustrations or making/reproducing them myself.
As for using LLMs to post topics or replies, this happens on Reddit quite a bit. But it generally gets downvoted so hard and so quickly by the community that it is not much of an issue in the subreddits I use. This forum doesn't do community moderation; that's not just 'verboten', even using the Like icons in a sarcastic way is not allowed, and all reaction moderation is done by moderators. And even if LLM replies are not allowed, people who want to use them will use them (and get banned), but imagine the strain on moderators. Imagine folks getting banned after being accused of using LLMs who aren't actually using LLMs, but are so exposed to that style of writing that it has crept into their own.
Using detection tools might work, but teachers using 8+ tools to detect LLM usage get outsmarted by students who also use multiple tool combinations to defeat those detectors... Personally, I think mods have better things to do than hunt for LLM replies...