The AI Red Scare is only harming artists and needs to stop.
[QUOTE="Blue, post: 9374331, member: 20564"]
Earlier in this thread we got into a discussion about how we don't know exactly how the brain does it, but we do know how it does not. So being able to explain how the brain does it isn't a requirement for being able to discuss how AI does it in a way the brain does not.

But all of that said, let me take a stab at the [I]process[/I], using human vs. AI art.

A human artist capable of producing a realistic image similar to what AI art does envisions the various objects as they exist in a 3D world and establishes a point of view from which to render them. They translate from 3D to 2D, including what can be seen and what blocks the view, where the light sources are, perspective and foreshortening, etc. They start with the real world and from that work out how it looks in an image. Heck, we've had 3D rendering programs made to emulate that process for a long time, and it's what 3D games do.

AI art is a model generating an image via statistical analysis. It does not involve that process at all, since there was never a 3D model to translate. A human artist could redo the exact same scene but drawn from a few degrees to the left and a foot forward. An AI model can't.

"Writes better."

One common issue with LLMs (Large Language Models, the AI writing systems) is what people are now calling "hallucinations". I'm not fond of that as a descriptor, but it's in common usage. If they have information, they can use it. If they don't have the information, they will often make information up. Not so different from a human, except that they can't tell they made it up. They can sprinkle falsehoods and incorrect information in, and never know.

An example of this was with ChatGPT-3.5: we were playing a new board game, Dice Theme Park, and asked it for strategies. There were whole sections about Mascots and such that just didn't exist in the game, but they were presented with the same confidence as everything else.

A human writer would know when they are making things up. But there is no "they" to know this with LLMs. We anthropomorphize them because it seems like someone is talking to us, and because we as humans anthropomorphize lots of things. Pets. Cars. Computers. What have you.

Instead, the model takes the current and previous prompts and statistically generates words. It's spicy autocorrect. Yes, it's the Porsche of conversation compared to the horse-and-buggy of autocorrect, but being more advanced just means it's better at its job of picking the right words, not that it's actually thinking about the concepts.

Generating output from input that looks human: yes. Generated by the same process: not at all.

Frankly, the anthropomorphism is a big part of the perception issue. Because people treat it like a human, they mistakenly compare it to how a human would learn.
[/QUOTE]
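To make the 3D-to-2D translation described above concrete, here is a minimal pinhole-projection sketch in Python. The scene, camera positions, and focal length are invented for illustration; the point is that the same 3D model can be re-projected from any viewpoint, which is exactly the step a purely statistical image generator never performs.

```python
import numpy as np

def project(points_3d, camera_pos, focal_length=1.0):
    """Project 3D world points onto a 2D image plane (simple pinhole camera).

    points_3d  : (N, 3) array of world-space points
    camera_pos : (3,) camera position; the camera looks down the +z axis here
    """
    # Translate the scene so the camera sits at the origin.
    rel = points_3d - camera_pos
    # Perspective divide: points farther away shrink (foreshortening).
    x = focal_length * rel[:, 0] / rel[:, 2]
    y = focal_length * rel[:, 1] / rel[:, 2]
    return np.stack([x, y], axis=1)

# A hypothetical scene: four corners of a square floating in front of the camera.
scene = np.array([[-1, -1, 5], [1, -1, 5], [-1, 1, 5], [1, 1, 5]], dtype=float)

# Render once, then again from a viewpoint shifted to the side and a step forward.
# The same 3D model is simply re-projected from the new point of view.
print(project(scene, camera_pos=np.array([0.0, 0.0, 0.0])))
print(project(scene, camera_pos=np.array([0.3, 0.0, 1.0])))
```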
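The "spicy autocorrect" point is easier to see with a toy sketch. The word table below is entirely made up, and a real LLM conditions on the whole prompt with a neural network rather than a lookup table, but the generation loop has the same shape: score the next word statistically, sample one, append, repeat.

```python
import random

# Toy next-word model: for each context word, a probability table over
# possible next words. Nothing here "understands" the words themselves.
next_word_probs = {
    "the":     {"dragon": 0.5, "dungeon": 0.3, "dice": 0.2},
    "dragon":  {"breathes": 0.6, "sleeps": 0.4},
    "dungeon": {"contains": 0.7, "collapses": 0.3},
    "dice":    {"roll": 1.0},
}

def generate(prompt_word, steps=3):
    words = [prompt_word]
    for _ in range(steps):
        table = next_word_probs.get(words[-1])
        if table is None:        # no statistics for this context: stop
            break
        choices, weights = zip(*table.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the dragon breathes" -- plausible-sounding output,
                        # produced by sampling from word statistics alone
```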
[QUOTE="Blue, post: 9374331, member: 20564"] Earlier in this thread we got into a discussion of we don't know exactly how the brain does it, but we know how it does not. So being able to explain how the brain does it isn't a requirement to being able to discuss how AI does it in a way the brain does not. But all of that said, let me give a try of the [I]process [/I]using human vs. AI art. A human art capable of producing a realistic image similar to what AI arts does envisions the various objects which exist in a 3D world, establishes a point of view from which to render it. They translate from 3D to 2D, including what can been seen and what blocks view, where light sources are, perspective and foreshortening, etc. They start with the real world and then from that move to how does it look in an image. Heck, we've had had 3D render programs for a long time that are made to emulate that process, and it's what 3D games do. AI art is models generating it via statistical analysis. It does not involve that process at all, since there was never a 3D model to translate. A human artist could redo the exact same scene but drawn a few degrees to the left and a foot forward. An AI model can't. "Writes better." One common issue with LLMs (Large Language Models - AI writing) is what they are now calling "hallucinations". I'm not fond of that as a descriptor but it's in common usage. If they have information, they can use the information. If they don't have the information, they often will make up information. Not so different from human - except that they can't tell they made up the information. They can sprinkle falsehoods and incorrect information in, and don't know. An example of this was with ChatGPT-3.5, we were playing a new boardgame, Dice Theme Park, and asked it for strategies. There were whole sections about Mascots and such that just didn't exist in the game, but were presented with the confidence of everything else. A human writer would know when they are bulling around. But there is no "they" to understand this with LLMs. We anthropomorphize them because it seems like someone talking to us, and because we as humans anthropomorphize lots of things. Pets. Cars. Computers. What have you. Instead it's taking the current and previous prompts and statistically generating words. It's spicy autocorrect. Yes, it's the Porsche of conversation compared to the Horse-and-Buggy of conversation of autocorrect, but being more advanced just means it's better at it's job, that it picks the right words, not that it's actually thinking about the concepts. Generating output from input that looks human - yes. Is generated by the same process - not at all. Frankly, it's the anthropomorphism that's a big part of the perception issue. Because people treat it like a human, they mistakenly compare it to how a human would learn. [/QUOTE]