The AI Red Scare is only harming artists and needs to stop.
Prime_Evil said (post 9375626):

No... it doesn't make a copy. That statement is factually incorrect. Models catalogue information (somewhat like metadata) about the material in the training set: they learn the characteristics or patterns present in it. In supervised learning, labels are added to help the model classify the patterns it detects; in unsupervised learning, the model is exposed to unstructured data with no external context. Most training pipelines also include evaluation mechanisms to check how well the model detects those patterns. The system examines the training set for patterns representative of each attribute the ML model must predict. With art, this might include things like color palette, medium, artist name, and so on. But there is no need to copy the original source material into the model itself. There is often a preprocessing step where the raw data undergoes normalization or tokenization (to reduce duplication, among other things). The statistical inferences about patterns or characteristics are what is used to train the ML model to make predictions. So when you tell a generative AI to give you a picture of a kitten in the style of Van Gogh, it is essentially making a speculative prediction about what such a thing might look like, based on what it "knows" about the patterns associated with each term in your request. It maps the input in your request against a statistical model to generate diverse outputs.
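To make the "patterns, not copies" point concrete, here is a deliberately tiny Python sketch. It is not how any production image generator works: the two training sentences are made up for illustration, and a character-level bigram model is vastly simpler than a diffusion model. The structural point is the same, though. After training, the program holds only aggregate transition counts, and it generates new strings by sampling from those statistics rather than by retrieving a stored copy of its inputs.

[CODE]
import random
from collections import defaultdict

# Toy "model": character-level bigram statistics learned from two
# made-up training sentences. The model keeps only transition counts,
# never the sentences themselves.
training_sentences = [
    "a kitten sits in a sunflower field",
    "swirling stars over a quiet village",
]

counts = defaultdict(lambda: defaultdict(int))
for sentence in training_sentences:
    for current_char, next_char in zip(sentence, sentence[1:]):
        counts[current_char][next_char] += 1   # aggregate statistics only

def generate(start="a", length=40):
    """Sample new text from the learned statistics."""
    out = [start]
    for _ in range(length):
        followers = counts.get(out[-1])
        if not followers:
            break
        chars, weights = zip(*followers.items())
        out.append(random.choices(chars, weights=weights)[0])
    return "".join(out)

print(generate())   # prints novel, pattern-driven text, not a stored sentence
[/CODE]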
[QUOTE="Prime_Evil, post: 9375626, member: 11984"] No...it doesn't make a copy. This statement is factually incorrect. Models catalogue information (kinda like metadata) about the material in the training set. It learns the characteristics or patterns present in the training set. In supervised learning, labels are added to help the model classify the patterns it detects. In unsupervised learning, the model is exposed to unstructured data with no external context. Most models include various evaluation models to help it detect patterns. The system examines the training set for patterns representative of each attribute the ML model must predict. With art, this might include things like color palette, medium, artist name, etc. But there is no need to copy the original source material into the data set. There is often a preprocessing step where the raw inferences undergo further normalization or tokenization (to reduce duplication, etc). The statistical inferences about patterns or characteristics are used to train ML models to make predictions. So when you tell a generative AI to give you a picture of a kitten in the style of Van Gough, it is essentially making a speculative prediction about what such a thing might look like based on what it "knows" about the patterns associated with each term in your request. It is mapping the input data in your request against a statistical model to generate diverse outputs. It should be obvious that the source material in the training set is not "copied into" the model because the size of the size difference between the source material and the model. At no point is a copy of the source data ingested into the "final AI". Such raw data is useless to it. There are good reasons to raise concerns about the rise of generative AI and its impact on the creative arts, but ignorance about how the technology works isn't helpful. It sheds more heat than light. I would argue the issue is not that the model contains copies of the original material, but that the statistical predictions made by the ML model are so damned accurate these days they can produce works remarkably similar to material in the training set. Is this copying in the traditional sense? Probably not. Is it plagiarism? Maybe and maybe not. I suppose it depends on whether you believe a map is the same as the territory it represents. If making a map is the same as "stealing" or "copying" the terrain it represents, then sure. The model is effectively a map of the patterns detected in the training set. Does this mean that generative art is entirely OK? No. Its existence raises some real moral and practical issues. But these are NEW issues. We've never had a technology like this before in human history. So we are figuring this out for the first time. I don't want to ignore the genuine concerns of artists. Some of those concerns are real and valid. But misrepresenting how the technology works isn't helping them in the long-term - it will be used by the tech bros to discredit their case. [/QUOTE]