What's the difference between AI and a random generator?
Epic Meepo said (post #9273860):

When talking about the internal logic of an AI, I wouldn't say "the code is only half the equation." I would say the code *is* the internal logic of the AI. Full stop. The fact that there are millions of weights being automatically adjusted doesn't make the logic any more or less complex than the code which sets the values of those weights. The weights are just numerical variables, and their values are not relevant to any of the internal logic involved.

Incidentally, you don't need to debug a program to determine how it arrives at a given end state. You just need it to create a log of the individual steps it's taking. Creating that kind of log can be useful during the debugging process, but you don't have to be engaged in debugging to create one. The same process can be used for a perfectly-understood program with no bugs whatsoever.
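To make that concrete, here is a minimal sketch of such a step log in plain Python; the toy computation and all the names in it are invented for illustration and aren't taken from any real AI system:

```python
import logging

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("step-trace")

def weighted_sum(inputs, weights):
    """Toy 'model': a dot product that logs every step it takes."""
    total = 0.0
    for i, (x, w) in enumerate(zip(inputs, weights)):
        total += x * w
        log.info("step %d: input=%s  weight=%s  running total=%s", i, x, w, total)
    log.info("end state: %s", total)
    return total

# Reading the log back shows exactly how the end state was reached,
# whether or not anyone is debugging anything.
weighted_sum([0.2, 0.5, 0.9], [1.5, -2.0, 0.25])
```

The trace, not a debugger, is what tells you how the program arrived at its end state.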
You're asking two separate questions, so I'll address them in order:

Q. How easy would it be to predict that an AI would create the indicated picture?

A. It would be time-consuming for a human to predict the indicated picture, the same way it would be time-consuming for a human to predict the position of the Earth at a given time in a computer simulation of the solar system. That's not due to a lack of understanding of the system, nor is it necessarily because the system is hard to understand. That's just because computers process large numbers of explicit variables faster than human brains.

Predicting what output an AI gives for a given input is like trying to watch a bullet as it flies through the air. To observe a bullet in flight, you have to film the bullet, then slow the film down to a speed the human brain can process and use that to draw a map. To predict what output an AI gives for a given input, you have to log the steps the AI is taking, then read those steps back at a pace a human brain can process.

In the case of a sophisticated generative AI, the actual logic involved in a single use case isn't that complicated (for highly-trained computer scientists specialized in AI). The real challenge is simply tracking all the variables in a human-readable way. That's because the prompt you entered isn't the input the AI used to generate that picture. The input used is the prompt you entered, plus the entire training set used, plus every prior recorded interaction with that training set, plus (if the AI incorporates one or more random generators) whatever seed or seeds were used in the AI's RNGs.
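A minimal sketch of the seed point, in plain Python with an invented toy_generator (no real image model is this simple): once the code, the numbers it uses, and the RNG seed are all pinned, the output is fully determined.

```python
import random

def toy_generator(prompt, seed):
    """Stand-in for a generative model: output depends on the prompt AND the seed."""
    rng = random.Random(seed)  # pin the RNG seed
    return [round(rng.gauss(0, 1) + len(word), 4) for word in prompt.split()]

a = toy_generator("penguin on an ice floe", seed=42)
b = toy_generator("penguin on an ice floe", seed=42)
c = toy_generator("penguin on an ice floe", seed=7)

print(a == b)  # True:  same prompt, same seed -> same output every time
print(a == c)  # False: same prompt, different seed -> different output
```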
[QUOTE="Epic Meepo, post: 9273860, member: 57073"] When talking about the internal logic of an AI, I wouldn't say "the code is only half the equation." I would say the code [I]is[/I] the internal logic of the AI. Full stop. The fact that there are millions of weights being automatically adjusted doesn't make the logic any more or less complex than the code which sets the values of those weights. The weights are just numerical variables, and their values are not relevant to any of the internal logic involved. Incidentally, you don't need to debug a program to determine how it arrives at a given end state. You just need it to create a log of the individual steps it's taking. Creating that kind of log can be useful during the debugging process, but you don't have to be engaged in debugging to create that kind of log. The same process can be used for a perfectly-understood program with no bugs whatsoever. You're asking two separate questions, so I'll address them in order: Q. How easy would it be to predict that an AI would create the indicated picture? A. It would be time-consuming for a human to predict the indicated picture, the same way it would be time-consuming for a human to predict the position of the Earth at a given time in a computer simulation of the solar system. That's not due to a lack of understanding of the system, nor is it necessarily because the system is hard to understand. That's just because computers process large numbers of explicit variables faster than human brains. Predicting what output an AI gives for a given input is like trying to watch a bullet as it flies through the air. To observe a bullet in flight, you have to film the bullet, then slow the film down to a speed the human brain can process and use that to draw a map. To predict what output an AI gives for a given input, you have to log the steps the AI is taking, then read those steps back at a pace a human brain can process. In the case of a sophisticated generative AI, the actual logic involved in a single use case isn't that complicated (for highly-trained computer scientists specialized in AI). The real challenge is simply tracking all the variables in a human-readable way. That's because the prompt you entered isn't the input the AI used to generate that picture. The input used is the prompt you entered, plus the entire training set used, plus every prior recorded interaction with that training set, plus (if the AI incorporates one or more random generators) whatever seed or seeds were used in the AI's RNGs. Q. How hard would it be to fix the highlighted parts of the picture? A. That depends upon the functionality programmed into the AI which generated the picture. In theory, a sufficiently-advanced AI will incorporate tools users can use to iterate their images with specified changes, allowing them to adjust for gaps or biases in the dataset. Failing that, one can improve output from an AI which has a proven ability to interpret and respond to prompts by simply using better training data. In the case of the picture under discussion, there appears to be a gap in the AI's training set. The AI doesn't have enough metadata specific to penguins but not seals. This could be fixed on the back end by improving the training set to include more penguin and seal metadata, or it could be done by allowing users to select and combine elements from images until a desired result is achieved. 
If your claim is that it's simply impractical to determine every AI-generated output in advance, then I would agree with you. The whole reason AI gets programmed in the first place is so users can rely on its algorithms to carry out the millions of steps they would otherwise have to perform themselves to produce complex output they aren't trained to create manually.