The AI Red Scare is only harming artists and needs to stop.
<blockquote data-quote="Clint_L" data-source="post: 9374616" data-attributes="member: 7035894"><p>A human brain does not perceive the real world at all, and never can. It only ever has access to electro-chemical signals (data), which it then assembles into an interface that allows us to successfully survive and reproduce. Eons of evolution have created this interface, the purpose of which is not to reveal the "real world", whatever that is, but enhance reproduction. Whatever reality is, your sensory organs only sample the tiniest slice of it in order to create the human umveldt, which is of course distinct from that of other species.</p><p></p><p>When a human brain is creating an image we don't know exactly what is going on, but we do know that we are not perceiving reality but a presentation of it driven by internal algorithms. We also know that pattern recognition and prediction are integral to the process, which is why, for example, you can never directly perceive your blind spot. Your brain covers it up with a statistical prediction of what "should" occupy it.</p><p></p><p>[MEDIA=youtube]aB_oEknhlW8[/MEDIA]</p><p></p><p>In other words...there's a lot of statistical analysis going on. I don't understand your final point; AI modelling routinely envisages the same scene from different angles and perspectives with an accuracy that crushes anything a human can do. If you mean that this would be a challenge for some current generative AI models, then that might be so; I don't know the current research on that particular aspect as I am more interested in generative AI that works with language.</p><p></p><p>Yes, they are not conscious and have very limited memory (though research is showing that LLMs are finding workarounds to create more de facto memory than they were designed with, which is <em>fascinating</em>).</p><p></p><p>You are assuming a lot, here. For one thing, humans generally don't know when we are BSing. We only know when we are <em>intentionally</em> BSing. In fact, we are BSing (or "hallucinating," in LLM parlance) <em>all the time</em>. Most of what you remember? It never happened, certainly not exactly as you remember it. All of what you perceive? It's a statistical model driven by the imperatives of evolution, not reality.</p><p></p><p>The big difference is that we have evolved a sense of self, an ongoing story of our own consciousness. No one understands precisely why this happened or how it works, but there is tons of research showing that this is an emergent property of human brains and not some sort of magical event (I mean, we know it evolved so presumably it offers significant reproductive advantages, but thus far we can only speculate). LLMs don't have this. As it turns out, you don't need it to be very good at a lot of writing and artistic endeavours that until scant years ago we thought were exclusively human.</p><p></p><p>I'll be honest: whenever someone uses that analogy for LLMs I am tempted to just politely ignore anything else they write. Sure, it's "spicy" autocorrect if you are using the word "spicy" to cover a LOT of heavy lifting. You may as well call human language production spicy autocorrect. Most of what you do in conversation is taking current and previous prompts and statistically generating words. That's most of what we are doing in this interaction.</p><p></p><p>See, this is the issue that keeps coming up. Consciousness. But we don't know exactly what consciousness is or how it connects to how humans produce language, art, etc. 
In other words... there's a lot of statistical analysis going on. I don't understand your final point; AI modelling routinely envisages the same scene from different angles and perspectives with an accuracy that crushes anything a human can do. If you mean that this would be a challenge for some current generative AI models, then that might be so; I don't know the current research on that particular aspect, as I am more interested in generative AI that works with language.

Yes, they are not conscious and have very limited memory (though research is showing that LLMs are finding workarounds to create more de facto memory than they were designed with, which is [I]fascinating[/I]).

You are assuming a lot here. For one thing, humans generally don't know when we are BSing. We only know when we are [I]intentionally[/I] BSing. In fact, we are BSing (or "hallucinating," in LLM parlance) [I]all the time[/I]. Most of what you remember? It never happened, certainly not exactly as you remember it. All of what you perceive? It's a statistical model driven by the imperatives of evolution, not reality.

The big difference is that we have evolved a sense of self, an ongoing story of our own consciousness. No one understands precisely why this happened or how it works, but there is tons of research showing that this is an emergent property of human brains and not some sort of magical event (I mean, we know it evolved, so presumably it offers significant reproductive advantages, but thus far we can only speculate). LLMs don't have this. As it turns out, you don't need it to be very good at a lot of writing and artistic endeavours that until scant years ago we thought were exclusively human.

I'll be honest: whenever someone uses that analogy for LLMs I am tempted to just politely ignore anything else they write. Sure, it's "spicy" autocorrect if you are using the word "spicy" to cover a LOT of heavy lifting. You may as well call human language production spicy autocorrect. Most of what you do in conversation is taking current and previous prompts and statistically generating words. That's most of what we are doing in this interaction.
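If "statistically generating words" sounds mysterious, here is the idea at its absolute smallest: a toy bigram sampler that picks each next word in proportion to how often it followed the previous one. An actual LLM conditions on enormous context with learned representations rather than raw counts, but the generation loop has the same shape. The corpus and every name below are made up purely for illustration.

[CODE=python]
import random
from collections import Counter, defaultdict

# Toy "autocomplete": count which word follows which, then generate
# each next word by sampling from that conditional distribution.
corpus = ("the brain builds a model of the world "
          "the model predicts the next word "
          "the next word follows the model").split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word(prev):
    # Sample the next word in proportion to how often it followed `prev`.
    words, weights = zip(*counts[prev].items())
    return random.choices(words, weights=weights)[0]

word = "the"
generated = [word]
for _ in range(8):
    word = next_word(word)
    generated.append(word)
print(" ".join(generated))
[/CODE]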
[QUOTE="Clint_L, post: 9374616, member: 7035894"] A human brain does not perceive the real world at all, and never can. It only ever has access to electro-chemical signals (data), which it then assembles into an interface that allows us to successfully survive and reproduce. Eons of evolution have created this interface, the purpose of which is not to reveal the "real world", whatever that is, but enhance reproduction. Whatever reality is, your sensory organs only sample the tiniest slice of it in order to create the human umveldt, which is of course distinct from that of other species. When a human brain is creating an image we don't know exactly what is going on, but we do know that we are not perceiving reality but a presentation of it driven by internal algorithms. We also know that pattern recognition and prediction are integral to the process, which is why, for example, you can never directly perceive your blind spot. Your brain covers it up with a statistical prediction of what "should" occupy it. [MEDIA=youtube]aB_oEknhlW8[/MEDIA] In other words...there's a lot of statistical analysis going on. I don't understand your final point; AI modelling routinely envisages the same scene from different angles and perspectives with an accuracy that crushes anything a human can do. If you mean that this would be a challenge for some current generative AI models, then that might be so; I don't know the current research on that particular aspect as I am more interested in generative AI that works with language. Yes, they are not conscious and have very limited memory (though research is showing that LLMs are finding workarounds to create more de facto memory than they were designed with, which is [I]fascinating[/I]). You are assuming a lot, here. For one thing, humans generally don't know when we are BSing. We only know when we are [I]intentionally[/I] BSing. In fact, we are BSing (or "hallucinating," in LLM parlance) [I]all the time[/I]. Most of what you remember? It never happened, certainly not exactly as you remember it. All of what you perceive? It's a statistical model driven by the imperatives of evolution, not reality. The big difference is that we have evolved a sense of self, an ongoing story of our own consciousness. No one understands precisely why this happened or how it works, but there is tons of research showing that this is an emergent property of human brains and not some sort of magical event (I mean, we know it evolved so presumably it offers significant reproductive advantages, but thus far we can only speculate). LLMs don't have this. As it turns out, you don't need it to be very good at a lot of writing and artistic endeavours that until scant years ago we thought were exclusively human. I'll be honest: whenever someone uses that analogy for LLMs I am tempted to just politely ignore anything else they write. Sure, it's "spicy" autocorrect if you are using the word "spicy" to cover a LOT of heavy lifting. You may as well call human language production spicy autocorrect. Most of what you do in conversation is taking current and previous prompts and statistically generating words. That's most of what we are doing in this interaction. See, this is the issue that keeps coming up. Consciousness. But we don't know exactly what consciousness is or how it connects to how humans produce language, art, etc. As it turns out, you don't need consciousness to produce good, original writing and art. 
I find that frankly mind-blowing and difficult to accept, but the evidence is right in front of me. I'm looking through the telescope and seeing the moons of Jupiter orbiting. I can't deny it. The former paradigm ain't working anymore. You can make art without consciousness. People keep asserting this. But [I]we don't know[/I] the processes that human brains are using. There are obviously some differences in components and approaches, but at a fundamental level there seem to be large similarities as well. And the output is undeniably similar, and not on a superficial level. There is also the question of whether the process really matters. The output is the thing that is affecting careers and livelihoods. Right now, a lot of the discussion is concerned with process because that's what the law can handle, but at an output level, the battle is already over. The toothpaste is not going back in the tube. Frankly, anthropomorphism is a red herring that is typically used to write off different opinions as ignorant. I am looking at outputs, and at ongoing research into the astonishing and often unpredicted capacities of generative AI. I am interested at a personal level but more so at a professional level. There are vast implications for better understanding how humans learn, and what direction education needs to take in the dawning era of generative AI. Edit: for example, here is one question that we are currently wrestling with: why should we continue to teach students how to write essays when LLMs can do it better and much more efficiently? I think there are good reasons for teaching students the fundamental principles of essay writing, as they have to do with persuasive argumentation and can be applicable to a large number of real world endeavours. I also think understanding these structures is useful for developing human cognition. But should we be spending so much time on having the students actually craft essays? Or should we be moving on to having the students guide LLMs through the grunt work, much as math teachers teach students the basics but then allow them to use calculators when it is time for the heavy computation? [/QUOTE]