Sarah Silverman leads class-action lawsuit against ChatGPT creator
RareBreed said:

EDIT: I found the quote I was trying to reply to :)

That's at best a gross oversimplification, and at worst flat-out wrong. The truth is, we don't exactly understand how LLMs work, and that makes them all the more frightening. These large language models don't just rearrange words and regurgitate content. Not only are computer scientists not really sure how LLMs are capable of doing what they do, some even question whether LLMs "understand" how they are doing it.

So I am with you in the sense that we need to put a hold on AI, but for a totally different reason that I will explain later. I don't think generative AI built on RNNs (recurrent neural networks) or on transformer models like BERT, LLaMA, or GPT is just a plagiarizer. I do believe these systems "learn". Is it stealing for a human to study the works of the masters when learning how to paint? We humans learn by watching and studying others. Our styles are imprinted by those we have an affinity for. Are we all plagiarizers too?

If the argument is "they shouldn't have taken the data without the creator's consent," that's a bit hairier... but even then, it's not any different from what humans do. Can you stop me from studying Van Gogh or Rembrandt to learn how to paint? Or from listening to Jimi Hendrix to learn how to play guitar? Or from imitating the dance moves of Michael Jackson?

These LLMs and generative AI systems are doing the same thing: learning. What makes them dangerous is that we don't know how they do what they do, what biases they picked up from the data they were trained on, or how realistic their output is, to the point that it can affect society (think deepfake news). Jobs have always been under threat from technology. This is just the first time in history that creatives and knowledge workers, and not just blue-collar workers, have been affected.

About 4 months ago, a letter and petition (https://futureoflife.org/open-letter/pause-giant-ai-experiments/) was put out calling for a moratorium on new LLM training and research. Last I remember, it had over 12k signatories, some of them luminaries in data science, philosophy, and physics (one I recall sticking out was Max Tegmark). If you read it, the concern was that these LLMs are showing a lot of *emergent behavior* that can't really be explained. If any computer scientist tells you "LLMs aren't intelligent," they are full of it. We don't know how *our* intelligence works, so how can they make the preposterous claim that these LLMs haven't achieved some kind of early AGI (artificial general intelligence)?

A hot area of research in machine learning is called explainability (https://neptune.ai/blog/explainability-auditability-ml-definitions-techniques-tools); there's a tiny sketch of the idea at the end of this post. Data scientists are scratching their heads over *how* some of these models work. In many ways, data science is a return to good old-fashioned empirical science: run experiments, observe the results, then try to come up with a hypothesis that explains how what happened, happened. Most science today runs the other way: you have a hypothesis, you come up with an experiment to test it, record the results, and compare them against the hypothesis. Machine learning reverses that: you start with data and try to learn what the "rules" are by testing out various statistical configurations (the models, or the layers in deep learning).

In classic programming: rules + data => answers
In machine learning: data + answers => rules

What machine learning does is figure out "the rules" for how something works. Reducing that to plagiarism or regurgitation misses what it is doing: it's finding patterns and relationships, and yes, the next most likely word (though in a way much, much more complicated than simple Markov chains).
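To make that inversion concrete, here's a minimal sketch: a toy bigram counter, nowhere near a real transformer, with a corpus and names I'm inventing purely for illustration. Nobody hand-writes the rule ""dragon" follows "the""; the transition counts *are* the rules, and they fall out of the data.

```python
# Toy illustration of data + answers => rules: instead of hand-coding
# which word follows which, count transitions in a corpus and let the
# counts become the "rules". A real LLM learns billions of parameters;
# this is the simple Markov-chain baseline mentioned above.
from collections import Counter, defaultdict

corpus = "the dragon hoards gold the dragon guards gold the knight seeks gold".split()

# "Training": derive transition counts (the learned rules) from the data.
transitions = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word][next_word] += 1

def most_likely_next(word: str) -> str:
    """Predict the most probable next word under the learned counts."""
    followers = transitions.get(word)
    if not followers:
        return "<unknown>"
    return followers.most_common(1)[0][0]

print(most_likely_next("the"))  # "dragon" -- seen twice, vs. "knight" once
```

Scaling that idea up, with long contexts and learned representations instead of raw counts, is (very loosely) what the transformer models above are doing.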
Some of the tasks GPT-4 has been given are truly amazing to me, and they lit a fire under my ass: I need to learn how this stuff works, or I'm going to be out of a job in the next 10 years.
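P.S. Since I mentioned explainability above, the simplest flavor of it fits in a few lines. This is a made-up, minimal example (not any particular library's API): nudge one input at a time and watch how much the model's output moves. The inputs that move it most are the ones the learned "rules" actually depend on.

```python
# Minimal from-scratch sketch of one explainability idea (perturbation
# importance): nudge each input feature and measure how much the model's
# output changes. The "model" here is an invented stand-in for a learned
# black box whose internals we pretend we can't read.
def model(features):
    x, y, z = features
    return 3.0 * x + 0.5 * y + 0.0 * z  # hidden from us in practice

baseline = [1.0, 2.0, 3.0]
base_out = model(baseline)

for i, name in enumerate(["x", "y", "z"]):
    perturbed = list(baseline)
    perturbed[i] += 1.0  # nudge one feature, hold the others fixed
    print(f"{name}: output moved by {abs(model(perturbed) - base_out):.1f}")

# x: output moved by 3.0   <- the learned "rules" lean hard on x
# y: output moved by 0.5
# z: output moved by 0.0   <- z is ignored entirely
```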
[QUOTE="RareBreed, post: 9089117, member: 6945590"] EDIT: I found the quote I was trying to reply to :) That's at best, a gross over simplification, and at worse, flat out wrong. Truth is, we don't exactly understand how LLM's work, and that makes them all the more frightening. These Large Language Models don't just rearrange words and regurgitate content. Not only are the computer scientists not even really sure how LLM's are capable of doing what they are doing, some even question if LLM's "understand" how they are doing what they are doing. So I am with you in the sense that we need to put a hold on AI, but for a totally different reason that I will explain later. I don't think generative AI with RNNs (Recurrent Neural Networks) or NLP (Natural Language Processing) through Transformer algorithms like BERT, LLama or GPT are just plagiarizers. I do believe they "learn". Is it stealing for a human to study the works of the masters when learning how to paint? We humans learn by watching and studying others. Our styles are also imprinted upon by those that we have an affinity for. Are we all plagiarizers too? If the argument is, "they shouldn't have taken the data without the creator's consent", that's a bit more hairy...but even then, it's not any different than what humans do. Can you stop me from studying Van Gogh, or Rembrandt to learn how to paint? Or listening to Jimi Hendrix how to play a guitar? Or imitate the dance moves of Michael Jackson? These LLMs and Generative AI are doing the same: learning. What makes them dangerous, is that we don't know how they are doing what they are doing, the biases from the data they were trained on, and how realistic what they produce is, to the point that it can affect society (ie, think deep fake news). Jobs have always been under threat by technology. This is just the first time in history that the creatives and knowledge workers, and not just the blue collar types have been affected. About 4 months ago, a [URL='https://futureoflife.org/open-letter/pause-giant-ai-experiments/']letter and petition[/URL] was put out to have a moratorium on new LLM training and research. Last I remember, it had over 12k signatories, some of them luminaries in data science, philosophy and physics (one I recall sticking out was Max Tegmark). If you read it, the concern was that these LLM's are showing a lot of [I]emergent behavior[/I] that can't really be explained. If any computer scientist tells you "LLM aren't not intelligent", they are full of it. We don't know how [B]our[/B] intelligence works, so how can they make the preposterous claim that these LLM's haven't achieved some kind of early AGI (Artificial General Intelligence)? A hot area of research in Machine Learning is called [URL='https://neptune.ai/blog/explainability-auditability-ml-definitions-techniques-tools']explainability[/URL]. Data scientists are scratching their heads [I]how[/I] some of these models work. In many ways, data science is a return to good old fashioned empirical science. Just run experiments, observe the results, then try to come up with a hypothesis to explain how what happened, happened. Most science today is, you have a hypothesis, then you come up with an experiment to test it, record the results and compare to your hypothesis. This is the other way around. You start with data, and try to learn what the "rules" are by testing out various statistical configurations (the models or layers in deep learning). 
In classic programming: rules + data => answers In machine learning: data + answers => rules What machine learning is doing, is figuring out "the rules" for how something works. To simplify it as plagiarism or regurgitation is not what it is doing. It's figuring patterns and relationships, and yes, what is the next most likely word (though much much more complicated than simple Markov Chains). Some of the tasks that GPT-4 have been given are truly amazing to me, and lit a fire under my ass that I needed to learn how this stuff works or I am going to be out of a job in the next 10 years. [/QUOTE]