Menu
News
All News
Dungeons & Dragons
Level Up: Advanced 5th Edition
Pathfinder
Starfinder
Warhammer
2d20 System
Year Zero Engine
Industry News
Reviews
Dragon Reflections
White Dwarf Reflections
Columns
Weekly Digests
Weekly News Digest
Freebies, Sales & Bundles
RPG Print News
RPG Crowdfunding News
Game Content
ENterplanetary DimENsions
Mythological Figures
Opinion
Worlds of Design
Peregrine's Nest
RPG Evolution
Other Columns
From the Freelancing Frontline
Monster ENcyclopedia
WotC/TSR Alumni Look Back
4 Hours w/RSD (Ryan Dancey)
The Road to 3E (Jonathan Tweet)
Greenwood's Realms (Ed Greenwood)
Drawmij's TSR (Jim Ward)
Community
Forums & Topics
Forum List
Latest Posts
Forum list
*Dungeons & Dragons
Level Up: Advanced 5th Edition
D&D Older Editions, OSR, & D&D Variants
*TTRPGs General
*Pathfinder & Starfinder
EN Publishing
*Geek Talk & Media
Search forums
Chat/Discord
Resources
Wiki
Pages
Latest activity
Media
New media
New comments
Search media
Downloads
Latest reviews
Search resources
EN Publishing
Store
EN5ider
Adventures in ZEITGEIST
Awfully Cheerful Engine
What's OLD is NEW
Judge Dredd & The Worlds Of 2000AD
War of the Burning Sky
Level Up: Advanced 5E
Events & Releases
Upcoming Events
Private Events
Featured Events
Socials!
EN Publishing
Twitter
BlueSky
Facebook
Instagram
EN World
BlueSky
YouTube
Facebook
Twitter
Twitch
Podcast
Features
Million Dollar TTRPG Crowdfunders
Most Anticipated Tabletop RPGs Of The Year
Tabletop RPG Podcast Hall of Fame
Eric Noah's Unofficial D&D 3rd Edition News
Top 5 RPGs Compiled Charts 2004-Present
Adventure Game Industry Market Research Summary (RPGs) V1.0
Ryan Dancey: Acquiring TSR
Q&A With Gary Gygax
D&D Rules FAQs
TSR, WotC, & Paizo: A Comparative History
D&D Pronunciation Guide
D&D in the Mainstream
D&D & RPG History
About Morrus
Log in
Register
What's new
Search
Search
Search titles only
By:
Forums & Topics
Forum List
Latest Posts
Forum list
*Dungeons & Dragons
Level Up: Advanced 5th Edition
D&D Older Editions, OSR, & D&D Variants
*TTRPGs General
*Pathfinder & Starfinder
EN Publishing
*Geek Talk & Media
Search forums
Chat/Discord
Menu
Log in
Register
Install the app
Install
Upgrade your account to a Community Supporter account and remove most of the site ads.
Community
General Tabletop Discussion
*Geek Talk & Media
Divinity video game from Larian - may use AI trained on own assets and to help development
JavaScript is disabled. For a better experience, please enable JavaScript in your browser before proceeding.
You are using an out of date browser. It may not display this or other websites correctly.
You should upgrade or use an
alternative browser
.
Reply to thread
Message
<blockquote data-quote="Blue" data-source="post: 9837458" data-attributes="member: 20564"><p>Sure, I can agree to this. But let me flip this around. I've used plenty of open-source tools and modules in IT. The creator of the tool has a plan on why they are making it. That does not mean any particular use-case is a good fit. I've seen heavyweight javascript modules embedded into HTML pages for some fairly simple tasks that could have been done with a bit of javascript or likely a lighter module that would load faster and be easier to maintain.</p><p></p><p>Gen AI is a tool. The creators have goals they are persuing in creating it. For those who use it, they also need a plan about why they are using it, as you said. A table saw does not give me any efficiency if I'm making clay figurines. That doesn't mean a table saw is worthless, it means that it has things it does well, but what I do isn't one of them.</p><p></p><p></p><p>Can you define "overall"?</p><p></p><p>To give a real life example, a friend who's a lawyer uses it extensively. He needs to check every reference, but he's still completing a significant chunk of his non-courtroom work in about a third of the time. When he became proficient in using it there was a real concern about billing hours, because they were getting cut down a lot. Luckily he was able to take on more clients and just is more productive every week. Note that this isn't actually increasing his billable hours, it's decreasing the cost to all of his clients.</p><p></p><p>I have multiple friends who use it professionally by choice, I can give plenty of anecdotal examples. There's not a lack of specific places I can show where it raises efficiency.</p><p></p><p>So what do you mean by "overall", and why do you consider that a new class of tool must fit that definition in order to be beneficial?</p><p></p><p></p><p>Well, no. I've had discussions about this with my lawyer friend. Just because you don't see this information doesn't mean people are "studiously avoiding" the issue.</p><p></p><p>And I wouldn't be surprised if those who studiously avoiding it are those who, like the table saw from before, didn't have a plan on both how to use it and what specifically it does that they thought would be an improvement, so are loathe to give those details.</p><p></p><p></p><p>Okay, I need to get to another example. Because it's very easy to use the tool poorly and get the results you are talking about. A different friend uses it in coding. I think Claude, but he's changed several times as whicheve ones are best at coding has updated. I'd need to get permission to post publicly some of the things he's shared with me, but I can give a general gist.</p><p></p><p>The big difference is between using it in an amateur fashion and getting results like you say, and using in in a professional way, understanding and leveraging the strengths and weaknesses of the tool.</p><p></p><p>He uses it like pair programming, a well know and widely used real world technique.</p><p></p><p>Nothing is a single prompt. Everything is a lengthy back and forth, with code snippets tested. He points it at documentation and wikis, he discusses priorities and goals. He has it evaluate various modules and both what they would add and if they are light enough for the benefit they would bring. He uses it in a way that he needs to be skilled in the topic to do.</p><p></p><p>From the beginning he asks for unit tests, and also based on the back-and-forth asks about what corner cases aren't getting tested to add those as well.</p><p></p><p>In conversations he prototypes things, explores avenues to do them including backing out of ones that aren't the best, talking about the technologies used, makes sure maintainability is important, and otherwise brings skilled, professional-level planning and knowledge into it.</p><p></p><p>At the end he has it generate a single in-depth prompt that would do everything the back-and-forth revealed. That he posts into a clean (no tokens from that conversation) window of the LLM as well as into a different LLM (using free cycles) to see what they generate. There's times he's been surprised because it's come up with different approaches and he's gone and evaluated those.</p><p></p><p>He can prototype things in hours instead of days or weeks, determine if a path is worthwhile for the larger project as a whole. He can not only succeed faster in the steps he uses it for, he can determine dead ends and prune them faster.</p><p></p><p></p><p>Oh, absolutely agree with you here. It's criminal (well, unfortunately not really) how many corporations and individuals think it's a panacea. It's real work to use it correctly, and they don't train their staff how to do that, nor do they pick if for the tasks it's stronger in but try to apply it to everything.</p><p></p><p>Though to add a touch, "expected value" is often set by C-level people listening to marketing from those selling it. I've implemented IT solutions that have nothing to do with AI where the "expected value" will be so much lower then the C-level expects because they aren't the man-in-the-trenches who actually does the work and demanded it from on-high instead of consulting with all of us experts that they were already paying. "We'll put it all in the cloud!" was one of those. "Expected value" isn't a strong metric to use, I might even go so far to say that the majority of <u>all</u> projects from corporations large enough to have enterprise-level projects don't end up delivering the expected value. Still more than 5%, not saying that the tool is being used poorly. Just level-setting expectations.</p></blockquote><p></p>
[QUOTE="Blue, post: 9837458, member: 20564"] Sure, I can agree to this. But let me flip this around. I've used plenty of open-source tools and modules in IT. The creator of the tool has a plan on why they are making it. That does not mean any particular use-case is a good fit. I've seen heavyweight javascript modules embedded into HTML pages for some fairly simple tasks that could have been done with a bit of javascript or likely a lighter module that would load faster and be easier to maintain. Gen AI is a tool. The creators have goals they are persuing in creating it. For those who use it, they also need a plan about why they are using it, as you said. A table saw does not give me any efficiency if I'm making clay figurines. That doesn't mean a table saw is worthless, it means that it has things it does well, but what I do isn't one of them. Can you define "overall"? To give a real life example, a friend who's a lawyer uses it extensively. He needs to check every reference, but he's still completing a significant chunk of his non-courtroom work in about a third of the time. When he became proficient in using it there was a real concern about billing hours, because they were getting cut down a lot. Luckily he was able to take on more clients and just is more productive every week. Note that this isn't actually increasing his billable hours, it's decreasing the cost to all of his clients. I have multiple friends who use it professionally by choice, I can give plenty of anecdotal examples. There's not a lack of specific places I can show where it raises efficiency. So what do you mean by "overall", and why do you consider that a new class of tool must fit that definition in order to be beneficial? Well, no. I've had discussions about this with my lawyer friend. Just because you don't see this information doesn't mean people are "studiously avoiding" the issue. And I wouldn't be surprised if those who studiously avoiding it are those who, like the table saw from before, didn't have a plan on both how to use it and what specifically it does that they thought would be an improvement, so are loathe to give those details. Okay, I need to get to another example. Because it's very easy to use the tool poorly and get the results you are talking about. A different friend uses it in coding. I think Claude, but he's changed several times as whicheve ones are best at coding has updated. I'd need to get permission to post publicly some of the things he's shared with me, but I can give a general gist. The big difference is between using it in an amateur fashion and getting results like you say, and using in in a professional way, understanding and leveraging the strengths and weaknesses of the tool. He uses it like pair programming, a well know and widely used real world technique. Nothing is a single prompt. Everything is a lengthy back and forth, with code snippets tested. He points it at documentation and wikis, he discusses priorities and goals. He has it evaluate various modules and both what they would add and if they are light enough for the benefit they would bring. He uses it in a way that he needs to be skilled in the topic to do. From the beginning he asks for unit tests, and also based on the back-and-forth asks about what corner cases aren't getting tested to add those as well. In conversations he prototypes things, explores avenues to do them including backing out of ones that aren't the best, talking about the technologies used, makes sure maintainability is important, and otherwise brings skilled, professional-level planning and knowledge into it. At the end he has it generate a single in-depth prompt that would do everything the back-and-forth revealed. That he posts into a clean (no tokens from that conversation) window of the LLM as well as into a different LLM (using free cycles) to see what they generate. There's times he's been surprised because it's come up with different approaches and he's gone and evaluated those. He can prototype things in hours instead of days or weeks, determine if a path is worthwhile for the larger project as a whole. He can not only succeed faster in the steps he uses it for, he can determine dead ends and prune them faster. Oh, absolutely agree with you here. It's criminal (well, unfortunately not really) how many corporations and individuals think it's a panacea. It's real work to use it correctly, and they don't train their staff how to do that, nor do they pick if for the tasks it's stronger in but try to apply it to everything. Though to add a touch, "expected value" is often set by C-level people listening to marketing from those selling it. I've implemented IT solutions that have nothing to do with AI where the "expected value" will be so much lower then the C-level expects because they aren't the man-in-the-trenches who actually does the work and demanded it from on-high instead of consulting with all of us experts that they were already paying. "We'll put it all in the cloud!" was one of those. "Expected value" isn't a strong metric to use, I might even go so far to say that the majority of [U]all[/U] projects from corporations large enough to have enterprise-level projects don't end up delivering the expected value. Still more than 5%, not saying that the tool is being used poorly. Just level-setting expectations. [/QUOTE]
Insert quotes…
Verification
Post reply
Community
General Tabletop Discussion
*Geek Talk & Media
Divinity video game from Larian - may use AI trained on own assets and to help development
Top