Judge decides case based on AI-hallucinated case law
Jfdlsjfd said (post 9834500):

There are a lot of ways to express your idea, and you chose to be insulting. I'll not interact with you further as a result.

For the other readers who might have been confused about my point: infrastructure built to train models, once built, won't disappear, which lessens the (still very high, due to maintenance and energy) cost of trying to create an effective model. I wasn't discounting operational costs, of course. For example, one of the problems datacenter owners face in the US is the power grid, and the necessary investment there is a one-off (for decades). Once Google has built its planned nuclear power plants to power its datacenters, they won't go poof tomorrow even if Google goes bankrupt. Also, deeply depressed compute prices will be reflected in the price of training runs, which are already lower than they were a few years back. DeepSeek R1's training run cost was "only" 5 million, and Google's Gemini 79 million. Mistral, as a company, probably never had the means to spend even a single billion, yet it produced a large number of models. Are they leading the market like billions-spending OpenAI while training models on a mere 24k GPUs when the leaders have much, much more? No, they're trailing slightly behind; they simply take more time. Even if the financial incentive to build many new concurrent datacenters is deeply lessened, that will probably slow the rate of model production, but it won't make new labs unable to enter the field.

And while one may question the financial sense of building further models and bearing those costs, much of the research and many high-quality models are provided by universities (as @Maxperson put it, China will do it) without a clear monetization goal. I'm also sure a few sovereign countries could be interested in an LLM listening to every phone call and email and reporting citizens with non-compliant thoughts, even if there is no money to be made from that. Unfortunately, I am also convinced that those same actors will not be put off by the occasional upright citizen being blamed for hallucinated faults.

Which has no bearing on the cost to *run* the model once it is working. And the article we're speaking of postulated that we're *already there* for the use case he mentioned. So no more investment is needed anyway.