Community > General Tabletop Discussion > *Geek Talk & Media > Graduate School
Eolin (post 2095076) wrote:

Whatever we call it today. :)

Only because I am ignoring probability and simplifying to a point where I don't want my professors to see it. But yeah, basically you determine which option is most likely to yield the highest-utility outcomes. How you get there can shift.

If they knew what they wanted, why would they be going through a decision procedure?

But seriously, yeah, that'd be an error in assigning utility. If you already know that you are going to choose one outcome over the others, then there is little need to go through the formalized decision procedure. Instead, you could just give Mexican food an arbitrarily high utility ranking on "taste" or some other criterion such that it will necessarily win.

In other words, because you have already come to a decision, there is no reason for you to use a decision procedure.

They represent utility, which is probably defined in terms of human desire-satisfaction or human happiness or something else that has an intuitive definition. One problem here is that once a term is used in a formalized definition, it is difficult to define without simply pointing to the formalization, which causes some obvious problems -- such as not always knowing what we're talking about. Decision theory doesn't define happiness for us; that's left up to the individuals.

That's actually a very good point. This is where Bayesian belief conditionalization comes into play -- which is a fancy way of saying that we should always be able to modify our beliefs (and thus our actions) when we get new information. If we set this up as a real Bayesian learning system (I know, I'm throwing that word around without defining it -- go look up Bayes' theorem), then all new data would change the probabilities of our various hypotheses. And that, in turn, would change which option we decide has the most potential for good.

It helps us make decisions. And no, it doesn't yet do it well. I'm working on that.

One problem decision theory is working on -- one that a former professor of mine is working on -- is that we are inherently pretty stupid creatures. What I think he is developing is a methodology for making decisions that we can use in everyday life. Truth be told, I wouldn't be altogether surprised if it wound up looking a lot like a virtue-based ethical system in which you are supposed to act in a certain sort of way in order to maximize good so far as you can understand it.

Basically, I think we're going to come full circle in utilitarianism and wind up back with a well-defined and worked-out virtue ethics that looks a lot like that of Aristotle.

But that last bit is just speculation. For now, it only makes sense to judge which decision to make based upon how much desire satisfaction it can cause. If we're not basing our decisions on human happiness, then I don't know what we're basing them on. And that's all that decision theory lets us do -- it's a methodology for coming to decisions.
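The "pick the option most likely to yield the highest-utility outcomes" step the post describes is standard expected-utility maximization, which is easy to sketch. A minimal Python illustration, where the restaurant options, probabilities, and utilities are all invented for the example:

```python
# Expected-utility choice: pick the option whose probability-weighted
# utilities sum highest. All numbers below are invented for illustration.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one option."""
    return sum(p * u for p, u in outcomes)

def best_option(options):
    """options: dict mapping option name -> list of (probability, utility)."""
    return max(options, key=lambda name: expected_utility(options[name]))

options = {
    # Mexican food: probably great, small chance of a bad night.
    "mexican": [(0.8, 9), (0.2, 2)],
    # Sandwiches: reliably okay.
    "sandwich": [(1.0, 6)],
}

print(best_option(options))  # mexican: 0.8*9 + 0.2*2 = 7.6 > 6.0
```

The "error in assigning utility" the post mentions is visible here: setting the Mexican utilities arbitrarily high guarantees the answer, at which point the procedure is doing no work.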
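Bayesian belief conditionalization, as the post uses the term, can be sketched the same way: new evidence re-weights each hypothesis via Bayes' theorem, and the revised probabilities can change which option comes out best. A minimal sketch, with hypotheses, priors, and likelihoods invented for illustration:

```python
# Bayesian conditionalization: on new evidence E, each hypothesis H is
# re-weighted by P(H | E) = P(E | H) * P(H) / P(E). Numbers are invented.

def update(priors, likelihoods):
    """priors: {hypothesis: P(H)}; likelihoods: {hypothesis: P(E | H)}.
    Returns the posterior distribution {hypothesis: P(H | E)}."""
    evidence = sum(priors[h] * likelihoods[h] for h in priors)  # P(E)
    return {h: priors[h] * likelihoods[h] / evidence for h in priors}

priors = {"taqueria_is_good": 0.5, "taqueria_is_bad": 0.5}
# New information: a friend raves about the place -- much more likely
# to happen if the taqueria is actually good.
likelihoods = {"taqueria_is_good": 0.9, "taqueria_is_bad": 0.2}

posterior = update(priors, likelihoods)
print(posterior["taqueria_is_good"])  # ~0.82: belief shifts with the data
```

Feeding the posterior back into the expected-utility comparison is what makes this a learning system: each new observation can flip which option is judged to have the most potential for good.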