Graduate School

Eolin said:
Then let's get into an argument.

You think an axiomatized method of deriving satisfaction is inherently problematic. That's fine. Let me explain how the basics of decision theory work, and we'll see if you still see it the same way. Likely you will. I haven't done this in a while, so my apologies if the probabilities don't make sense.

Let's start with a pretty simple example. I want to eat lunch, and I want to spend as little money as possible. And I really like meat and other things that taste good. My choices include anything I could find or make, but let's put them into three basic categories: Prime Rib (good-tasting meat), Mexican Food (standing in for all pseudo-cheap eating-out options), and leftovers. Now, as this is a toy example, I can make the actual decision pretty easily. The calculations I'm about to demonstrate are better suited to something like buying a car, but they'll do here.

Currently decision theory doesn't have a mechanism that I know of for determining what's true. That's what Bayes' theorem is for, but it hasn't been integrated yet. And I'm not about to try to do that in an internet post. So, for each of these, all I'm really going to do is list the expected utility from each choice, based on whatever criteria I can think of right now. Wish I knew how to do tables. The numbers represent utilities. And, of course, this doesn't take into account other people's happiness and how that affects me -- though it is fairly easy to do so.

Food:    Prime Rib   Mexican   Leftovers
Cost:        1          2          4      (being free is great!)
Taste:       9          7          6
After:       3          4          2      (I like how good food makes me feel.)
Total:      13         13         12
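
If it helps to see the bookkeeping spelled out, here's a minimal Python sketch of that table -- the numbers are the same made-up utilities from above, nothing more:

```python
# Toy expected-utility table (illustrative numbers only).
scores = {
    "Prime Rib": {"cost": 1, "taste": 9, "after": 3},
    "Mexican":   {"cost": 2, "taste": 7, "after": 4},
    "Leftovers": {"cost": 4, "taste": 6, "after": 2},
}

# Total utility per option is just the sum over criteria.
totals = {option: sum(criteria.values()) for option, criteria in scores.items()}
print(totals)  # {'Prime Rib': 13, 'Mexican': 13, 'Leftovers': 12}
```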

I contrived the example such that Prime Rib and Mexican would tie for the win. From here, what decision theory generally calls for is to flip a coin. This being EN World, I'd say roll a die -- say, a d20, with Prime Rib being 1-10 and Mexican being 11-20.

It isn't that we actually use the die to decide what to do -- that'd be boring folly. Instead, as the die is spinning, we find out how we want it to come out -- and that's what we decide.
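
Mechanically, the tie-break is a one-liner; the point above is just that the roll is a prompt rather than the real decider. A toy sketch:

```python
import random

# Hypothetical tie-break between the two top-scoring options (illustrative only).
tied = ["Prime Rib", "Mexican"]
roll = random.randint(1, 20)              # simulate a d20
choice = tied[0] if roll <= 10 else tied[1]
print(f"Rolled {roll}: the die says {choice} -- now notice what you were hoping for.")
```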

Now, where do real equations come in? Bayes' theorem, which is the best way we've currently got for crunching conditional probabilities in order to decide between hypotheses. It's hypothesis testing at its current best, and for this example our hypotheses might be:

H1: I desire Prime Rib.
H2: I desire Mexican Food.
H3: Leftovers are where it's at!

What I want to be able to do with Bayes' theorem is deduce, using sweet probabilities, which of these is most likely to satisfy my desires. Once I do that, the world is my oyster. Or something.
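
For concreteness, Bayes' theorem says P(H|E) = P(E|H) * P(H) / P(E). A toy sketch of how that update might look for these three hypotheses -- every prior and likelihood here is invented purely to show the arithmetic:

```python
# Toy Bayesian update over the three hypotheses (all numbers invented for illustration).
priors = {"H1: Prime Rib": 0.4, "H2: Mexican": 0.4, "H3: Leftovers": 0.2}

# P(evidence | hypothesis), where the evidence is, say, "I'm craving something rich."
likelihoods = {"H1: Prime Rib": 0.8, "H2: Mexican": 0.5, "H3: Leftovers": 0.2}

# P(E) = sum over hypotheses of P(E|H) * P(H)
p_evidence = sum(likelihoods[h] * priors[h] for h in priors)

# P(H|E) = P(E|H) * P(H) / P(E)
posteriors = {h: likelihoods[h] * priors[h] / p_evidence for h in priors}
print(posteriors)  # H1 comes out ahead with these made-up numbers (~0.57)
```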

And that's how we can mathematically derive desire satisfaction.

The above (c) by me. Don't steal from me, I'm just a graduate student.

Sure! Well, let's discuss. I prefer that to argument.

If I understand your description of decision theory correctly, it primarily consists in listing your options on one axis and your important criteria on another axis, then assigning values to each criterion for each option. Total up the values, and that's what you should decide.

Questions:
1. Does the option with the highest value represent what you should decide or your actual decision? I'm thinking of the person who (in your example) scores prime rib the highest, then says "Screw it, I want Mexican!" Or, under decision theory, did they just make an error in assigning values?
2. What do those numerical values represent? Aren't they arbitrary? Is that a problem?
3. What about criteria that emerge during the course of inquiry? Are you asking too much in having all your relevant criteria for evaluation set from the start? Example: So, you decide on prime rib using your method above. As you pick up your keys to go out the door to drive to the prime rib place, you realize that the prime rib joint is all the way across town and you only have a quarter tank of gas. So, during the course of carrying out your decision, new and relevant criteria have emerged that complicate your choice. Do you now recompute?

I guess these questions boil down to two related questions about decision theory: what does it actually do and does it do that well?

Philosophy is fun. :)
 



nakia said:
Sure! Well, let's discuss. I prefer that to argument.

Whatever we call it today. :)


nakia said:
If I understand your description of decision theory correctly, it primarily consists in listing your options on one axis and your important criteria on another axis, then assigning values to each criterion for each option. Total up the values, and that's what you should decide.

Only because I am ignoring probability and simplifying to a point where I don't want my professors to see it. But yeah, basically you determine which outcome has the highest expected utility. How you get there can shift.

nakia said:
1. Does the option with the highest value represent what you should decide or your actual decision? I'm thinking of the person who (in your example) scores prime rib the highest, then says "Screw it, I want Mexican!" Or, under decision theory, did they just make an error in assigning values?

If they knew what they wanted, what're they doing going through a decision procedure?

But seriously, yeah, that'd be an error in assigning utility. If you already know that you are going to choose one outcome over the others, then there is little need to go through the formalized decision nexus. Instead, you could just give Mexican food an arbitrarily high utility ranking on "taste" or some other criterion such that it will necessarily win.

In other words, because you have already come to a decision, there is no reason for you to use a decision procedure.

nakia said:
2. What do those numerical values represent? Aren't they arbitrary? Is that a problem?

They represent utility, which is probably defined in terms of human desire-satisfaction or human happiness or something else that has an intuitive definition. One problem here is that once a term is used in a formalized definition, it is difficult to define it without simply pointing back to the formalization, which causes some obvious problems -- such as not always knowing what we're talking about. Decision theory doesn't define happiness for us; that's left up to the individual.

nakia said:
3. What about criteria that emerge during the course of inquiry? Are you asking too much in having all your relevant criteria for evaluation set from the start? Example: So, you decide on prime rib using your method above. As you pick up your keys to go out the door to drive to the prime rib place, you realize that the prime rib joint is all the way across town and you only have a quarter tank of gas. So, during the course of carrying out your decision, new and relevant criteria have emerged that complicate your choice. Do you now recompute?

That's actually a very good point. This is where Bayesian conditionalization comes into play -- which is a fancy way of saying that we should always be able to modify our beliefs (and thus, our actions) when we get new information. If we set this up as a real Bayesian learning system (I know, I'm throwing that term around without defining it -- go look up Bayes' theorem), then all new data would change the probabilities of our various hypotheses.
And that, in turn, would change which one we decide has the most potential for good.
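
To make that concrete, here's a sketch of what conditionalizing on your gas-tank scenario might look like -- the function is just Bayes' theorem again, and every number is invented for illustration:

```python
# Toy conditionalization: yesterday's posterior becomes today's prior (all numbers invented).
def update(priors, likelihoods):
    """Return posteriors via Bayes' theorem: P(H|E) proportional to P(E|H) * P(H)."""
    p_evidence = sum(likelihoods[h] * priors[h] for h in priors)
    return {h: likelihoods[h] * priors[h] / p_evidence for h in priors}

beliefs = {"Prime Rib": 0.45, "Mexican": 0.45, "Leftovers": 0.10}

# New information arrives: only a quarter tank of gas, and the prime rib place is across town.
# These likelihoods just say that evidence fits "Leftovers" best and "Prime Rib" worst.
gas_evidence = {"Prime Rib": 0.2, "Mexican": 0.5, "Leftovers": 0.9}

beliefs = update(beliefs, gas_evidence)
print(beliefs)  # Prime Rib drops, Leftovers climbs; then you recompute the decision from here.
```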

nakia said:
I guess these questions boil down to two related questions about decision theory: what does it actually do and does it do that well?

It helps us make decisions. And no, it doesn't yet do it well. I'm working on that.

One problem decision theory is working on, one that a former professor of mine is working on, is that we are inherently pretty stupid creatures. What I think he is working toward is a methodology for making decisions that we can use in everyday life. Truth be told, I wouldn't be altogether surprised if it wound up looking a lot like a virtue-based ethical system in which you are supposed to act in a certain sort of way in order to maximize good so far as you can understand it.

Basically, I think we're going to come full circle in utilitarianism and wind up back with a well-defined and worked out virtue ethics that looks a lot like that of Aristotle.

But that last bit is just speculation. For now, it only makes sense to judge which decision you will make based upon how much desire satisfaction it can cause. If we're not basing our decisions on human happiness, then I don't know what we're basing them on. And that's all that decision theory lets us do -- it's a methodology for coming to decisions.
 

Eolin said:
If they knew what they wanted, what're they doing going through a decision procedure?

But seriously, yeah, that'd be an error in assigning utility. If you already know that you are going to choose one outcome over the others, then there is little need to go through the formalized decision nexus. Instead, you could just give Mexican food an arbitrarily high utility ranking on "taste" or some other criterion such that it will necessarily win.

In other words, because you have already come to a decision, there is no reason for you to use a decision procedure.

Two points here: 1. One point of my example was that they didn't know what they wanted, but somehow the procedure clarified it for them. During the process, they realized they wanted Mexican food, even though they had already assigned higher values to prime rib. It seems to me this happens a lot in real life -- maybe we are dishonest with ourselves in assigning values, maybe it's just desire trumping reason -- but we frequently make decisions based on rational processes and then ignore those decisions. That's not a problem with decision theory, really, but it seems to be a large part of the human condition that maybe such a theory should reckon with.
2. Does decision theory claim any normative force for the outcomes? My guess is that DT would say the outcome of the process has normative weight because it's what you "really want". Since you really want it, you'll do it. If you don't, you didn't really want it. But that seems a little circular.

Eolin said:
That's actually a very good point. This is where Bayesian conditionalization comes into play -- which is a fancy way of saying that we should always be able to modify our beliefs (and thus, our actions) when we get new information. If we set this up as a real Bayesian learning system (I know, I'm throwing that term around without defining it -- go look up Bayes' theorem), then all new data would change the probabilities of our various hypotheses.
And that, in turn, would change which one we decide has the most potential for good.

I'll have to check out Bayes' theorem. I've done a lot of work with pragmatism (Dewey in particular -- see avatar) and judgment. There appear to be a lot of similarities between Bayes (as you explain it) and Dewey.

Anyway, I've got some work to do today and I'm out of town (to a philosophy of education conference, actually), so I can't do much more now. Perhaps when I get back. Congrats again on grad school!
 
