Why does the idea of no Free Will bother some people?

Umbran

Mod Squad
Staff member
Supporter
[MENTION=19675]Dannyalcatraz[/MENTION] - you thought that was jumbo? I'm not done yet!

So if I've got a Schrödinger's box sitting in the kitchen, and I'm watching TV, and my dog goes into the kitchen and sniffs at the box, does my dog have free will if she comes back to me with a dead cat in her mouth? (I mean that the dog opened the box and extracted the dead cat, not that the dog killed the cat.)

After all, my dog has resolved a quantum situation, finalizing it into either a live or a dead cat.

Maybe she did. Maybe dogs have enough sentience for that. I'm good with that idea. Frankly, I'm good with the idea that the cat is sentient enough too, such that there actually isn't any issue - that's something Schrödinger didn't worry about at the time, to be honest. He wasn't talking about sentience and free will, just about the absurdity of a cat being both dead and alive at the same time.

Or, maybe the dog isn't sentient/free-willed enough. She left the area of my perception - now the dog is in as much an unresolved quantum state as the cat. Maybe the dog+cat system doesn't resolve until I *look* into the doorway, and the system falls into a known state.

This way lies an uneasy idea - none of the Universe actually exists as "reality" outside the range of perception of qualified observers.

There's a basic way out of this, which amounts to, "actually, the observer isn't important, the form of interaction is important". We still end up in the same place, though, so bear with me...

Here's the thing: The uncertainty principle doesn't actually seem to mean much for large objects. We notice the effect for very small things, like electrons and atoms, but as the mass of an object gets big, the effect shrinks.

I can go into why that is, but it requires math to fully express. So, for the moment, I'll assume you all trust me on that - for micro-scale objects, the uncertainty principle means large effects. For macro-scale objects, it means very little. So, for things like atoms and electrons, we have large ranges of uncertainty. For things like cats and bowling balls, not so much.
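To put rough numbers on that claim (these figures are my own back-of-the-envelope illustration, not something from the thread): Heisenberg's relation Δx·Δp ≥ ħ/2 gives a minimum velocity uncertainty Δv ≥ ħ/(2·m·Δx), so the effect shrinks in direct proportion to mass.

```python
# Minimum velocity uncertainty from Heisenberg's relation, Δx·Δp ≥ ħ/2,
# for an electron vs. a bowling ball, both localized to Δx = 1 micrometer.

HBAR = 1.054_571_817e-34  # reduced Planck constant, J·s

def min_velocity_uncertainty(mass_kg: float, delta_x_m: float) -> float:
    """Smallest Δv allowed by Δx·Δp ≥ ħ/2, i.e. Δv ≥ ħ / (2·m·Δx)."""
    return HBAR / (2 * mass_kg * delta_x_m)

electron = min_velocity_uncertainty(9.109e-31, 1e-6)  # ~58 m/s: huge
ball     = min_velocity_uncertainty(7.0, 1e-6)        # ~7.5e-30 m/s: nothing

print(f"electron:     {electron:.2e} m/s")
print(f"bowling ball: {ball:.2e} m/s")
```

About thirty orders of magnitude separate the two, which is why quantum fuzziness matters for electrons but is utterly invisible for cats and bowling balls.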

We could consider that in Schrödinger's cat, we aren't considering the interaction between a quantum effect and an observer, but between a quantum effect and a macro-scale object (which just happened to be an observer). Normally, single quantum-scale events mean very little to macro-scale objects. Schrödinger just set up a particular case where a quantum effect was very potent - his original had a radioactive atom in the box, and if it decayed, a mechanism broke a poison vial, killing the cat. So, we needed interaction with a large object to resolve it - Schrödinger's large object just happened to be a human being. But maybe anything macro-scale outside the box would do - say, a ball that bounces off the lid and opens the box.
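The amplifier in that setup is the decay probability itself. As a sketch (the one-hour half-life is my own illustrative choice, not part of Schrödinger's original), the chance the cat is still alive after time t is just the chance the atom hasn't decayed:

```python
import random

def cat_alive_probability(t: float, half_life: float) -> float:
    """P(atom has not decayed after time t) = (1/2) ** (t / half-life)."""
    return 0.5 ** (t / half_life)

# Leave the box closed for exactly one half-life: a true 50/50 outcome.
p_alive = cat_alive_probability(1.0, 1.0)  # 0.5

# Toy Monte Carlo: each "opening of the box" resolves the state one way.
random.seed(42)
openings = 10_000
survivors = sum(random.random() < p_alive for _ in range(openings))
print(f"P(alive) = {p_alive}; survivors in {openings} openings: {survivors}")
```

The point of the setup is that a single micro-scale event (one decay) is wired to a macro-scale outcome, which is what makes the cat's fate track the atom's state.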

Thus, maybe any time we have a quantum effect interacting notably with a macro-scale object, the macro-scale object is able to collapse the quantum probabilities into one reality. This doesn't affect our free-will idea one bit: if the activity of the mind/brain/thought-process has quantum properties, we still have the person (who is macro-scale) collapsing the wave of probability of his or her own mind.
 


Dannyalcatraz

Schmoderator
Staff member
Supporter
Schrödinger's Cat does not require a sentient being to open the box, just that the box be opened sufficiently for the cat to be perceivable.

IOW, if a tree in the forest falls on a box containing Schrödinger's Cat, it will be dead.
 

Aaron L

Hero
IOW, if a tree in the forest falls on a box containing Schrödinger's Cat, it will be dead.

HA! Rad.

As an aside, even cats and dogs and cows and sheeps and fishies are sentient; sentience is the quality of having senses. That's why I personally prefer the term sapience for human level self-reflective intelligence. I just think it's more precise. But I won't nitpick.

I am absolutely loving this discussion. Philosophy, psychology, and quantum physics. The perfect mix. Where the three meet is where Weird Science begins! Free will as an effect of our self-reflective mind observing itself and collapsing its own wave function. I love it.

Umbran, with regard to what you said, does that just mean that quantum waveforms automatically collapse once things move from the quantum scale to the macro scale?

Because, like you said, always requiring an outside observer would essentially require sentient minds to exist in order for the universe to exist as anything other than a lot of uncollapsed waveforms and vague probabilities... but if the unobserved universe only existed as uncollapsed waveforms and probabilities instead of a definite reality, how could sentient minds have come about?

Unless one wanted to posit the idea of a God existing as a universal observer to collapse the wavefunction of the universe, which I find problematic.

(Waveform? Wave Function? Am I getting terms mixed up?)

I hope that made sense.
 

Umbran

Mod Squad
Staff member
Supporter
As an aside, even cats and dogs and cows and sheeps and fishies are sentient; sentience is the quality of having senses.

Eh, I don't think that flies. Plants have ways of sensing the universe around them, too, but they aren't generally considered sentient.

Fact is, there's more than one definition of "sentient". Some say it is "having senses". Others would say it is "having subjective sensory impressions". Yet others would say it is "being conscious of having sensory impressions". Seems the jury's still out on a precise meaning. So, I think we'll have to settle on having our own meaning in this context.

That's why I personally prefer the term sapience for human level self-reflective intelligence. I just think it's more precise. But I won't nitpick.

Too late :p That's okay, though. Science discussion generally requires a bit of nitpicking.

But note how "human-level" has not really been part of the discussion yet. You may be being more precise, but we're still being vague, mostly intentionally, I suspect. I, personally, am not sure mentation is like D&D character advancement, with levels one clearly "above" another. That's akin to the old "ladder" view of evolution, which these days seems pretty outmoded.

Umbran, with regard to what you said, does that just mean that quantum waveforms automatically collapse once things move from the quantum scale to the macro scale?

I'm not sure what you mean by "things move from". Individual things rarely move from one scale to the other - an electron is an electron, and it never goes from being quantum scale to macro scale. If you mean, "as our observations move from looking at quantum-scale to macro-scale," then... almost, yes. Surely, macro-scale objects don't usually have discernible quantum nature.

However, there are some macro-scale things that have quantum properties - Superconducting QUantum Interference Devices (SQUIDs - really sensitive magnetic field sensors), Schrödinger's cat, and a few others. In the scenario I just described, the questionable aspects of such items are best described in quantum terms. Only after one interacts in a relevant way with a macro-scale object - such that it has to resolve for that object's universe to make sense - does it resolve.

The box of Schrödinger's cat is "interacting" with the table it is sitting on, but that's not enough to resolve the state, because the table's universe makes sense so long as the box is closed. It is only when the box opens that the rest of the universe would have issues with this alive/dead cat, so the system resolves.

Because, like you said, always requiring an outside observer would essentially require sentient minds to exist in order for the universe to exist as anything other than a lot of uncollapsed waveforms and vague probabilities... but if the unobserved universe only existed as uncollapsed waveforms and probabilities instead of a definite reality, how could sentient minds have come about?

Yes. In the "observer required" model, there are two basic possibilities:

1) There is some Prime Mover who does the initial observation. Like you, many find this problematic.

2) As the waveform(s) of the Universe evolves, the probability of sentient minds existing in the Universe increases. If the probability of there being sufficient sentience to act as an observer ever reaches 100%, then the Universe as a whole resolves, history and all. Rather like human free will arising by self-observation, the universe kind of observes itself, and there we are! This is a very "anthropic principle" kind of universe.

Many folks *really* don't like the mumbo-jumbo there, which is why the "quantum/macro" interpretation arose.

I do have to make this clear - we are in the realm of interpretations of quantum mechanics, not in the realm of proven science.
 

Aaron L

Hero
You're right about the whole sentient vs sapient thing. I just personally like sapient because of the fuzzy definition of sentient. But it's just personal taste, I'm not going to argue with anyone about it here.


When I said "human level intelligence" I probably really should have said "human-type intelligence." I wasn't trying to imply there were levels or "grades" of intelligence, and definitely know that evolution isn't an ever progressing process reaching toward an ultimate "goal." :) Evolution just adapts creatures to their environment, it doesn't have a set goal of making them "better." I actually have a big problem with that particular trope in a lot of science-fiction... especially when stories have all life "evolving up" toward the goal of becoming "energy beings." (Sorry, Babylon 5. I still love you, but that part was dumb.)


And yes, when I said "things move from" what I meant to say was "as our observations move from looking at quantum-scale to macro-scale." You figured out my poorly worded question. :)


And I understand that this is all the area of philosophy of quantum physics and not hard established science. That's why I like discussing it so much. :) It's Mad Science. :p



I only started thinking about all this now because of the idea that was brought up about brains operating on the quantum level and such. It just got me wondering. Sorry about the tangent.
 

Schrödinger's cat is probably dead. It's been sitting in a box with a lump of radioactive material for 77 years.

If nothing else, it joined the choir invisible out of old age probably no more than 63 years ago.
 

If free will doesn't exist, the illusion of free will may have been an evolutionary advantage - maybe brains work better if they have the capacity to believe in free will built in?
 

Janx

Hero
If free will doesn't exist, the illusion of free will may have been an evolutionary advantage - maybe brains work better if they have the capacity to believe in free will built in?

It's also possible that the "feeling" of having free will is sufficient in defining its presence.

Like Descartes' "I think, therefore I am": that which is unable to express such does not have free will.

A gram of sodium, in a spoon, held over a swimming pool is unable to resist or express its will on the matter of being dropped into the pool. Once dropped, it has no choice in forming a violent reaction with the water.

Whereas a puppy, held over a swimming pool, may be calm, or may squirm and try to avoid its fate. Once dropped, the puppy may happily swim about, or may struggle to get out of the water.

As such, the amalgamation of neurons and chemical reactions creates a complex matrix resembling free will, one that differentiates the entity from a rock or a snail.

I would then posit, that one day, we may have a manufactured entity that exhibits this 'free will' and that entity may be eligible for the same rights and considerations as other recognized biological entities.

I would suspect that the ability to pass a Free Will test would be more complex and imply greater cognitive ability (able to set and achieve its own goals, solve problems), compared to the basic Turing Test à la chat-room personability.

What would such a test look like?

Perhaps: present the testee with two vastly different problems and tell them they can choose one to solve.

The entity may be exercising free will in deciding which to solve, either by suitability (math is hard, so skip the math problem) or by ego (I'll do the hard problem and show off how smart I am). I think there'd need to be more to it than that - almost something subjective, forcing the entity to indicate a non-objective preference that a purely algorithmic decision process wouldn't produce. What's your favorite color, for instance.
 

Umbran

Mod Squad
Staff member
Supporter
It's also possible that the "feeling" of having free will is sufficient in defining its presence.

Except that human beings are capable of feelings and perceptions over which they do not have control. So, it is in theory possible for us to "feel" like we have will, but have that feeling be merely one more automatic response, an illusion.

What would such a test look like?

Given current technology, I expect it is not testable.

We are at the point that we can tell that sometimes (scarily often) our decision process goes through what amounts to emotional processing before it ever hits logical processing. That emotional processing is not conscious - it generally produces results that we then attach logical reasons to after the fact. But, that doesn't mean it isn't "free will" - there may still be a personal choice buried in there, rather than what amounts to emotional algorithmic processing.
 

Dannyalcatraz

Schmoderator
Staff member
Supporter
Except that human beings are capable of feelings and perceptions over which they do not have control.

...but which they may, under certain circumstances, learn to control.

Not eliminate, mind you - we know from years of studies that the irrational, emotional limbic system is what engages first, not the rational mind - but control, allowing for considered reactions to situations, even intensely charged ones.
 
