Objectivism Online Forum

Rand's argument against determinism


You do not have free will.

However, you will go and listen to this:

The Leonard Peikoff Show Clip - Free Will or Determinism (YouTube recording of a portion of one of Dr. Peikoff's radio shows, a discussion on free will versus determinism.)

You will then go and listen to these portions of some of Dr. Peikoff's podcasts:

Episode 16 -- May 26, 2008

Question: Did determinism hold true on this planet until humans obtained their volitional capacity? (Last question; at about 12:40)

Episode 34 -- October 27, 2008

Question: Do our material brains comprise our entire consciousness, and if so, doesn't that mean that there is no free will? (Next to last question; at about 11:30)

Episode 48 -- February 09, 2009

Question: How did Epicurus reconcile free will with all the atoms that merely react by hitting each other? (About 4:20)

Episode 55 -- March 30, 2009

Question: If there were a being that was so intelligent that it always knew the correct action to take, wouldn't that knowledge be irresistible, determining every action of such a being, effectively negating its free will? (About 5:20)

After listening to these various clips, you will return here and report on your experience. You have no choice in the matter!

Thanks for the links. I may not have a free choice, but that doesn't mean I'm predictable :pirate:

If science explained human action in purely physical terms and anything explainable in physical terms was deterministic, then volition would be false,

This is not true. Explanations are not causes. The truth of an explanation comes from the facts, it doesn't cause the facts, or prove determinism.

Do you have any support for this claim? Because I highly doubt it. Even if you could build the equipment to accumulate light, register sound waves, identify various chemical compounds in the air, and determine the shape and texture of various surfaces as well as our eyes, ears, noses, mouths, and skin do, it would still be just that - a bunch of disjointed data with no consciousness to bring it all together, identify it, conceptualize it, name it and define it. And if "it" can't do that, then it will never be in any position to make any sort of decision.

It's something Dennett said in an interview once. He seems to know the field pretty well, and he said that it's just about feasible, in theory, right now (in a way that it wasn't, say, a few decades ago) - if we devoted enough resources to it and made a huge, global, concerted effort. There are sufficient bits and pieces of theory on all the components of intelligent behaviour being worked out by groups all over the world, each of them producing a little piece of conscious behaviour, and some of them are getting close to passing the Turing Test (as chess-playing AI has already done) in their respective areas. Some components of intelligent behaviour these people are working on are more concrete (e.g. locomotion), others more abstract (putting sensory data together, conceptualizing it, etc.). I think he's right that it's not too much of a stretch to think it is actually possible, but it would be a monumental management task, prohibitively expensive - and more to the point, both scientifically pointless and not needed by anyone. Economically speaking, nobody needs such an expensive piece of hardware.

Better to let it come in the future, as a result of economic progress, when those lesser problems are definitely solved, and things get cheaper, so it's more economically feasible to "bring it all together".

If you're holding this up as an example of how a robot is making decisions, then I disagree. The robot isn't making any decision at all - it's merely following its programming, as you point out. If I tell the computer in my car engine to print out my research paper for tomorrow and it just sits there, doing nothing, is it disobeying my orders? Did it choose to disobey me? When the robot goes against its programming, then it might be said to be making a decision, but without a consciousness there I don't think so.

The idea is that these hypothetical (but increasingly feasible) AIs are self-reprogramming. They can update their goals, just as you or I.
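To make the "self-reprogramming" idea concrete, here is a toy sketch; the `Agent` class and its method names are my own illustration, not drawn from any real AI system:

```python
class Agent:
    """A toy goal-driven agent that can replace its own goal."""

    def __init__(self, goal):
        self.goal = goal

    def update_goal(self, new_goal):
        # The agent "re-programs" itself: its future behaviour is
        # now driven by a goal it was not given at creation.
        self.goal = new_goal

    def act(self):
        return f"pursuing: {self.goal}"


a = Agent("play chess")
a.update_goal("compose a jingle")
print(a.act())  # pursuing: compose a jingle
```

Whether such goal-swapping counts as volition, or merely as one more programmed transition, is exactly the point under dispute in this thread.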


You're welcome. I actually hope they help.

Perhaps I'm missing it, but it seems like you have a problem with accepting the self-evident, as though somehow the self-evident is unreliable, as though it is an unreliable foundation of knowledge. But it's the opposite. If the self-evident is to be doubted, then there's no foundation for knowledge.

If I'm right, why is the self-evident such a problem? Why does it seem so unreliable?

It might help to consider just what is in fact self-evident. If you make the mistake of assuming that too much is self-evident, then it will seem unreliable.

[Edited to add: Also perhaps helpful is to realize that all knowledge is awareness of what is, existence. That's all knowledge is. Whose awareness? Yours, mine, each individual's. There's sensory awareness, perceptual (object) awareness, and there's conceptual awareness -- which is but a method of grasping, of being aware of, what is.]

Edited by Trebor

I know what you mean - but consider: for years people thought playing a game of chess was a standard robots couldn't achieve. Then along came Deep Blue and others!

Taking music, I think a robot will soon be able to write nursery rhymes and jingles that pass the Turing Test (actually I think there may already be one like that); then rap would be kind of intermediary (it's not as simple as nursery rhymes - it can sometimes be quite sophisticated in terms of rhyming and meaning). Music like Rachmaninov (or Mozart, or ...) would then be the sort of ultimate Turing Test. (And I think such a robot would have to be a robot that wasn't merely purpose-built for that, as chess-playing computers were purpose-programmed for chess; it would have to be a proper, self-reprogramming, "living" robot that could do other things than write as well as Rachmaninov. It would have to be such that it might even resent your asking it to write a tune like Rachmaninov, because it felt more in a Mozartish mood that day! :) )

Well, as I said, I'll believe it when I see it.

Since you mentioned moods, though: Would it be possible for this robot to find itself in a mood to, say, murder someone? How would it decide whether or not to act on its mood? If it committed a murder, should its creator be punished for it--or should the robot be punished?

I think he's right that it's not too much of a stretch to think it is actually possible,

Actually, it's a huge stretch to think it is actually possible. Not just technically, but philosophically as well. Copying intelligent behavior is not evidence of consciousness. Gathering sensory data is not sensing.

The idea is that these hypothetical (but increasingly feasible) AIs are self-reprogramming. They can update their goals, just as you or I.

They're still following their programming. You're never going to be able to get away from that fact. We are not programmed. I can update my goals and yet still choose to not act toward them, and even act against them. Following orders is not volition.

You're welcome. I actually hope they help.

Perhaps I'm missing it, but it seems like you have a problem with accepting the self-evident, as though somehow the self-evident is unreliable, as though it is an unreliable foundation of knowledge. But it's the opposite. If the self-evident is to be doubted, then there's no foundation for knowledge.

If I'm right, why is the self-evident such a problem? Why does it seem so unreliable?

I don't doubt that there is no such thing, that is self-evident. I doubt that free will is self-evident.

I have not been through all of Peikoff's podcasts yet, but so far my issues were not addressed.

Usually the argument that free will is self-evident starts with:

"I observe that I make choices." But if you follow that through, you don't have anything self-evident:

"What are choices?"

"Picking between more than one alternative."

"What are alternatives?"

"Different courses of action that I could have followed."

"How can you prove that you could have taken a different action?"

At this point, the only answer I see is:

"I can't, since the proof would require an impossible experiment." (recreating the same conditions)
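The "impossible experiment" premise can be illustrated with a deterministic process: replayed under literally identical starting conditions, it cannot do otherwise, so no replay can ever reveal whether an alternative was open. This is only a sketch of the argument's premise, not a model of a brain; the function name is mine:

```python
import random

def choose(seed):
    # A deterministic "chooser": given the same initial conditions
    # (the seed), it makes the same "choice" every single time.
    rng = random.Random(seed)
    return rng.choice(["tea", "coffee"])

# Recreating the exact same conditions always reproduces the outcome,
# so replaying can never show that a different action was possible.
runs = {choose(42) for _ in range(1000)}
print(len(runs))  # 1
```

Whether human action is like this, or whether genuinely identical conditions could yield different actions, is the question the experiment cannot settle.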

I don't doubt that there is no such thing, that is self-evident. I doubt that free will is self-evident.

I have not been through all of Peikoff's podcasts yet, but so far my issues were not addressed.

Usually the argument that free will is self-evident starts with:

"I observe that I make choices." But if you follow that through, you don't have anything self-evident:

"What are choices?"

"Picking between more than one alternative."

"What are alternatives?"

"Different courses of action that I could have followed."

"How can you prove that you could have taken a different action?"

At this point, the only answer I see is:

"I can't, since the proof would require an impossible experiment." (recreating the same conditions)

Okay. One issue at a time.

You say, "I don't doubt that there is no such thing, that is self-evident. I doubt that free will is self-evident."

If you will, for clarity's sake, please restate that without the double negatives. (I realize that you are saying that you do doubt that free will is self-evident, so you don't have to clear that up. I understand. We can leave free will aside for now.)

Are you saying, "I doubt that there is such a thing as the self-evident." (You do not think that there are any things that are self-evident.)

Or are you saying, "I do not doubt there is such a thing as the self-evident." (You have no doubts, there are indeed things that are self-evident.)

Also, what is the meaning of self-evident?

If it's your view that there are things that are self-evident, what are a few examples of things that are self-evident?

Further, assuming you do say there are things that are self-evident and give a few examples, do those things that you say are self-evident require proof?

Lastly, what is proof?


Yeah, sorry about that. I think there are self-evident concepts or entities, like existence and consciousness. They can't be refuted without using them; they form the base of our thinking.

They are axioms and therefore can not be proven. Free will is not self-evident or axiomatic in that sense.

Proof is a statement that has been verified with the laws of logic.

Yeah, sorry about that. I think there are self-evident concepts or entities, like existence and consciousness. They can't be refuted without using them; they form the base of our thinking.

They are axioms and therefore can not be proven. Free will is not self-evident or axiomatic in that sense.

Proof is a statement that has been verified with the laws of logic.

For now, I'd still prefer to leave the issue of free will out. I don't see how it would be helpful to discuss it without first being certain that we agree on more basic issues, specifically (at least) the meaning of "self-evident" and "proof." In other words, it's not going to be helpful to ask whether or not free will is self-evident without understanding what self-evident means.

I find your comment, "there are self-evident concepts or entities, like existence and consciousness" to be confusing, and need to clear it up.

Self-Evident:

When we speak of “direct perception” or “direct awareness,” we mean the perceptual level. Percepts, not sensations, are the given, the self-evident.

Introduction to Objectivist Epistemology, 5.

Do you agree with that statement by Miss Rand?

Do you agree that percepts (perceptions) are self-evident, that via perception we are given direct awareness of existence, of what is, of objects and their attributes, some of them at least, etc.? We don't need proof of what we're given directly; that it is is self-evident. The self-evident is sufficient evidence to us (our self) of itself. Nothing more is needed to be aware of it but simply to be aware of it.

Would you agree that when I, who may have learned many things in my life about apples, perceive an apple -- see it, touch it, smell it, taste it, etc. -- I am given the same self-evident information as a child who is perceiving the apple for the first time, and no more?

In other words, what's self-evident to me is similarly self-evident to a child, assuming we're both normal, that our senses function as they should, etc.

We are both given direct, self-evident, awareness of the apple and certain qualities, attributes and characteristics of it. We each directly perceive it as a firm object, we perceive its color and even the variations in color, its stem if it's still attached, its taste and texture, its smell or odor, etc.

All the rest that I know, yet the child does not, beyond the directly perceivable, is learned, conceptual knowledge, not given, not self-evident.

Agreed?

Also, you say that "proof is a statement that has been verified with the laws of logic." That too is confusing to me.

Proof:

“Proof,” in the full sense, is the process of deriving a conclusion step by step from the evidence of the senses, each step being taken in accordance with the laws of logic.

Leonard Peikoff, “Introduction to Logic,” lecture series (1976), Lecture 1.

Do you agree with Dr. Peikoff's definition of "proof"? That proof is a process of drawing a conclusion on the basis of evidence and logic?

To prove that someone committed a murder, for example, you have to present the evidence that supports a certain conclusion (statement) that the person committed the murder. There has to be evidence of a murder, and there has to be evidence, like motive and opportunity and means, etc., that logically supports the conclusion that the suspect is the one who committed the murder.

Agreed?


Concerning self-evidence and axiomatic concepts, "existence," "identity," and "consciousness," etc., they are implicitly self-evident in every perception, even to those who have never conceptualized them.

Dr. Peikoff gave an example (can't remember where) that I found helpful. In essence:

When you see a tomato, you are aware that it IS (Existence), you are aware that IT (Identity) is, and you are AWARE (Consciousness) that it is.

Whether or not one has conceptualized the axiomatic concepts, one is aware of their referents in every perception, and in that sense they are self-evident. It is self-evident to you that it exists, that it is something, and that you are aware of it. Such are the referents of the axiomatic concepts.

Self-Evident:

Nothing is self-evident except the material of sensory perception.

“Philosophical Detection,” Philosophy: Who Needs It, 13

Do you agree with that statement by Miss Rand?

She said direct perception is self-evident. Does she mean that self-evident equals direct perception / awareness?

It's not clear to me how she distinguishes between percepts and sensations... those two have the same translation in German.

Do you agree that percepts (perceptions) are self-evident, that via perception we are given direct awareness of existence,

Existence: certainly.

of what is, of objects and their attributes, some of them at least, etc.? We don't need proof of what we're given directly; that it is is self-evident. The self-evident is sufficient evidence to us (our self) of itself. Nothing more is needed to be aware of it but simply to be aware of it.

Would you agree that when I, who may have learned many things in my life about apples, perceive an apple -- see it, touch it, smell it, taste it, etc. -- I am given the same self-evident information as a child who is perceiving the apple for the first time, and no more?

In other words, what's self-evident to me is similarly self-evident to a child, assuming we're both normal, that our senses function as they should, etc.

We are both given direct, self-evident, awareness of the apple and certain qualities, attributes and characteristics of it. We each directly perceive it as a firm object, we perceive its color and even the variations in color, its stem if it's still attached, its taste and texture, its smell or odor, etc.

All the rest that I know, yet the child does not, beyond the directly perceivable, is learned, conceptual knowledge, not given, not self-evident.

Agreed?

I think we are broadly on the same side here. The example of the child is a scientific one, in my opinion. The feeling of touch, smell, taste and the image we see of that apple are not pure "raw" data, but are interpretations of that data by certain brain-regions.

I am not sure if these specific brain-regions actually adapt over time and change our perceptions. Evidence could be that people who lose their eyesight tend to develop more acute hearing.

Concerning direct awareness of attributes of objects:

Well, you certainly are aware that a certain object looks like it has 4 edges and looks like a stone.

Is it therefore self-evident that there is in fact a stone with 4 edges in front of me? Certainly not; it might be a hallucination or an illusion.

The information I got about this object is real (something must have caused me to see what I see) and some of the information about it must be true ("To me, it looks like a stone").

Are the physical attributes of something that I see self-evident? No (illusions, hallucinations).

Also, you say that "proof is a statement that has been verified with the laws of logic." That too is confusing to me.

Proof:

Do you agree with Dr. Peikoff's definition of "proof"? That proof is a process of drawing a conclusion on the basis of evidence and logic?

To prove that someone committed a murder, for example, you have to present the evidence that supports a certain conclusion (statement) that the person committed the murder. There has to be evidence of a murder, and there has to be evidence, like motive and opportunity and means, etc., that logically supports the conclusion that the suspect is the one who committed the murder.

Agreed?

Not really... or I'm not sure what he means by "evidence". Is a mathematical axiom considered to be "evidence"? I think the word does not fit there.

I don't think you need the criterion of evidence, because the laws of logic alone tell you that you need a starting point in the form of an axiom, and "evidence" is too narrow a term.

There is no "evidence" for a mathematical axiom flying out there in the universe, and yet once you have found a proof in math (without errors), it will be true forever (given the same axioms).

Actually, it's a huge stretch to think it is actually possible. Not just technically, but philosophically as well. Copying intelligent behavior is not evidence of consciousness. Gathering sensory data is not sensing.

Well, the scientists working on these projects don't think so. The consensus has been for a while that they should work "from the ground up", and make simple systems that do simple jobs, and integrate them into a structure. That seems plausible in that it's how we're built too. We'll see.

They're still following their programming. You're never going to be able to get away from that fact. We are not programmed. I can update my goals and yet still choose to not act toward them, and even act against them. Following orders is not volition.

If you choose to not act towards a goal, or act against it, you have simply replaced one goal with another you value more (even if it be death). How is that essentially different from this (notional type of) AI re-programming itself and updating its goals? How is an AI's goal-setting any different from a human's? How is an AI setting up a "life plan" to achieve that goal any different from a human doing the same thing?

Apart from that one self-aware computational process is done in silicon, and another in meat?

But maybe you think there's some mysterious element to consciousness that can't be captured in a machine - that it's theoretically feasible that you could have what philosophers recently dubbed "zombies" - AIs/robots that were totally indistinguishable from human beings, in that anything a human being could be observed doing, such an entity could be observed doing, only one has an "inside", an "inner life", and the other doesn't?

I think that if you have a look into the science, you'll see that we're really getting quite far along the road to creating "expert system" AIs already, and tasks like locomotion, identifying things correctly, etc., are proceeding apace. And self-monitoring is involved in all these kinds of systems. The separate aspects of conscious thought, IOW, are on the way to being done - even if they can't do it now, they have roadmaps, which they didn't have a few decades ago (when it was all mysterious and murky, and people were basically fumbling around with hierarchical AIs arranged according to "laws of thought" and that kind of thing). But since the idea of modularity has taken over, things have progressed quite quickly.

You've got to think: we are evolved creatures. With an evolved entity, you can't have the computation be too energetically expensive - the entity's activity has had to be energetically/"economically" feasible all along the way. This has resulted in our brains being collections of cheap and cheerful gadgets, strapped together by hierarchical control systems more similar to what we usually think of as a cognitive system. If that's how nature does it, it makes sense to do it that way too.
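The "cheap gadgets strapped together by hierarchical control" picture is roughly the layered style of robot architecture. A minimal sketch of the idea, with all function names and thresholds invented for illustration:

```python
def avoid_obstacle(sensors):
    # Cheap reflex layer: fires whenever something is too close.
    if sensors["distance"] < 0.5:
        return "turn"
    return None  # defer to lower-priority layers

def seek_light(sensors):
    # Slightly "smarter" layer, consulted only when no reflex fires.
    return "forward" if sensors["light"] > 0.2 else "wander"

def control(sensors, layers=(avoid_obstacle, seek_light)):
    # Hierarchical control: higher-priority layers suppress lower ones.
    for layer in layers:
        action = layer(sensors)
        if action is not None:
            return action

print(control({"distance": 0.3, "light": 0.9}))  # turn
print(control({"distance": 2.0, "light": 0.9}))  # forward
```

Each layer is individually trivial; whatever "intelligence" the system shows comes from how the layers are stacked and which ones get to suppress the others.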

She said direct perception is self-evident. Does she mean that self-evident equals direct perception / awareness?

It's not clear to me how she distinguishes between percepts and sensations... those two have the same translation in German.

Yes, self-evident equals direct perception, direct awareness of entities or objects.

See the entries in the Lexicon on Perception and Sensations. I think the brief paragraphs should help to clarify the distinction between sensations and perceptions.

Sensations are not perceptions, and perceptions are not conceptions. To understand Objectivism, it's important to grasp the distinctions between these forms of awareness: sensations, percepts and concepts. (I think it's important to grasp the distinctions even if you're not looking to understand Objectivism. It's generally important to understand the meaning of the concepts one uses.)

Perception is not conception. A child can see and play with a red rubber ball without having the conceptual knowledge that it's a "red," "rubber," "ball." The red rubber ball exists, and the child can perceive it as an object, without having the concept of "perceive" or "object," etc. And the child can distinguish the red rubber ball from other things. He can do all of this via direct perception or direct awareness.

I think we are broadly on the same side here. The example of the child is a scientific one, in my opinion. The feeling of touch, smell, taste and the image we see of that apple are not pure "raw" data, but are interpretations of that data by certain brain-regions.

I am not sure if these specific brain-regions actually adapt over time and change our perceptions. Evidence could be that people who lose their eyesight tend to develop more acute hearing.

What do you mean: "The example of the child is a scientific one, in my opinion."?

Also, what do you mean by, '"raw" data'?

Is it your view that when we see an apple, we don't actually see the apple, we only see an image of the apple?

Concerning direct awareness of attributes of objects:

Well, you certainly are aware that a certain object looks like it has 4 edges and looks like a stone.

Is it therefore self-evident that there is in fact a stone with 4 edges in front of me? Certainly not; it might be a hallucination or an illusion.

The information I got about this object is real (something must have caused me to see what I see) and some of the information about it must be true ("To me, it looks like a stone").

Are the physical attributes of something that I see self-evident? No (illusions, hallucinations).

I think that you are confusing perception or direct awareness with identification, which is conceptual. Perception, direct awareness, is valid; perception results from an interaction between our senses and that which we are aware of. Only our conceptual level of awareness is fallible.

Concepts such as "illusions" and "hallucinations" are only possible if you first are capable of valid identifications, only if you can distinguish between illusions and hallucinations and valid identifications. That we can be in error does not mean that we have good reason to suspect that we are always in error.

Not really... or I'm not sure what he means by "evidence". Is a mathematical axiom considered to be "evidence"? I think the word does not fit there.

I don't think you need the criterion of evidence, because the laws of logic alone tell you that you need a starting point in the form of an axiom, and "evidence" is too narrow a term.

There is no "evidence" for a mathematical axiom flying out there in the universe, and yet once you have found a proof in math (without errors), it will be true forever (given the same axioms).

I have no idea what you're asking: "Is a mathematical axiom considered to be 'evidence'?" Evidence of what?

What I mean by evidence is data that supports a logical conclusion. Evidence is the data that makes it possible to prove a hypothesis. To prove that the suspect is the murderer, you need evidence to prove it, evidence and logic. Evidence in the case of a murder would be things like data that connects the suspect to the crime conclusively.

Existence: certainly.

If you agree that we are directly aware of existence, what do you mean if not that we are directly aware of objects and attributes of objects? Do you think that "existence" is an attribute that can be separated from objects such that you can have two apples, for example, each one fully an apple excepting that one has the attribute of "existence" and the other doesn't?

To be directly aware of existence is to be directly aware of objects and their attributes.

Well, the scientists working on these projects don't think so. The consensus has been for a while that they should work "from the ground up", and make simple systems that do simple jobs, and integrate them into a structure. That seems plausible in that it's how we're built too. We'll see.

The "integrate them into a structure" part of your position is the sticking point, and far more complicated than your conjunction implies. We may very well advance to the stage of a single hyper-alert machine able to tell us all the qualia of its surroundings, but that no more represents consciousness than several different machines each reporting its respective measurements. A camera does a good job of capturing light, and we could hook one up to a computer which could cross-match the shapes that captured light represents, but that doesn't mean we've created a consciousness that sees.

If you choose to not act towards a goal, or act against it, you have simply replaced one goal with another you value more (even if it be death). How is that essentially different from this (notional type of) AI re-programming itself and updating its goals? How is an AI's goal-setting any different from a human's? How is an AI setting up a "life plan" to achieve that goal any different from a human doing the same thing?

No, I haven't necessarily replaced one goal with another if I choose to not act toward any particular goal - I've merely chosen not to act toward a particular goal. Nothing need replace it. For example, I can sit on my couch, eat Bon-bons and watch Leave it to Beaver re-runs all while having the goal of becoming rich and famous. Chances are my goal won't be realized, but that's a result of my volition - my choice to not act toward that goal. People do it all the time, and if you need evidence just go to an Obama rally.

Contrast that with a programmed goal setter, which is, first and foremost, programmed to follow its programming. Regardless of whether it changes its goals, it can never change the fact that it is programmed to seek those goals and must follow its programming. There is no volition. It simply can't choose to watch the Beav's antics if its programming requires it to seek wealth and fame, regardless of how many different iterations of goals it has gone, or can possibly go, through. It might re-program itself to watch, but it is still following its programming.
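The point can be put in code terms: even an agent that rewrites its own goal is still executing a fixed outer rule for when and how to rewrite it. A toy sketch, with every name and rule invented for illustration, not taken from any real system:

```python
def run(agent_state, steps):
    # The outer loop never changes: the agent's "self-reprogramming"
    # is itself just one more step dictated by the program.
    history = []
    for _ in range(steps):
        if agent_state["frustration"] > 3:
            agent_state["goal"] = "watch reruns"   # "re-programs" itself...
        history.append(agent_state["goal"])        # ...but only because this
        agent_state["frustration"] += 1            # fixed rule said to.
    return history

print(run({"goal": "get rich", "frustration": 0}, 6))
```

The goal changes mid-run, yet every change was dictated by the unchanging outer rule; that is the sense in which such a system is "still following its programming".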

The "integrate them into a structure" part of your position is the sticking point, and far more complicated than your conjunction implies. We may very well advance to the stage of a single hyper-alert machine able to tell us all the qualia of its surroundings, but that no more represents consciousness than several different machines each reporting its respective measurements. A camera does a good job of capturing light, and we could hook one up to a computer which could cross-match the shapes that captured light represents, but that doesn't mean we've created a consciousness that sees.

Again, if it identifies correctly what it sees, how is that any different from us correctly identifying what we see? Where is the essential difference? It is aware of its environment, it knows what's in its environment.

What's different is the vast hierarchy of nested goals and sub-goals that we have. But there is no reason in principle why an AI can't have such complexity.

No, I haven't necessarily replaced one goal with another if I choose to not act toward any particular goal - I've merely chosen not to act toward a particular goal. Nothing need replace it. For example, I can sit on my couch, eat Bon-bons and watch Leave it to Beaver re-runs all while having the goal of becoming rich and famous. Chances are my goal won't be realized, but that's a result of my volition - my choice to not act toward that goal. People do it all the time, and if you need evidence just go to an Obama rally.

Wouldn't Objectivism say that choosing that sort of passivity is implicitly (perhaps sub-consciously) choosing death?

Contrast that with a programmed goal setter, which is, first and foremost, programmed to follow its programming. Regardless of whether it changes its goals, it can never change the fact that it is programmed to seek those goals and must follow its programming. There is no volition. It simply can't choose to watch the Beav's antics if its programming requires it to seek wealth and fame, regardless of how many different iterations of goals it has gone, or can possibly go, through. It might re-program itself to watch, but it is still following its programming.

Hmm, see to me this looks like the beginning of a "saving appearances" type of argument (no offence, just a way of encapsulating quickly what I mean).

Yes, self-evident equals direct perception, direct awareness of entities or objects.

See the entries in the Lexicon on Perception and Sensations. I think the brief paragraphs should help to clarify the distinction between sensations and perceptions.

Sensations are not perceptions, and perceptions are not conceptions. To understand Objectivism, it's important to grasp the distinctions between these forms of awareness: sensations, percepts and concepts. (I think it's important to grasp the distinctions even if you're not looking to understand Objectivism. It's generally important to understand the meaning of the concepts one uses.)

Perception is not conception. A child can see and play with a red rubber ball without having the conceptual knowledge that it's a "red," "rubber," "ball." The red rubber ball exists, and the child can perceive it as an object, without having the concept of "perceive" or "object," etc. And the child can distinguish the red rubber ball from other things. He can do all of this via direct perception or direct awareness.

Thank you for the links.

So sensations are by definition only "used" by entities without awareness (like insects), while perceptions require an awareness and integration of the data in the brain?

Ok, I can work with that.

What do you mean: "The example of the child is a scientific one, in my opinion."?

Also, what do you mean by, '"raw" data'?

Is it your view that when we see an apple, we don't actually see the apple, we only see an image of the apple?

Well, if we are talking about the same apple, then the source of the information is obviously the same for an adult and for a child. If you mean that by "given," then yes.

It is obvious, though, that we only perceive a fraction of the possible information about an object (only a small spectrum of visible light, for example). Now even if, as you said, the eyes work the same in the adult and the child, it does not follow that the perceptions must be the same.

This information that the senses are sending to the brain is what I meant by "raw data". This "raw data" must be the same for the adult and the child, but this "raw data" does not create perceptions. The senses do not produce the image of the apple in our minds; they are just a part of that process.

The raw data must be interpreted by the brain. If you damage the brain region that deals with interpreting the data from the eyes, then you see nothing.

Now your question was whether knowledge can change the perception of the apple. Well, maybe the perception of the apple does not change when you know how it is grown or what its biochemical makeup is.

I think, though, that we can consciously influence the brain regions that are used for interpreting the "raw" data from the senses and therefore change our perceptions.

A blind boy actually managed to "develop" a kind of echolocation system, and I think it is unlikely that he just happens to have exceptionally good ears; it is much more likely that his brain adapted after years of training.

Another interesting example: If you put on glasses that let you see everything upside-down and wear them for a certain amount of time, your brain adapts and you see things normally again. If you take the glasses off then, you see everything upside-down again until your brain readapts.

So even if the child and the adult have the same eyes, ears, and sensory cells for touch, it does not follow that they perceive the apple the same way.

I think that you are confusing perception, or direct awareness, with identification, which is conceptual. Perception, direct awareness, is valid; perception results from an interaction between our senses and that which we are aware of. Only our conceptual level of awareness is fallible.

Concepts such as "illusions" and "hallucinations" are only possible if you first are capable of valid identifications, only if you can distinguish between illusions and hallucinations and valid identifications. That we can be in error does not mean that we have good reason to suspect that we are always in error.

Didn't I say the same thing, just without objectivist terms?

I don't know what you are trying to say with your second-to-last sentence. Sure, that is true for the concepts of illusion and hallucination, but you don't have to understand the concept of an illusion to experience one.

I have no idea what you're asking, "Is a mathematical axiom considered to be "evidence"?" Evidence of what?

What I mean by evidence is data that supports a logical conclusion. Evidence is the data that makes it possible to prove a hypothesis. To prove that the suspect is the murderer, you need evidence to prove it, evidence and logic. Evidence in the case of a murder would be things like data that connects the suspect to the crime conclusively.

I think a mathematical proof is in fact a proof in the full sense of the word. Peikoff said that you need evidence to conduct a proof, and I don't think that term works well in mathematics.

The proof of Fermat's Last Theorem only required the logical methodology of mathematics and the mathematical axioms. That's why I think Peikoff's definition is too narrow.

Existence: certainly.

If you agree that we are directly aware of existence, what do you mean if not that we are directly aware of objects and attributes of objects? Do you think that "existence" is an attribute that can be separated from objects, such that you could have two apples, for example, each one fully an apple except that one has the attribute of "existence" and the other doesn't?

To be directly aware of existence is to be directly aware of objects and their attributes.

Well, as I said: we are directly aware of _certain_ attributes of objects, like:

a) It causes me to think that it is a stone

b) It causes me to smell the scent of a flower

Is it therefore self-evident that it is a flower or a stone? No.

Existence is always self-evident because no matter what I perceive, I perceive something.

And by the way, could you please tell me where we are going in terms of discussing my questions about free will?

Also I did listen to those Peikoff podcasts, but they didn't address my issues at all.

Again, if it identifies correctly what it sees, how is that any different from us correctly identifying what we see? Where is the essential difference? It is aware of its environment, it knows what's in its environment.

It wouldn't know what's in its environment. It would only analyze, and report, what its sensors measure. If I give a baby a picture of a dog, and a picture of a cat, then parade a dog past him, the baby will probably indicate the picture of the dog (point to it, pick it up, etc.). Is this "knowing?" I can scan a picture of a person into a computer, the computer can then run through its entire database of pictures and find one that matches. Is this "knowing?" Is it consciousness?

The problem is in bringing all that data together into a discrete unit. The robot may use light sensors to capture light reflecting off an object, and check that light against its database to find a match. If it could speak it might say, "Looks like a dog carpet table wall window." Other sensors might capture a sampling of the various chemicals in the air. If it could speak it might say, "Smells like oxygen, nitrogen, methane, carbon dioxide, Freon, ..." and continue listing chemicals, probably thousands more. Without going through the other senses, you can already see the difficulty. How is the robot to tie all these different measurements together into a single entity? That is where the difficulty lies. For the robot, all the data it captures is separate and distinct - the data collected through its light-capturing device has no relationship to the data collected through its chemical-capturing device. Currently, robots are unable to pick out the data which is related - they are unable to separate "dog" from "carpet table wall window," and then put that together with "carbon dioxide" and whatever other chemical might indicate a dog, plus whatever sounds might indicate a dog, plus whatever tactile data might indicate a dog.

Our senses do basically the same thing. They capture a lot of data, and they do so constantly. Somehow we're able to integrate these literally millions of different data points into discrete units. We're able to make the connection between all the different data which would indicate a dog, and ignore all the different data which does not. Robotics researchers are finding this difficult to replicate.
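The binding problem described in the last two paragraphs can be made concrete with a deliberately naive sketch (all sensor readings below are made-up illustrative values, not output from any real robot):

```python
# Deliberately naive sketch: each sensor reports its own flat list.
# Nothing in the raw data ties the dog-related readings together into
# one entity; every cross-sensor pairing looks equally "supported".

visual_matches = ["dog", "carpet", "table", "wall", "window"]
sounds = ["bark", "hum", "traffic"]

# Without an integrating principle, the robot can only enumerate
# combinations -- it has no basis for binding "dog" with "bark".
pairings = [(v, s) for v in visual_matches for s in sounds]
print(len(pairings))  # 15 equally plausible visual/sound pairings
```

The point is not that integration is impossible in principle, only that nothing in the separate data streams themselves supplies the binding.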

What's different is the vast hierarchy of nested goals and sub-goals that we have. But there is no reason in principle why an AI can't have such complexity.

I'm not sure why you put this with the previous quote, since we weren't talking about goals then. However, it made me think of another problem with such a robot: it hasn't gone through any type of concept formation; all its concepts have simply been given to it. Apart from the problem of integrating all the different data points into a single discrete unit, you also have the problem of Conceptual Common Denominators. Even if we assume the robot could discriminate between an object and its surroundings, it's still merely taking measurements, but oftentimes measurements are ignored in the identification of an entity. For example, the data available to the robot may be the following: temperature of object - 38C, height - 3ft, color - brown, length - 7ft, etc. Now, what has it described? Those measurements could be the measurements of a very large dog, or a very small horse. What makes it a dog?

In order to get past that, the robot would have to form its own concepts - it would have to learn through experience what makes a dog a dog, and what makes a horse a horse. It needs a hierarchy of concepts. It needs to have established Conceptual Common Denominators for each concept, which only comes through concept formation. The paradox lies in the fact that it needs to ignore measurements when measurement is all it can do.
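The dog-versus-horse ambiguity above can also be sketched directly (every number and range here is hypothetical, chosen only to make the overlap visible):

```python
# Hypothetical observation: a very large dog -- or a very small horse?
observation = {"height_ft": 3.0, "length_ft": 7.0, "temp_c": 38.0}

# Purely quantitative "concepts": bare measurement ranges, with no
# conceptual hierarchy and no omitted measurements (values made up).
concepts = {
    "dog":   {"height_ft": (0.5, 3.5), "length_ft": (1.0, 7.5), "temp_c": (37.5, 39.2)},
    "horse": {"height_ft": (2.8, 6.0), "length_ft": (6.0, 10.0), "temp_c": (37.0, 38.5)},
}

def matches(obs, ranges):
    """True if every measurement falls inside the concept's range."""
    return all(lo <= obs[k] <= hi for k, (lo, hi) in ranges.items())

candidates = [name for name, ranges in concepts.items() if matches(observation, ranges)]
print(candidates)  # ['dog', 'horse'] -- the measurements alone can't decide
```

A matcher that can only compare numbers is stuck whenever the measured ranges overlap; something beyond measurement-matching has to do the classifying.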

Wouldn't Objectivism say that choosing that sort of passivity is implicitly (perhaps sub-consciously) choosing death?

I don't know. You would have to ask one of the Objectivism scholars in here. What I do know is that it is entirely possible for a conscious human to do it. Volition, if nowhere else, is indicated here by our ability to choose to work against what we purport to want, and even need. A robot is incapable of doing that.

Hmm, see, to me this looks like the beginning of a "saving appearances" type of argument (no offence, just a quick way of encapsulating what I mean).

Well, I'm not sure what you mean, but you haven't answered the point. A robot can have no volition if its actions are dependent upon programming.

And by the way, could you please tell me where we are going in terms of discussing my questions about free will?

A few messages back, you said, "I don't doubt that there is no such thing, that is self-evident. I doubt that free will is self-evident."

I asked you for clarification because of the double negatives -- "don't doubt that there is no such thing". (When I removed the double negatives, trying to understand your statement, you had said that you do doubt that there is such a thing as self-evident.)

You clarified by saying: "I think there are self-evident concepts or entities, like existence and consciousness. They can't be refuted without using them, they form the base of our thinking.

They are axioms and therefore can not be proven. Free will is not self-evident or axiomatic in that sense."

But even that response was confusing to me, so I have been trying to make sure that you and I have an agreement on what is and what is not self-evident.

If we don't have such an agreement, I don't see how it's possible to answer, to our mutual satisfaction, your "doubt that free will is self-evident."

Make sense?

There are two issues involved in your doubt about free will:

1. what is free will?

2. what is self-evident?

Objectivism holds that free will (or "volition" or "choice") is self-evident, that it's directly evident to you that you have the power of choice. If you and I do not have a common, satisfactory understanding of what "self-evident" means, then we're not going to be able to address your doubt as to the self-evidence of free will. I'd be saying that free will is self-evident, and you'd be saying that it's not, all because you and I do not agree on what "self-evident" means. (At least that could be part of the source of our disagreement.)

I'm not a professional philosopher. I do not teach philosophy. But I have a long-standing interest in philosophy, specifically Miss Rand's philosophy of Objectivism. Perhaps I have enough understanding of her philosophy to help clarify your doubt about free will; perhaps I don't. I'm willing to try, but I have to do so in an unpracticed manner, figuring it out as I go. I don't have a lot of philosophical discussions with others, nor do I really care to -- they're very time consuming. So there's no script; I'll do what I can and I am willing to do, and you'll either find my efforts helpful or not. We will just have to see how it goes.

If you and I were in agreement with respect to the self-evident, and both assured that we can be confident of that which is self-evident, and we both then saw that free will is self-evident, that would settle the question.

As it stands, I still do not think that we have a common understanding of self-evident, and so, again, it seems that it would only be more confusing to bring free will into the discussion at this point.

Otherwise, you tell me, what would it take for you to be convinced that you do in fact have free will and that it is a self-evident fact?

Edited by Trebor


I think this sounded too harsh. I just wanted to make sure that we don't totally drift off :lol:

I don't think that self-evident is the core of my disagreement. I'm not only doubting that free will is self-evident, I doubt that one can make _any_ reasonable claim that it must exist. I think it is unprovable.

As I said in my post that started our conversation:

I don't know what free will is supposed to be on the most fundamental level, in terms of causality, of cause and effect.

I know that free will stands for the ability to make different choices when faced with the exact same problem and that this ability is not an instance of randomness.

Here is where my misunderstanding starts. What can causality be, when it is neither determinism nor chance?

I think any third option is unthinkable and is at best a mixture of determinism and chance, at worst a form of mysticism.

Or tell me:

What do you perceive that leads you to the conclusion that you could have made a different choice?

What is free will, when it is not random and not determined?

What do you perceive that leads you to the conclusion that you could have made a different choice?

What is free will, when it is not random and not determined?

I think you have a false standard, since you keep going back to whether you could have made a different choice had everything been exactly the same. Since that cannot happen, it is not a proper standard. The proper standard is observing yourself making choices and directing your own consciousness.

As to what it is if it is not determined and not random: it is causal. You have certain capabilities because of what you are. You are a human being and have the ability to direct your own consciousness -- and to choose to think consistently with reality, or against it, or not at all.

On a philosophical level, that's all there is to it. The details about what components (perhaps of our brain) make these abilities possible are for science to figure out.

I think this sounded too harsh. I just wanted to make sure that we don't totally drift off :lol:

I'm unsure of what you mean by "this" in "this sounded too harsh." Were you referring to what you said to me, or to what I said to you?

I did not find you to be harsh, nor did I mean to be harsh in reply.

I read your reply, but at the end you basically wanted to know where this was all going, which is understandable. So I responded to you with the realization that your concluding question was most important.

I don't think that self-evident is the core of my disagreement. I'm not only doubting that free will is self-evident, I doubt that one can make _any_ reasonable claim that it must exist. I think it is unprovable.

As I said in my post that started our conversation:

I don't know what free will is supposed to be on the most fundamental level, in terms of causality, of cause and effect.

I know that free will stands for the ability to make different choices when faced with the exact same problem and that this ability is not an instance of randomness.

Here is where my misunderstanding starts. What can causality be, when it is neither determinism nor chance?

I think any third option is unthinkable and is at best a mixture of determinism and chance, at worst a form of mysticism.

I have a question for you. You say that you do not reject the self-evident (now you say that you don't think that the self-evident is the core of your disagreement), and you've said that you accept "existence" as self-evident.

Why? How do you know that there is existence? How do you prove there is existence? How do you know that existence is not an illusion?

You say that you have a disagreement, but that self-evident is not the core of your disagreement, and you say that you doubt that free will is self-evident, and you doubt that there's a reasonable claim that free will must exist.

You're certain that you have a disagreement, yes? And you're certain that you doubt that free will is self-evident and that there's a reasonable claim that free will must exist?

How do you know that you're certain?

Is it self-evident?

Do you trust your own awareness of your disagreement and your doubts?

Or tell me:

What do you perceive that leads you to the conclusion that you could have made a different choice?

What is free will, when it is not random and not determined?

I'm directly aware of my making choices, just as, presumably, you are directly aware that you have an illusion of making choices.

How do you determine what's an illusion? An illusion versus what?

I have a question for you. You say that you do not reject the self-evident (now you say that you don't think that the self-evident is the core of your disagreement), and you've said that you accept "existence" as self-evident.

Why? How do you know that there is existence? How do you prove there is existence? How do you know that existence is not an illusion?

Even if existence (or everything I perceive) is an illusion, the illusion exists. From the fact that I experience anything, it must follow that something exists and that I am conscious.

You say that you have a disagreement, but that self-evident is not the core of your disagreement, and you say that you doubt that free will is self-evident, and you doubt that there's a reasonable claim that free will must exist.

You're certain that you have a disagreement, yes? And you're certain that you doubt that free will is self-evident and that there's a reasonable claim that free will must exist?

How do you know that you're certain?

Is it self-evident?

Do you trust your own awareness of your disagreement and your doubts?

What is your point here?

The validity of an argument does not change when the speaker is not certain or has doubts.

In my current state of mind, with the information that is available to me right now: yes, I'm certain of my position. It is the solution my mind came up with concerning the problem.

I'm directly aware of my making choices, just as, presumably, you are directly aware that you have an illusion of making choices.

How do you determine what's an illusion? And illusion versus what?

Choice here means to pick between multiple options and in the context of free will it means that I can make different choices.

Let's say I stand at a shop and I'm unsure what flavor of ice-cream I should pick. Finally I choose chocolate.

How can you _know_ that you could have picked strawberry instead? You will never be in the exact same situation again, ever.

Yes, when I'm faced with a problem, it feels like I'm choosing between multiple options. Does it follow that I could have picked any of them? No. It does follow that I cannot predict myself. It does follow that our mind does not instantly come up with a solution to a given problem.

Illusion may be the wrong word here. We certainly have the feeling of free will. In that sense it is real, but the conclusions drawn from that fact are wrong.

Let's say a man is mentally ill and hears a voice inside his head that proclaims it is god. Is hearing the voice self-evident? Yes it is. Is it self-evident that the voice is in fact the voice of god? Of course not.

Is the feeling of free will self-evident? Yes, it is.

Is it self-evident that we in fact possess free will? No, it is not.

I think the description of what we observe via introspection is wrong. Free will is not a description of what we observe; it is a (false) conclusion.

I think, what we in fact observe is:

a) Our mind is constantly faced with problems

b) Our mind comes up with solutions for these problems (what chess move should I make next?)

c) We cannot predict ourselves (in the sense that during the process of solving the problem we do not yet know which solution we are settling on)

Let's say I stand at a shop and I'm unsure what flavor of ice-cream I should pick. Finally I choose chocolate.

How can you _know_ that you could have picked strawberry instead? You will never be in the exact same situation again, ever.

After all that time deciding between the two, what makes you think you could never have chosen strawberry to begin with? If you could only have chosen chocolate, then why did it take you so long to make up your mind? It seems like you're saying that your mind is this impersonal, unconscious force that guides your actions without any input from "you," whatever that would be apart from your mind.

What reason do you have, other than the possibility that our perception of choice could be mistaken, to say that it is, indeed, mistaken? Why couldn't you have picked strawberry instead, or chosen a double-scoop so you could have both? They were both there, after all; there's no physical reason why you couldn't have chosen either way. "Your mind" doesn't come up with things and spit out the answers to you; "you" come up with things, using your mind. Just as your arms don't lift things and then you figure out what to do with them - you lift things, using your arms.

This whole "free will might just be an illusion" thing makes me wonder - do you agree with the axiom of existence, but not of identity? That "something" surely exists, but that you have no capacity to figure out what it is?


I did not say that I will only choose chocolate in this situation. That is just as baseless as stating that I could have chosen a different flavor.

As I said, the reason why it took me some time to decide is that I (or my mind) can't instantly solve a problem. I need time to process the information I possess to come up with the answer to the question: "What flavor will satisfy my need for ice cream the most?"

I never intended to make a distinction between "my mind" and "I". They are the same thing.

I agree with the law of identity. I don't understand your point in the last two sentences. I think free will is unprovable and it is baseless to proclaim that it exists (or that we possess it). Furthermore, I think the concept of free will is unthinkable in terms of causality.

