Objectivism Online Forum

Questions on Peikoff's Objectivism: The Philosophy of Ayn Rand

A while ago I read Rand's The Fountainhead, and I was impressed enough by the character of Howard Roark to read more about the philosophy behind it, so I am now reading through Peikoff's Objectivism: The Philosophy of Ayn Rand. I'd like to use this thread to post questions as they come up, if that's alright.

My first question concerns the section of Chapter 7 subtitled "Life" as the Essential Root of "Value." First, a couple of quotes from the book:

"The conecpt of value presupposes an entity capable of acting to achieve a goal in the face of an alternative... Goal-directed behavior is possible only because an entity's action, its pursuit of a certain end, can make a difference to the outcome." (Peikoff, p. 208)

"Ayn Rand describes the alternative of life or death as fundamental... The alternative of existence or nonexistence is the precondition of all values. If an entity were not confronted by this alternative, it could not pursue goals, not of any kind." (Peikoff, p. 209)

I am having difficulty accepting the statement that "The alternative of existence or nonexistence is the precondition of all values." I am tempted to read on and put this issue aside for now, but I don't think I can do that since it seems that it will form the foundation of the rest of the chapter.

To illustrate the point that all values are conditioned on the alternative of existence or nonexistence, Peikoff offers Rand's example of an immortal robot; supposedly such a being would have no values since it does not have any alternative to existence. Even psychological values such as the acquisition of knowledge would be irrelevant since knowledge would not help it to achieve its ends, since it has no ends.

It seems to me that the assumption that the immortal robot has no ends is begging the question, i.e. by assuming the robot has no ends, the argument assumes that existence itself is the only thing worth valuing in order to prove that existence is the only thing worth valuing. Peikoff gives an example on p. 210 that because the robot is not required to be on time for anything, it would therefore not value a Rolex, but I think that's demonstrably untrue; nobody buys a Rolex so they will be on time for their meeting; a Seiko would do just fine for that. They buy the Rolex for their own pleasure, even though it doesn't enhance their ability to exist. We could go a step further and look to heroin use, which involves people valuing pleasure as a goal in itself even though it decreases their ability to exist. How can it be said of such an individual that he "[does] not exist in order to pursue values. [He] pursue[s] values in order to exist"? (Peikoff, p. 211)

I don't see why even an immortal robot, for whom non-existence is impossible, could not place value on things such as knowledge or wealth for their own sakes, or for the sake of some goal other than existence. Any insight you all have would be appreciated.

The immortal robot is an example, not an argument. It is an illustration of the point. Of course it assumes the position it sets out to embody. That is not a fallacy.

Your heroin user pursues the value of relief, a feeling of well-being. He pursues a counterfeit of successful living. He evades the knowledge that what he gains will be short-lived and will leave him worse off overall. It is an irrational choice, but it is a choice aimed at life-values.

Think of being unable to feel pleasure or pain, comfort, illness, hunger, boredom, and so on: unable to have sensory or bodily feelings at all, just knowledge-related ones. Would you act at all? Why? If you answer that you would act because you have the knowledge that you should do so, what, I would then ask, is the meaning of this "should"? If it comes from some authority, why do you heed them? If it is your own thought, I would ask: you believe you "should" so that what?

You will see that it is the issue of life and death that answers that "so that what?" You speak of the robot's valuing knowledge for its own sake. What is that? What is the sake of having knowledge? In order to act successfully. And why does one wish to act successfully? To survive or flourish. What, otherwise, is the "sake" of having knowledge? If it "pleases" the robot, you are back to pleasure and pain, the programming that man has by virtue of being an animal, which the robot is not and so cannot experience.

Choice implies a standard. If the alternative of living, feeling pleasure and satisfaction, versus dying, feeling pain and disease, etc. were not fundamental to your makeup, what standard could you craft, and why would you adhere to it if you did?

Only living things prefer, only they value.

Hope this helps.

Mindy

My take on it:

"I don't see why even an immortal robot, for whom non-existence is impossible, could not place value on things[...]"

What is a value? It is that which a living being acts to gain and/or keep.

What is life? It is a process of self-sustaining and self-generated action.

A living being is an entity which possesses the essential distinguishing attribute of life.

An immortal robot does not have that attribute and so what it acts towards cannot be values.

Also, the reason it couldn't enjoy anything is that a being so constructed would have nothing built into its physicality to enable it to experience anything as good, since that would require that there first of all be something good FOR IT out there in reality. But the robot already has everything; nothing can add to its existence in the way a value can add to a life, so how could it ever experience the sensation of such a thing?

My suggestion is that you just keep reading and, most importantly, keep defining the words you encounter in Objectivism, since that's very important to understanding it. There is one section in OPAR (somewhere in Chapter 4) which explains and "reduces" the concept of value; I think you will find that part helpful. Don't hesitate to read the chapters on epistemology (if you haven't already); they are the best part of the book, IMO.

I don't see why even an immortal robot, for whom non-existence is impossible, could not place value on things such as knowledge or wealth for their own sakes, or for the sake of some goal other than existence. Any insight you all have would be appreciated.

Well, your immortal robot is impossible, not its non-existence.

I think it would suffice to say that for a hypothetical machine, the basis for its values would be different from that of a living animal (i.e. you or me).

Peikoff gives an example on p. 210 that because the robot is not required to be on time for anything, it would therefore not value a Rolex, but I think that's demonstrably untrue; nobody buys a Rolex so they will be on time for their meeting; a Seiko would do just fine for that. They buy the Rolex for their own pleasure, even though it doesn't enhance their ability to exist.
You may be correct in saying that many or perhaps most people buy a Rolex for the sheer pleasure of owning a Rolex, but that is not a reflection of a rational approach to value: it inverts cause and effect. If I were to buy a Rolex, given how much those puppies cost, the pleasure I would derive would be in relation to its symbolic value as an indication of my success as a businessman. Actually, since I am not a businessman and I don't make the kind of salary that would allow me to buy such an expensive bauble, I cannot buy a Rolex, and the only way I could come into possession of one would be by accident (e.g. an untraceable wealthy man happens to drop one in the garden in front of my house). For me (realistically speaking), being in possession of a Rolex would not be a rational pleasure. The Rolex stands in no relation whatsoever to my accomplishing my goals, that is, succeeding at something, so it cannot be a source of pleasure.

The concept of "value" requires there to be a choice with the ultimate alternative. In the made-up case of the robot, I cannot imagine what an "ultimate alternative" would be: ex hypothesii, the robot can never cease existing. On a day-to-day basis, it might face the alternative of walking the dog versus killing the dog, but the choice cannot be evaluated in terms of an ultimate alternative. Without a true ultimate alternative, there is no standard for evaluating actions.

It seems to me that the assumption that the immortal robot has no ends is begging the question, i.e. by assuming the robot has no ends, the argument assumes that existence itself is the only thing worth valuing in order to prove that existence is the only thing worth valuing.

Well, using an imaginary robot to prove that would be a fallacy in itself; no need to go as far as analyzing the "argument" in any more depth. But I agree with Mindy: the example isn't intended to prove anything. That leaves us to decide whether it is a good example or not, after we accept the premise of the chapter.

As for that premise, it is explained before the robot is introduced. You have to understand how Ayn Rand defines the concept of value ("'Value' is that which one acts to gain and/or keep. The concept 'value' is not a primary; it presupposes an answer to the question: of value to whom and for what? It presupposes an entity capable of acting to achieve a goal in the face of an alternative. Where no alternative exists, no goals and no values are possible."), and accept (or contest, if you see fit) that "There is only one fundamental alternative in the universe: existence or nonexistence—and it pertains to a single class of entities: to living organisms." (Both quotes are from Rand's The Virtue of Selfishness, because I don't have access to Peikoff's book right now.)

From there, it follows that only living, mortal beings can hold values. In the case of an immortal robot, the concept (as Rand defines it objectively, meaning "built on reality") would be a floating abstraction, since there is no fundamental alternative upon which he could construct additional alternatives for himself. Any alternative he may think of can be defeated simply by asking him "Why?" over and over again ("Why do you need a Rolex?" "Because it tells time and makes me look good." "Why do you need that?", and so on), until he runs out of meaningful answers. And he does run out, unless you can identify a fundamental alternative that he faces, one that is part of reality rather than his subjective imagination.

Obviously, the robot (just like men, but unlike other animals and plants) can have randomly chosen, subjective goals, and attach "values" to those goals, but those values would not be built on reality, but rather on consciousness as an entity separate from reality. Of course, the imaginary robot is separate from reality to begin with, so at this point the metaphor does indeed reach its limitations. But it served its purpose.

A very good and intelligent question.

You can think of it like this: for something to be a value to you, it has to make some difference to you whether you gain or keep it. Now, for whatever thing you consider, you can always ask this question: What difference does it make? Now, take your life. What difference does it make whether you live or die? If you realize that this difference makes "all the difference," that nothing would make any difference to you one way or another if you died, then you realize that it is because you, as a living being, face this basic alternative between life and death that things ultimately matter to you, one way or another.

Now, what about the immortal and indestructible robot? It is good that you recognize the robot is not the argument. The robot is an illustration of what happens when you remove the basic alternative between life and death.

Now, in fact, nothing could make any difference to the immortal and indestructible robot. Nothing. After all, what difference does it make whether it has a clock or not? Whether it eats something or not? Whether it reads the news or not? Study economics? Make a living or not? Whether it finds a sexy female robot or not? Whether it has a "good time" or not? None of it makes, in the end, any difference to this entity. It does however make a real difference to humans; it makes our lives better or worse.

The immortal robot has no ends, no values, no goals, because it has no survival needs, because literally nothing makes any difference for or against it. It has no physical needs and no psychological needs.

Now, what about the fact that you can imagine the robot wanting all sorts of things? That means nothing. Imagination is not reasoning. A fantasy is not an argument. So it does not matter what you can imagine. You can imagine whatever you want and it would not mean anything or prove anything.

The fact that you can imagine something does not make it logically possible. I can, for instance, imagine that I can fly if I flap my arms, yet in reality that is logically impossible. Imagination does not change the facts. The facts are that it is only for living beings things matter, one way or another, because something is ultimately at stake for them, namely their lives.

What about the idea that the robot can value some things as "an end in itself," such as, allegedly, knowledge or wealth? Can the robot not value something even if it makes no difference to it either way? This idea implies intrinsicism. But intrinsicism is false from start to finish; intrinsic values are impossible and nonsensical. That is why they are impossible to prove; there are no facts in reality that give rise to any "intrinsic values." (How do you _know_ something is good if it is not good for anybody or for any purpose?) Values presuppose a valuer. Intrinsicism denies this. Yet to talk about values without valuers is to steal the concept "value." So the whole proposition is a contradiction.

For a longer and more thorough elaboration on all this, and more, I refer you to Don Watkins:

http://forum.ObjectivismOnline.com/index.php?showtopic=8254&view=findpost&p=95797

One more suggestion to get over your troubles with the robot: stop anthropomorphizing it. I think the reason some people have a hard time accepting that the robot, or anything similar to it, such as a "stone with arms and legs," cannot value anything, is that they automatically assume the thing is like a human. It is not.
