Objectivism Online Forum

Roderick Fitts

Reputation Activity

  1. Like
    Roderick Fitts got a reaction from jacassidy2 in Reblogged: Objections to the Axioms (Part 2)   
    This next objection is about the utility of the axioms.  
    Objection: “Axioms Must Have Deductive Implications”
    Continue...

    Link to Original
  2. Like
    Roderick Fitts got a reaction from jacassidy2 in Reblogged: Objections to the Axioms (Part 1)   
    <div dir="ltr" style="text-align: left;" trbidi="on"><!--[if gte mso 9]><xml> <o:OfficeDocumentSettings> <o:AllowPNG/> </o:OfficeDocumentSettings></xml><![endif]--><br><!--[if gte mso 9]><xml> <w:WordDocument> <w:View>Normal</w:View> <w:Zoom>0</w:Zoom> <w:TrackMoves/> <w:TrackFormatting/> <w:PunctuationKerning/> <w:ValidateAgainstSchemas/> <w:SaveIfXMLInvalid>false</w:SaveIfXMLInvalid> <w:IgnoreMixedContent>false</w:IgnoreMixedContent> <w:AlwaysShowPlaceholderText>false</w:AlwaysShowPlaceholderText> <w:DoNotPromoteQF/> <w:LidThemeOther>EN-US</w:LidThemeOther> <w:LidThemeAsian>JA</w:LidThemeAsian> <w:LidThemeComplexScript>TH</w:LidThemeComplexScript> <w:Compatibility> <w:BreakWrappedTables/> <w:SnapToGridInCell/> <w:WrapTextWithPunct/> <w:UseAsianBreakRules/> <w:DontGrowAutofit/> <w:SplitPgBreakAndParaMark/> <w:EnableOpenTypeKerning/> <w:DontFlipMirrorIndents/> <w:OverrideTableStyleHps/> <w:UseFELayout/> </w:Compatibility> <m:mathPr> <m:mathFont m:val="Cambria Math"/> <m:brkBin m:val="before"/> <m:brkBinSub m:val="--"/> <m:smallFrac m:val="off"/> <m:dispDef/> <m:lMargin m:val="0"/> <m:rMargin m:val="0"/> <m:defJc m:val="centerGroup"/> <m:wrapIndent m:val="1440"/> <m:intLim m:val="subSup"/> <m:naryLim m:val="undOvr"/> </m:mathPr></w:WordDocument></xml><![endif]-->The axioms lay the proper foundation for a philosophy.<span style="mso-spacerun: yes;">  </span>But for any statement or expression, there is almost always someone who disagrees.<span style="mso-spacerun: yes;">  </span>Axioms are of no exceptions.<span style="mso-spacerun: yes;">  </span>Of the people who are dismissive of Objectivism, I believe many are especially opposed to the Objectivist axioms. <br><div class="MsoNormal"><br></div><div class="MsoNormal">Since I covered the metaphysical axioms of Objectivism in this series of posts, I’ll take the time to answer a series of actual objections to the axioms of the philosophy, and one objection to the idea of axioms as unprovable, originally answered by Aristotle.</div><div class="MsoNormal"><br></div></div><a href="http://inductivequest.blogspot.com/2015/07/objections-to-axioms-part-1.html#more">Continue...</a>

    Link to Original
  3. Like
    Roderick Fitts got a reaction from ttime in Induction and Reduction of “Values as Objective”   
    The point of this essay is to induce and reduce the principle that “values are objective,” and we’re going to use Ayn Rand’s own life to reach this, since it was her identifications that led to the objective theory of values in the first place.

    Here are two deductive (but not rationalistic) approaches to demonstrating that values are objective:

    (1) Value requires a valuer […] [Moral evaluation] is possible only if man chooses to pursue a certain goal, which then serves as his standard of value. The good, accordingly, is not good in itself. Objects and actions are good to man and for the sake of reaching a specific goal.

    But if values are not intrinsic attributes, neither are they arbitrary decrees. The realm of facts is what creates the need to choose a certain goal. This need arises because man lives in reality, because […] the requirements of his survival, which he does not know or obey automatically, are set by reality (including his own nature). [Man’s evaluations] do not have their source in anyone’s baseless feelings; they are discovered by a process of rational cognition
    [...]
    Moral value does not pertain to reality alone or to consciousness alone. […] The good, accordingly, is neither intrinsic nor subjective, but objective.
    […]
    [T]he good is an aspect of reality in relation to man. That is: the good designates facts—the requirements of survival—as identified conceptually, and then evaluated by human consciousness in accordance with a rational standard of value (life).”
    [Peikoff, “Objectivism: the Philosophy of Ayn Rand,” pp. 241-43.]

    (2) The intrinsic theory holds that the good resides in some sort of reality, independent of man’s consciousness; the subjectivist theory holds that the good resides in man’s consciousness, independent of reality.

    The objective theory holds that the good is neither an attribute of “things in themselves” nor of man’s emotional states, but an evaluation of the facts of reality by man’s consciousness according to a rational standard of value. (Rational, in this context, means: derived from the facts of reality and validated by a process of reason.) The objective theory holds that the good is an aspect of reality in relation to man—and that it must be discovered, not invented, by man. Fundamental to an objective theory of values is the question: Of value to whom and for what? An objective theory does not permit context-dropping or “concept-stealing”; it does not permit the separation of “value” from “purpose,” of the good from beneficiaries, and of man’s actions from reason.

    What we want to answer is: how did Ayn Rand reach the theory of objective values from her own experiences in reality?

    We should quickly realize that it wasn’t as if Rand had given no thought to what values are before reaching her theories of concepts and of objectivity. If that were the case, not only would she not have lived very long, but she wouldn’t have had any values to apply her theory to. What must have happened is that she had reached an understanding of value’s status, source, and validity well before forming her theory of concept-formation or of objectivity, and these later theories allowed her to reach her final theory of values and other identifications (like the connections between objective values, capitalism, and force).

    Rand had many explicit values, and she formed an idea of values at an early age. Among them were the fictional hero Cyrus Paltons from Maurice Champagne’s The Mysterious Valley, the works of Victor Hugo, the skyscrapers she saw, American movies, and tiddlywink music. She also had intense disvalues: the small talk of the Russians of her youth, communism and its effects on her life, folksy, average protagonists, etc. Having intense values is the precondition of any further advancement in regard to values. If you don’t become passionate about your values at an early age, then the ability and motivation to understand their role in your life will never arise, or will be very difficult to come by.

    So, what inductions did Rand have to make about values from considering her own values?

    By reducing the concepts “values” and “objectivity,” we can reach two inductions: The role of choice in values, and the role of reason and reality in values.

    Human Values Involve Choice and Reason

    Something that Rand knew as a kid was that her values were not automatic, and not self-evident. Many people did not have her values, and many did not agree with them. What Rand gleaned from this is that values are, in some way, personal to the one valuing: they aren’t thrust upon people by reality or by their particular situations; some view or decision or input from the person who values is needed.

    Another idea that Rand learned was that her values were not on the same level as those of others. She came to disagree with the prevalent idea that values are arbitrary, mere opinions, and that no one’s values are better than anyone else’s. She could give reasons for why she valued things, whereas she noted that other people who disagreed with her couldn’t provide any reasons for their values. Later on, she would induce that all ideas have to be reached by reason, and that this has a bearing on the role of values.

    In some sense, Rand grasped that values involve her choice and the functioning of her reason. She learned that values have some relationship to the facts and don’t pertain just to your wishes, and that you have to understand these facts with your mind in order for your choices to be rational. On the one hand, she learned that values involved her knowledge and her thoughts; on the other hand, those who disagreed with her would preach blind obedience to holy commandments or to some authority. She knew introspectively that if she didn’t understand the reasons for something, then she would openly oppose the view that it was a value to her just because some authority said so. She knew that her values had to be reasonable, and that is why she would choose them. So when arbitrary commands were issued to her as duties, like “don’t read so much, be more social, stop being so intellectual and intense,” she would despise them and disobey them. Her view of reason and values, combined with the non-value of other people’s commandments toward her, resulted in a generalization that she knew very well from her own experiences: “nothing is valuable until or unless it passes the test of my own reason.”

    From an early age, Rand knew that both choice and reason-recognizing-facts are properly involved in values.

    “God Said: ‘Take What You Want and Pay for It.’”

    With the knowledge that values involve both human choice and reason grasping facts, she could successfully deal with opposing views in philosophy that she would encounter in high school and in college. The “duty” school of the Kantians and Christians was very similar to the people in her neighborhood who would tell her to do something simply because she “has” to, because it’s her “duty” as a girl or a child, etc. As against the school of subjectivist-skeptics, whose view was that nothing is certain, so anything goes or is equal to anything else, she believed that some values were better than others: some values are based on reason and facts, and some are not. Values for her were not an issue of “do whatever you want,” and not an issue of uncritical obedience to someone’s edicts or commands.

    Eventually, the question arose: “how do I reconcile these two, values involving choice and values involving reality?” The history of philosophy basically split on this question, taking one side or the other. If values are based on choices, then values are subjective; it’s essentially up to you to decide them, and reality has no say in the matter. If values are based on reality, then value is like the law of gravity: you have no choice in the matter, you just have to obediently accept the values that reality hands down; that’s the intrinsic school. She learned something about both choice and reality that allowed her to combine the two without getting trapped in one side or the other.

    What she learned that allowed her to advance on this issue was a form of causality, represented by her favorite Spanish proverb: “God said: Take what you want and pay for it.” In her interpretation, this means that you choose a goal, an object that you want (the role of choice here), and reality sets the course required to reach the goal and the consequences that result from achieving it. Reality sets the cause-and-effect, the means required and the consequences; one’s choice sets the ultimate purpose for acting. At this stage of her thinking, a choice was rational when you knew your reasons for making it, when you knew the means required to reach your goal, and when you knew and accepted the consequences that would result.

    (Later on, she would connect this thinking with Aristotle’s doctrine of final causation:

    In order to make the choices required to achieve his goals, a man needs the constant, automatized awareness of the principle which the anti-concept “duty” has all but obliterated in his mind: the principle of causality—specifically, of Aristotelian final causation (which, in fact, applies only to a conscious being), i.e., the process by which an end determines the means, i.e., the process of choosing a goal and taking the actions necessary to achieve it. (http://aynrandlexicon.com/lexicon/final_causation.html))

    In some sense, she knew that this didn’t completely answer the status of value, because it left open the question, “What is the status of the basic goal or decision or choice?”

    Could the Spanish proverb mean:

    “Choose whatever you want as the goal of your life, and then follow reality in reaching it?” If it did mean that, it would be a complete surrender to subjectivism. Joseph Stalin could say, “I pick destroying millions of people,” and his regime carried that out in reality; he took whatever steps were necessary to accomplish it. He also accepted the consequences: when socialist revolutionaries, conspirators, freedom fighters, etc. tried to kill him, he ordered thousands of his guards to protect him while living in his country house, and had food tasters at every meal to ensure that he didn’t die of some hidden poison in his food or drinks. So if the test of the proverb were “consistency with the goal you choose, no matter what it is,” then Stalin would have passed it. And that’s why the proverb alone can’t be the basis for values in reality: it places far too much emphasis on the “choice” component of values as a primary, and not on the fact that reason and reality should be guiding your choice, even in the case of a basic choice.

    “Life” Makes “Value” Possible and Necessary

    If we reduce “values are objective,” we’ll reach something that points back at that question I posited, about the status of the basic choice or goal.

    “Values are a means to an end.” This is something established by the reality of cause-and-effect, and by simply introspecting on one’s values. Rand probably reached this induction by thinking about what goals she wanted to accomplish by reading Victor Hugo’s works, or watching American movies, or listening to tiddlywink music, as they were means to an end of hers in one way or another.

    The problem, which she didn’t solve until her 40’s, was “what is the ultimate goal that will serve as a standard of value?” and further, “how do we relate it to reality?” Discovering the ultimate goal would allow us to tie all values to reality, and if it turns out that it is an issue that we have a choice about, then everything can be integrated together because it will be choice and reality together with values in some way.

    From there on out, she would be on a course to find an ultimate goal and standard of value that we have a choice to adopt or not, but was required by reality in that it still had to be discovered. The results of her search can be read in Galt’s speech and in “The Objectivist Ethics.” Through a series of identifications, she realized that living things have needs, and they have goals that require actions on their part to satisfy these needs—living things are goal-directed, and face an alternative of life or death, existence or non-existence. Inanimate objects don’t require anything to remain the way they are (they only need to be left alone), and nothing matters to them, even if they are reduced to ashes (or subatomic particles)—they have no needs, and so they merely react with no negative or positive consequences for them. At some point, she connected this train of thought to her earlier identification that “values are that which one acts to gain and/or keep.” Combining these points, she discovered that values are what living things act to gain and/or keep, ultimately to remain alive (through fulfilling whatever subordinate end the value exists for). Life, she reasoned, is a series of actions generated by the living thing itself, designed to sustain the thing’s existence, and the means of sustaining itself is successfully satisfying its needs through value-achievement.

    (Historically, she says that she didn’t fully understand how “value” depends on “life” until after “The Fountainhead,” so she likely reached her mature, philosophical argument for her ethical views while writing notes for “Atlas Shrugged.” For instance, she said that while writing “The Fountainhead,” she didn’t realize that even weeds have values. See “100 Voices: An Oral History of Ayn Rand,” p. 335)

    These identifications allowed her to change her understanding of the concepts of “life” and “value,” such that one would depend on the other. She notes that our automatic pleasure-pain mechanism indicates what the right or wrong action is in a given context, and that the standard being used is the conscious organism’s own life; she also states that our emotional mechanism has two basic emotions, joy and suffering, and that these operate under a standard of value that is chosen by us, determined by how we choose to live. Both pleasure and joy indicate that a value is being achieved, and both are indications that the organism in question is furthering its life (and the opposite for pain and suffering). From observations and reasoning such as these (and many more in addition), she came to the pivotal conclusion that both the reality and the very concept of “value” are made possible and necessary by the reality and concept of “life.”

    “Life” (that is, the existence of living things and the concept of “life”) makes “value” possible because values require the accomplishment of a goal in the face of an alternative, such that the action’s success or failure makes a difference to the thing that acts; inanimate objects have nothing at stake, and matter merely changes its form, never ceasing to exist—when living things exist, so do values. “Life” makes “value” necessary because living things do face an alternative of life or death, and can lose their lives if they fail to achieve their values—it is impossible for a life to continue without accomplishing the values that its nature requires.

    In one grand-scale integration, she came to three important conclusions about life which would allow her to solve her problem about what the “standard of value” could be: she determined that life is the ultimate end or goal, life is an end-in-itself, and life is the ultimate value. Life isn’t a means to anything else except continued living, continued existence, so fulfilling life only results in more life that requires action to sustain, and this is why it’s an “end-in-itself,” not a means to any other end. It’s the ultimate end or goal, because all of the other goals are means to the end of keeping the living thing within the realm of reality, to keep it alive, just like one sleeps, eats, or drinks to remain alive. And it is the ultimate value because it is for the sake of life that actions are taken to achieve other values. With all of these inductions clear to her, she reached her seminal induction: “life is the standard of value.”

    Life is the Standard of Value

    A standard is an abstract reference point or principle that we use to measure or gauge things in order to guide us in carrying out a specific purpose. By taking “life” as the standard of value, we can observe the effects of our purported values on life, whether positive or negative, and thus determine whether each is a genuine value or not. This is the way in which Rand held that life, the ultimate value and end-in-itself, could set the standard by which all lesser goals could be evaluated: whatever furthers the life of an organism is the good, and whatever threatens its life is the evil.
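    To make the role of a standard concrete, here is a minimal sketch in Python. It is only an illustration of the paragraph above: the names and the numeric “effect on life” scores are hypothetical devices of mine, not anything from Rand’s or Peikoff’s texts; the point is just that a standard is a reference point against which candidates are gauged.

        from dataclasses import dataclass

        @dataclass
        class CandidateValue:
            name: str
            effect_on_life: float  # stipulated input: > 0 furthers life, < 0 threatens it

        def evaluate(candidate: CandidateValue) -> str:
            # Gauge the purported value against the standard: its effect on life.
            if candidate.effect_on_life > 0:
                return "good: furthers the organism's life"
            if candidate.effect_on_life < 0:
                return "evil: threatens the organism's life"
            return "indifferent under this standard"

        for v in (CandidateValue("productive work", 1.0),
                  CandidateValue("evading facts", -1.0)):
            print(v.name, "->", evaluate(v))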

    Once she reached the principle that “life is the standard,” she began the process of analyzing all of her accepted values, showing that they were all reducible to it. Reason, virtue, production, sex, happiness, art, self-esteem, purpose, morality, individual rights, etc., were all examined under this new principle of hers, and the principle became central to the philosophy of Objectivism as a result. Not only could she tie all of her values to reality with this overarching principle (in addition to the specific reasons she already had for holding those things as values), but she could also integrate the principle with her view that human values involve choice: “life being the standard” was something that a person had to choose. If a person didn’t choose life, then Rand was now in a position to show him that the whole issue of what is good or bad for him became philosophically unintelligible without choosing life.

    A proper value, Rand now believed, is a goal chosen in accordance with reality, by comparison to an ultimate goal and standard of value: life, which is itself grounded in reality.

    Values are Objective

    She was ready to advance another stage higher than even all of these previous integrations once she fully developed her theory of concept-formation and her reformulation of objectivity. Once her knowledge of objectivity grew, she only needed to integrate the process of forming concepts with the process of forming values. Both concepts and human values involve the awareness of something in reality in addition to something contributed by human consciousness—in the case of concepts, the contribution is measurement-omission; in the case of values, it is the choice to live. Rand could then say that values are objective because they are formed by a definite method, not by some authority claiming that something in reality is an intrinsic value, and not by subjective, arbitrary feelings. This method involves two factors, just as in the case of concepts: existence and consciousness. Values are objective because the good is an aspect of reality in relation to human beings, just as our concepts are, and this means that logic can be used to evaluate what we claim to be our values, using the standard of life. Rand reached the theory of objective values in 1965-66, soon after realizing the significance of her expansion of the concept of objectivity; she reached this idea at around the age of 60, and it was a theory she had worked pretty much her whole life to formulate.
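    Since the argument leans on the analogy between measurement-omission in concepts and the two factors (existence and consciousness) in values, here is a toy Python sketch of measurement-omission. The data and function names are my own invented illustrations, not Rand’s formulation: the instances share attributes in some quantity, and the “concept” retains the shared attributes while omitting their particular measurements.

        tables = [
            {"kind": "table", "height_cm": 71.0, "top_area_m2": 1.2},
            {"kind": "table", "height_cm": 95.0, "top_area_m2": 0.4},
            {"kind": "table", "height_cm": 45.5, "top_area_m2": 2.0},
        ]

        def form_concept(instances):
            # Keep the characteristics every instance shares...
            shared = set(instances[0])
            for inst in instances[1:]:
                shared &= set(inst)
            # ...but record only THAT each exists in some quantity, not which quantity.
            return {attr: "some quantity (measurement omitted)"
                    for attr in sorted(shared) if attr != "kind"}

        print(form_concept(tables))
        # {'height_cm': 'some quantity (measurement omitted)', 'top_area_m2': 'some quantity (measurement omitted)'}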

  4. Like
    Roderick Fitts got a reaction from Maken in Induction of "The Arbitrary as Neither True Nor False"   
    The aim of this essay is to induce the Objectivist principle that arbitrary claims are neither true nor false, but are in a third class: non-cognitive. Ayn Rand said in regard to arbitrary assertions that, “it is as if nothing had been said, because nothing of cognitive value or validity has been said.”

    The outline of this essay consists of three inductions and two clarifications:

    (1) The arbitrary has no connection to evidence or anyone’s cognitive context.

    (2) The arbitrary is detached from thought and even the possibility of thought.

    (3) Arbitrary claims are neither true nor false.

    (4) How do we respond to arbitrary claims? And,

    (5) What do we do in the face of an arbitrary claim that has evidence in its favor?

    The Arbitrary is Disconnected from All Evidence and Cognitive Context

    How do we reach the idea of the arbitrary in the first place?

    People make many claims on a daily basis. What we need to keep in mind is an idea that anyone in our modern times would know: some ideas can be proved; they have a basis in fact. This is the context we need to understand certain things about arbitrary claims, this idea that some ideas are validated, proved, have a basis.

    Here are some examples contrasting “ideas with a basis” with arbitrary claims.

    1. “You are reading an essay right now.” The basis for believing this is perception.
    2. “There’s a gremlin in the room.” When no one sees it and asks what the basis for the claim is, the asserter says, “I simply believe it; he’s unknowable with your limited senses, and can’t be interacted with by your meager abilities, but I can perceive and deal with this gremlin. Prove that he isn’t there.” The asserter claims whatever he wants, and when a basis is asked for, he essentially replies, “because I say so.”

    1. “That man’s violent behavior is caused by abnormally high testosterone levels.” The basis? The results of medical diagnostics and tests, proof that the man is genetically predisposed to high testosterone levels, the fact that the man had no reasons for becoming angry and violent, reports by others that he is generally peaceful, and the scientific connection between high testosterone and aggressiveness in men.
    2. “This man’s violence is due to him being possessed by a vile demon.” What’s the basis? Well, there are reports in the Bible about demonic possession, it’s a field of study that people are currently researching, and there have been other reports of demonic possessions and exorcisms in the city that this man resides in.

    1. “I’m compatible with my wife.” What’s the basis? The person says, “Our careers are in the same field, we have many of the same hobbies, and we’re attracted to each other. I love her, and she loves me.”
    2. “I’m compatible with my husband.” The basis? “Well, we have compatible zodiac signs, and the descriptions of people who are born under those signs match us perfectly.”

    So, from examples like these, how do we reach a definition of the arbitrary? We have statements that have something to say, and statements that don’t really have anything to say. In other words: when does a series or progression of words become a basis for something else, and when isn’t this the case? The inductive question is: what is absent from every instance of the arbitrary, such that we can say they have “no basis”?

    There are no observations behind the assertion of the arbitrary. There’s no logical argument, because arguments for the arbitrary are fallacious in one way or another, whether circular reasoning (“I say so, because I say so”), a non sequitur, or based on no reasoning whatsoever. And there’s no integration with past knowledge or with the person’s cognitive context.

    We have a word that ties all of these observations together: “evidence,” specifically “probative evidence.” Probative evidence means: “an item of knowledge tending to establish or prove an idea.” When we say that a claim has “no basis,” we mean: no evidence. No perceptual evidence, because there are no observations. No conceptual evidence, because there are no logical processes of deduction or induction used. (We’re indebted to Aristotle for first remarking that these are the two means of reasoning.) The conclusion we have to reach is that there’s no relationship between the claim and any cognitive evidence, whether you consider what observations and facts are in its favor, what relevant past knowledge the person may possess, or what argument the person can adduce to support his claim.

    (This isn’t a linguistic issue: what matters is not what the asserter of the arbitrary claims, but the evidence for what he says, and whether it is available or not. The arbitrary is something that is detached from any rational, available evidence.)

    To say that a claim is arbitrary is to say that it transcends the current context and available evidence, dismissing them as irrelevant. It also transcends future evidence, the evidence a person would have to search for and that isn’t immediately in one’s face (as a police detective does). If the claim were related to our context, our knowledge, and our definitions for terms, then we could check the evidence and come to a logical decision about whether it is true or false; but that is not what the arbitrary permits.

    To understand this point about the arbitrary and evidence more clearly, we can integrate into the discussion what we’ve learned about objectivity from Aristotle and Ayn Rand. We found out that Aristotle taught the method of using observations and logic, and that these were necessary to reach the idea of a “proved statement,” which we needed to reach our idea of a baseless statement. We learned from Ayn Rand the importance of context and integration of all of our knowledge. So we have a wealth of complicated information pertaining to gaining valid knowledge that we can now apply to the arbitrary.

    The result is that we learn that the arbitrary goes against everything we know about gaining knowledge. No observation; no logic; no evidence; no context; no integration. It isn’t any form of cognition, but is anti-cognition. Arbitrary claims being anti-cognition is a deductive conclusion from what we know about the arbitrary and the nature of knowledge-acquisition.

    The Arbitrary is Detached from All Thought and the Possibility of Thought

    What is another thing that all arbitrary claims have in common?

    Is there anything that we can do cognitively with instances of the arbitrary? Can we reason one way or another about them? Can we prove or disprove them? Assign some degree of probability to their truth or falsehood? Can we even hypothesize any of them? When you examine the cases, you’ll realize that it is impossible to perform any form of cognition in regard to the arbitrary.

    Will you try to disprove Astrology? Or demonic possession? What about the alleged Zionist conspiracy that intends to install a Jewish New World Order? If you make the attempt, its advocates will say, “well, I’m not the only one who believes this; a large percentage of the world believes this, there have been many reports, etc.” You literally cannot refute something that has as one of its characteristics that it can’t be considered in relation to anything that you know.

    This means that you can’t prove the claim (like those of Astrology), either—it has no relation to the facts. You can’t reason about it—it has no relation to evidence, no observations or premises. (People may say things, but there’s no evidence backing them.) You can’t even hypothesize the arbitrary, because even hypotheses have at least some basis in evidence, some facts; the arbitrary, by its nature, has nothing in its favor whatsoever. It is literally impossible to think about such claims. A rational mind stops in its tracks when it comes to processing the arbitrary, because it can’t be done. The mind becomes functionally paralyzed if it attempts to process it; since the mind can’t move anywhere cognitively with the arbitrary, it will just sit there until it changes its attention to a reality-oriented subject.

    The Arbitrary is Neither True Nor False

    The first induction was that the arbitrary is detached from any evidence or cognitive context.

    The second induction was that all arbitrary claims are detached from thought and the very possibility of thought, that thought is impossible in regard to the arbitrary.

    The next step is to combine the two. If you reach only the second induction, then you won’t be able to reach the necessary conclusion that the arbitrary is anti-cognition. It’s improper to dismiss something just because you can’t think about it; it may be a highly abstract theory in a field of science that you haven’t studied at all, or a very complicated technique for fighting a war when you’re a novice as a military strategist. There are things that can be thought about, that aren’t arbitrary, but that will nonetheless paralyze the mind of someone who isn’t familiar with the subject, like advanced mathematics explained to a kindergartener. The advocate of the arbitrary could say the same thing: “you didn’t take classes in exorcism, or follow the literature on Zionism, or take Astrology classes, and that’s why you can’t think about these things.” It’s when you connect this second induction with the first (the arbitrary is inherently detached from evidence, so no one can think about an arbitrary subject, because it has no relation to human cognition) that you realize the significance of your mental paralysis in this case. With these two insights together, you’ll arrive at a very important epistemological fact, and a formal, deductive conclusion:

    A claim that is inherently detached from thought cannot have a relationship to reality. If it transcends the domain of cognition and our ability to think about it, we cannot legitimately claim that it has a basis in reality or that it opposes reality: we have no idea what relationship it has to reality. We can’t connect the arbitrary to any facts, whether in correspondence to reality or in contradiction to it. The same point applies to “possibility”: no one can figure out whether the arbitrary subject is possible in reality or impossible, because there is inherently no basis in evidence for any statements about it in relationship to reality; it is beyond human processing.

    Here is where the concepts of “true” and “false” come in. Everyone in the civilized world learns these concepts very early: the true is something that accords or corresponds with the facts, and the false is something that contradicts or goes against the facts. When we say that a statement “corresponds,” we merely mean that we recognize its relation to reality, not that the statement itself does anything. The method behind this designation of “true” and “false” is that you establish or point out some positive relationship between your claim and reality, and then call that “true”; the opposite happens in regard to the “false.” Both truth and falsehood involve some sort of relationship to reality, but the arbitrary does not.

    This is the reason why the arbitrary is neither “true” nor “false.” As far as human cognition goes, it is like a parrot making a memorized noise, or the utterances of someone suffering a mild stroke or someone who’s high as a kite. Noise has no cognitive status; it isn’t “true” or “false.” In a sense, then, when someone says that your claim is false, it is a compliment: they mean that it contradicts reality at one point, not that your claim is a complete break from reality and totally disconnected from it. By the same token, if someone says that your claim is arbitrary when it isn’t, it is a much greater insult.
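    The three-way division argued for here can be put schematically. A rough Python sketch, with the “evidence relation” reduced to a bare flag purely for illustration (my simplification, not a piece of Objectivist doctrine): a claim that stands in some relation to evidence can be judged true or false; a claim with no such relation gets neither verdict.

        from enum import Enum
        from typing import Optional

        class CognitiveStatus(Enum):
            TRUE = "corresponds with the facts"
            FALSE = "contradicts the facts"
            NON_COGNITIVE = "arbitrary: no relation to any evidence"

        def assess(has_evidence_relation: bool,
                   corresponds: Optional[bool]) -> CognitiveStatus:
            if not has_evidence_relation:
                return CognitiveStatus.NON_COGNITIVE  # not false, simply noise
            return CognitiveStatus.TRUE if corresponds else CognitiveStatus.FALSE

        print(assess(True, True))    # a proved claim
        print(assess(True, False))   # a refuted claim
        print(assess(False, None))   # an arbitrary claim: neither true nor false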

    What Should I do about Arbitrary Claims?

    We’ve reached the idea that the arbitrary can’t be processed by a human mind, so what should we do when confronted by a claim that we know to be arbitrary? We should not talk about it, and should dismiss it without any further thought.

    When it comes to arbitrary claims, like the belief in the afterlife, reincarnation, Gods, etc., you shouldn’t give in to despair and think: “I can’t unravel this claim, there’s too much complexity or information.” That could be legitimate in a difficult court case, or assessing competing scientific theories, where the evidence can become very complicated, but not with the arbitrary. In the case of the arbitrary, you have to decide and make a definite stand: since there’s no evidence either way, the claim is unthinkable, inadmissible, and can’t be discussed.

    Casting out the arbitrary from your mind has a counterpart when it comes to actions: you act, in regard to the arbitrary, as if nothing had come up, as if it weren’t there. If someone simply asserts with no evidence that your food is poisoned, or that your house is going to collapse, and you’ve taken all the normal precautions and don’t observe anything out of the ordinary yourself, you don’t analyze that person’s claim: you eat the food, or enter your house, etc. If there’s no available evidence, then “there is no poison,” or “my house won’t collapse,” must be the conclusion you reach. In a sense, it’s reality that makes that conclusion necessary, because there are conceivably billions of things that aren’t in your food presently, or billions of things that aren’t wrong with your house’s construction at the moment, and you can’t spend your days checking out every conceivable thing that could be put into your food or that could destroy your house. You check for poison only when there’s evidence of poison, and check for your house being on the verge of collapse only when there’s evidence of that: if there isn’t, then you ignore the person’s claim totally.

    How to Handle “Arbitrary Claims with Possible Evidence”

    Now, here’s a “hard case”:

    (This example is based on Dr. Peikoff’s example in the “Objectivism Through Induction” course, in which someone asserts that “Harry Binswanger gave a 3-hour seminar in his New York apartment on Hegel’s logic for his bachelor party, and it started at 4:00am EST with 50 Objectivists present to hear it.”)

    Someone, let’s call him “Tim,” says that your friend has been lying about his anti-Astrology beliefs for years—he advocates Astrology and takes classes on Astrology every Thursday afternoon at 5:00pm with 20 other students at a well-known Astrologer’s private residence.

    You ask: “What’s your basis for that?”

    Tim says: “Because I say so. That’s my view.”

    That statement from Tim is completely, absolutely arbitrary, but there seems to be abundant evidence available, enough to be probably decisive one way or another, if you take the time and effort to find it. You could ask your friend, his girlfriend, or ask all of the students and the Astrology teacher; you could track your friend on Thursday and see if he goes to the Astrologer’s house some time before 5:00pm, or check your friend’s house for any Astrology books or other signs of interest in Astrology.

    The question that must be asked is: what can we do about this?

    Can we pronounce it true? No, we haven’t checked it out, yet.

    Can we say that it is false? No, we haven’t assessed it, or tried to refute it. There seems to be abundant evidence available, so if it is false, we should have the means to refute it. Well, the question now becomes: should we expend the time and energy to find the evidence to decide one way or another?

    What’s the reasonable action in this case?

    All he said to back it up was “I say so,” but there appears to be plenty of evidence to decide one way or another. What should a rational person do? Should we examine an arbitrary claim if it is actually possible to do so?

    Dr. Peikoff said that his answer to this question changed. Historically, he always answered, “no, we shouldn’t examine such a claim.” Peikoff argued that the person making an arbitrary statement is holding a direct contradiction: by saying that “maybe X happened,” he’s hypothesizing the existence of something, implying that there is evidence for the existence of X, while offering no evidence in support of that judgment. Since there’s no justification for the claim, it is non-cognitive, and therefore there’s no justification at all for checking out the claim. Checking it out is possible, but it is irrational and immoral.

    But since then, his answer has changed in an important way. Since I thought it would be helpful to the readers of this essay, I’ll present my own version of his new, “Objectivism Through Induction”-inspired answer.

    The answer I would give to that question is: How did we even get into this kind of dilemma at all? We’ve already induced that the arbitrary is devoid of evidence and produces mental paralysis. Given that, how can there be an arbitrary claim that can be checked out by reference to perceptual and conceptual evidence? We’re in this predicament because we’re confronting a solid contradiction.

    There is no such thing as an arbitrary claim that has possible evidence. There is no way to check out if your friend believes in Astrology or attends Astrology classes; that claim of Tim’s can never be verified or refuted, except if you give up reason.

    The trap is set by slipping a contradiction into your mind: a concrete, particular fact is set up in your mind without reference to any principles. The scheme is arranged so that you effectively must become concrete-bound, which leaves the asserter of the arbitrary, like Tim, free to be completely unprincipled, because the unsuspecting rational person will be so busy studying the concretes. Here is a description of the “concrete-bound mentality”: “This is the man, who, as far as possible to a conceptual being, establishes no connections among his mental contents. To him, every issue is simply a new concrete, unrelated to what came before, to abstract principles, or to any context…” [Leonard Peikoff, “Objectivism: the Philosophy of Ayn Rand,” p. 127.]

    Let’s keep in mind something that we learned in the induction of egoism: when we discussed altruism, I warned not to let an opponent limit, circumscribe or constrain his theory. You shouldn’t let him say: “a dime for the poor, the rich dutifully give for the sake of the world’s poor, etc.” If your goal is to assess altruism as a way of life and as an inductively reached universal principle of ethics, and afterwards apply it to a wide range, then you should take altruism in the same manner as we did with egoism.

    Bob the altruist says: “Mr. Alms will be at your front door shortly. He just needs 10% of your silverware: hand that over to him as your duty, but don’t give him your bed, chairs, fans, etc.; you have some rights and protections, too. A little sacrifice is good, but complete sacrifice is absurd: we would all die. So there are times when some selfishness is good.”

    If an altruist said all of that, it would be the greatest subversion he is capable of achieving. If you agree with him, he will have sabotaged your mind by completing the banishment of principles. The only issue then becomes concrete-bound: “who gets the silverware?”

    But what if you were to respond to him at an abstract, principled level: how can you reconcile these opposing ethical approaches?

    Bob’s answer simply brushes your view off: “This isn’t a matter of theory, it is ethics, the field of practical decision-making. Don’t you feel that the silverware is enough for Mr. Alms? I’ve already discussed this with everyone else in the community, and they feel that it is enough—what do you feel?” (The principled person sees that this answer reduces ethics to obeying the intuitions or feelings of others.)

    The frosting on the altruist’s cake happens when Bob is able to convince you of this: “Mr. Alms will be grateful for your alms-giving, so you will get his good will and support out of performing this duty. See? There’s no real contradiction between the views of selfishness and sacrifice. You can have both, and by sacrificing just a little bit, you can become more selfish.”

    Now, to apply this kind of example to our own case of the arbitrary statement with “evidence”:

    The advocate of the arbitrary says: “Mr. Conspiracy will visit you. Just accept the first arbitrary claim that he makes. After all, you’re a reasonable man—some arbitrary claims make life interesting and open possibilities that we wouldn’t have thought of otherwise. Too much, of course, is ridiculous, as we’d never get anywhere. Sometimes, we need real evidence, too.”

    No attempt is made to reconcile or make compatible in any way these two completely opposite principles of epistemology. One arbitrary claim, he tells us, and then reason all the way from then on. You’ll become confounded if you don’t answer, “I refuse to say anything about this, I’m leaving this discussion.”

    Let’s suppose that you do try to prove that your friend doesn’t believe in Astrology and doesn’t attend classes on it: 23 affidavits, from your friend, his girlfriend, the Astrology professor, and his students, all together prove that he doesn’t attend classes on, or believe in, Astrology.

    “Aha,” Tim gleefully exclaims, “Roderick was wrong! I suppose that the arbitrary isn’t so “arbitrary,” in the end. Perhaps the arbitrary can be rationally proved, as well!”

    I then lose my cool and say, “Your fact-gathering and 23 affidavits don’t prove anything! If they were supposed to prove that arbitrary claim about your friend, then I might as well say that your 23 affidavits are a part of a cunning conspiracy by a group of liars—prove that it isn’t. This is what I mean when I say that it’s impossible to refute the arbitrary. You can’t do anything cognitively with the arbitrary.”

    Tim, the advocate of the arbitrary, would then remark: “There’s no need to be such an extremist about this: everything cannot be arbitrary. So let us all decide this issue communally (intuitively): 23 affidavits sounds like pretty convincing evidence to me. If we allow Roderick’s objection, then we’ll be overwhelmed with the arbitrary. It will be far too much, just like if Mr. Alms were to ask for your bed, too; we need to draw the line somewhere.”

    Now, Tim’s acting like he’s the defender and spokesman for balance, moderation, and level-headedness.

    Tim finally says: “Who here feels with me that, leaving aside all technicalities, 23 affidavits prove something? Do we have a court system with valid affidavits, or are we going to live in some fantasy world like Roderick here?”

    Then the debate turns into: just accept the first arbitrary statement, or the 20th, or only once a week, or only in our constitutional laws, or only in Astronomy.

    So, here’s the layout of the whole racket: If the arbitrary were fully advocated as a principle, the advocate of such a view would look ridiculous, and that viewpoint destroys all cognition. By the same logic, if a person advocated altruism as a principle, the person would look medieval and sociopathic, and the viewpoint destroys our capacity to survive. So what certain people do is they covertly slip in irrational principles only now and then, as a guise; they don’t want to be perceived as extremists, so they only ask that people follow their principles a little bit. They conceal their principle and make it appear to be its opposite, and then go on to proclaim that they are the real defenders and champions of reason, proof, selfishness, human decency, justice, etc.

    What, then, causes the belief that you can prove an arbitrary claim? The failure to think in principles. The failure to insist upon inductive generalizations when confronting an issue. The willingness to discuss or allow anything into your thoughts that is disconnected from any principles that you know. The cause is the unprincipled acceptance of some other person’s irrationality, even if for a few moments, and even if it was hypothetically, just to entertain his argument.

    The conclusion of this case is: there is no case at all in which you can objectively process an arbitrary claim, not if you uphold reason as a principle and as an absolute.

  5. Like
    Roderick Fitts got a reaction from dream_weaver in Reduction of Objectivity (Ayn Rand)   
    Now that we’ve reduced and induced Aristotle’s idea of “objectivity,” we can start the reduction of Rand’s concept of “objectivity,” which is an important advancement over his idea.

    Let’s start with Ayn Rand’s definition, though presented in Leonard Peikoff’s words: “volitional adherence to reality by following certain rules of method, a method based on facts and appropriate to man’s form of cognition.”

    The “rules of method” are those of Aristotelian logic, but there are important epistemological discoveries within Rand’s version of objectivity that we need to focus on. Aristotle wouldn’t have regarded man’s form of cognition as something worth analyzing in order to understand how we reach knowledge.

    For Ayn Rand, by contrast, it wasn’t enough that our method be based on facts: our consciousness contributes something in the acquisition of knowledge, concepts are partly human, and as a consequence objectivity has to take this element into account. So, to reduce the idea of “a method based on facts and based on human consciousness,” we need to understand Rand’s theory of concept-formation, specifically why it is that concepts require both reality and human consciousness.

    There is some element of human consciousness involved in forming concepts, and recognizing this element will allow us to learn something that is inherent in all concepts, then to form Rand’s theory of concept-formation, and after that to amend Aristotle’s view of objectivity.

    The next step down is: how did Rand reach her theory of concept-formation? What observations did she need to reach it?

    There are four elements of consciousness that we need to know about before reaching her theory of concept-formation:

    1. We need to know beforehand that consciousness has a specific identity, the principle that identity is the means to knowing reality, not the impediment.
    2. The identity of concepts includes the fact that they do something with measurements, and this is the means by which concepts can surpass and rise above percepts.
    3. An understanding of cognitive integration is necessary before we notice that aspect of the identity of concepts; we need some general awareness that integration plays a crucial role in gaining knowledge.
    4. Of course, before we can put things into a sum, integrate them, we must be able to take things apart, go through a certain sequence, a series of steps. This leads to our earliest understanding that knowledge inherently has a certain kind of sequence—concept-formation involves a process of forming one concept, and then forming another based on the earlier one, etc. To understand integration, we need to reach the idea that there’s an order to knowledge.

    And this is where we’ve reached the end of the reduction, since below “an order to knowledge” are specific items of knowledge that we later relate as being in a certain sequence or pattern, and these are available to introspection.

  6. Like
    Roderick Fitts got a reaction from dream_weaver in Induction of Objectivity (Aristotle)   
    With objectivity now reduced, we can work through the steps Aristotle had to take in order to induce his principle of objectivity. There are essentially five:

    1. Grasp the distinction between percepts and concepts.
    2. Understand that concepts are capable of error, whereas percepts are not.
    3. Learn that the functioning of concepts is under our control, whereas that of percepts is not.
    4. Discover that we can somehow use percepts as a means to measure concepts.
    5. Know, from the above, that a method is necessary, and that it is possible because we know what it would consist of: reducing the fallible part to the infallible part.

    Percepts and Concepts

    The first step is to reach the distinction between percepts and concepts, what the Greeks called “sense” and “idea.” The distinction originated with Socrates or Plato, depending on how one interprets Plato’s dialogues. What Plato had to do, and what Aristotle and all of us have to do, is mentally observe similar instances of ideas in contrast to sensory experience, to our percepts. With the contrast, Plato was able to draw out a list of attributes that belonged to ideas as opposed to sense experience:

    Ideas were general or universal (Beauty, Justice, Virtue, etc.); sense experience was particular or concrete (the beauty of a maiden, the piety of a man, etc.). The One and the Many—we’re aware of countless things which nevertheless seem to have the same properties; for instance, John is the same person, no matter what age he is or any differences in his appearance. Plato realized that this distinction among physical things also applied to these mental phenomena, ideas. Ideas are abstract, non-material, whereas the senses interact with our bodies and material objects. Ideas are immutable, changeless, whereas sensory objects are always changing, coming into being and going out of existence. And so on. This first step itself consists of a great many inductions Plato had to make before even reaching this distinction. He had to realize that the phenomenon of ideas was universal to man, but absent in animals, which led to the induction that animals possess senses, but not ideas. All men possess ideas—another induction. He had to induce that all ideas have the same composite attributes—that anything that has universality would be immutable, non-material, etc.

    These weren’t very difficult for Plato and Aristotle to induce, as these conceptions of ideas and the senses were easily integrated with the long-known view that man was the animal that reasons, argues, judges, etc. The field of epistemology started because Plato’s discovery led to the further discovery that reason, the special faculty of humans, was the faculty of ideas or universals, in contrast to other animals who had only the faculty of sense.

    Error-free vs. Fallible

    The next step: before we get to an idea of method, we have to discover something about error. Aristotle himself made the necessary discovery, following Plato’s distinction of ideas and senses: he states explicitly and on numerous occasions that the senses (specifically the “special objects” of the senses) can never be in error, but that the intellectual interpretations of sense-data can be mistaken. He says, for example, that the seeing of the special object of sight, i.e., color, like “white,” can never be in error, while the belief that the white object seen is a “man” may be mistaken (On the Soul, Book 3, Chapter 3). He made this clear-cut induction without a clear knowledge of how the sense-organs operate or even of how we form concepts, except that it involves abstraction and induction. But from examples like the white object seen being a man, and many examples of seeing, hearing, etc. being free from error while the thought associated with the sense experience was liable to error, it was relatively simple for him to generalize, thus grasping the fallibility of ideas and the infallibility of the senses.

    What We Control

    The third step: we also have to know where we are in control, and where we’re not. The Greeks discovered this; Plato and Aristotle knew that the senses are automatic, that they are an interaction of some material object and your body’s sense-organs, and that no effort is needed on your part. And Aristotle knew implicitly that concepts function under our free will: we can deliberate, guide our mental and physical actions, make choices based on our circumstances, improve our skills (in debate, for example), etc. He knew that no act of will could affect our senses once our organs had interacted with the objects of sense, that no mental effort was necessary for the interaction, and that the opposite was true on the level of ideas. (For instance, in the Topics, Book 8, Chapter 2, Aristotle advises that during a debate, when you present an inductive argument based on several cases and your opponent won’t admit it, you should cover all the cases with an already-known term, or a newly-coined one, which places the burden on the opponent to disprove your argument.) Clearly, he thought that the use and creation of ideas was under our control, and he must have induced the restriction of free will to the conceptual level from numerous observed cases in which he discerned the role of choice.

    So, Aristotle knew that the part that can go wrong is in our control, and the part that is error-free is not. With that knowledge, advances could be made in the science of epistemology, specifically an account of objectivity. The goal is to use our free will to bring our ideas into correspondence with the senses. Aristotle will propose to use the safe, error-free part as a standard against which to test the part that’s liable to error.

    The Connection between Percepts and Concepts

    This, however, leads to an interesting question: Since ideas and senses are opposites in so many ways, what could be the connection between the two? Plato is well-known for regarding ideas and the senses as so different that they occupy different realms, and thus that there are two worlds. Aristotle had to realize that there’s only one world, and that ideas come from sensory experience. He describes the process of ideas coming from sensory experience in Posterior Analytics, Book 2, Chapter 19 as a progression from perception to memory to experience (memories of the same thing) to a universal. The essence of objectivity is being able to reduce our ideas to the evidence of the senses. So Aristotle’s discovery that all ideas come from sense experience was an important and necessary induction.

    How did he reach it? He had to directly observe the process he used in forming concepts. And the essential process of concept-formation, the one which he was the first to name, he termed abstraction. By abstraction, he meant a special focus on the similarities among things, while ignoring or not specifying the magnitudes of their differences. Certain things have similarities which we can cognitively focus on and “pull out”: we separate in thought what can’t be separated in reality. Once they are mentally separated, we can discover an implicit universal that applies to all the particulars of a certain kind, and that allows us to form the concept, definition, or proposition. He performed this analysis on many concepts, concepts that he observed introspectively, and concepts he heard from other people. This insight into the nature of abstraction led to Aristotle’s induction that all ideas are formed by a process of abstraction from the data of the senses, adding that higher-level abstractions are formed from lower-level abstractions that were initially formed from sense experience (see Posterior Analytics, Book 2, Chapter 19). For Aristotle, this corrected Plato’s original thesis that ideas do not come from the senses but are recollected from our previous existence. This in turn led to the deductive conclusion that there are no ideas apart from sensory evidence, and thus to the view now known as “empiricism”: that all knowledge is based on sensory experience, and that there are no innate ideas (the doctrine of innate ideas having originated with Plato).

    Here we could use the genus principle: knowledge above the level of a jellyfish, the level of discrete sensations, requires some sort of certification by perception, some validation. Higher animals (like tigers) already have perception, so their knowledge is directly validated. The distinctively human method requires validation by perception because it is conceptual: concepts have to be reduced to percepts. This is what we have to know to form logic, because we now know that conceptual validation isn’t given without effort, but requires some kind of reduction to the level of percepts.

    The fourth step would be to understand what basic things people did with concepts, so that we can begin to search for a method of checking our ideas against the senses. The key fact here, known well before the Greeks, is that people argued: they had structured discussions involving chains of ideas, which would lead to other ideas or other chains. The Greeks knew that propositions called “premises,” when linked together, would lead to a proposition called a “conclusion,” and that this structure was called an “argument.” (Aristotle discusses “premises” and “conclusions” in Prior Analytics, Book 1, Chapters 1 and 4, for instance.) The Greeks also knew that these arguments were a kind of reasoning, and that arguments were a crucial way of using ideas to gain knowledge.

    There were many observations and inductive conclusions required before anyone could reach the ideas of “arguments,” “conclusions,” etc., and we’ll take them for granted here. People also knew in Plato’s time that you could unravel an argument by asking what a given premise depended on, which implies that there can be a chain of arguments. In this way, they learned that knowledge is relational and hierarchical: relational, because a person could gain important knowledge by relating one cognitive item to another (like “a ‘tree’ falls under the idea of ‘plant’”) as opposed to starting in a void; hierarchical, because these relations among ideas can be organized into complex, protracted structures which go back to some kind of beginning or starting point.

    It’s this context concerning arguments and their structures, together with what Aristotle figured out about concepts, that formed the prerequisites for his creation of the science of logic.

    A Method that was Both Necessary and Possible

    Before we induce what Aristotle learned about logic, we should first reduce it, which will give us a clue to the discoveries he had to make.

    Logic allows you to validate or prove an idea by showing you how to establish valid relations among your knowledge, leading back to axioms, to sense-data. How did he discover that “validation” is established by leading an idea or chain of ideas back to axioms or sensory data? That presupposes that he discovered the principle of validity, whatever it is that makes an argument valid. Once he knew that, he could realize that a proof consists of a chain of valid arguments. We thus need to discover the basic principle of validity. To determine it, we’ll need to create a list of valid arguments on one side, compare them to a list of invalid arguments on the other side, and abstract what the valid ones have in common.

    But before that separation can happen, we’ll have to discover that we need rules to guide us in relating ideas, since up until now we’ve been focusing on a general guide of validating concepts as such, not specifically their relations to other concepts. Aristotle’s validation of concepts will progress by analyzing concepts as combined into statements, and discovering rules to guide us whenever we combine these statements to draw a conclusion.

    So, to induce all this: how did Aristotle discover that we need rules to direct us in combining propositions to reach a conclusion? The Greeks before Aristotle knew that some arguments followed from their premises and some did not, and philosophers from the beginning of philosophy criticized each other for drawing unwarranted conclusions, or for denying what they had admitted in their premises. The Sophists were well-known for deliberately using invalid arguments and convincing people to accept them. The Greeks knew that reasoning was the means by which we learn (or at least one important way), and they knew that a person’s reasoning could get off-track and be criticized as a result. All of this was known before logic: people could grasp that an argument didn’t follow even before studies of arguments like Aristotle’s, and Aristotle used this knowledge to devise the method of reasoning.

    Aristotle knew that a method was necessary because he knew that the mind’s reasoning can go wrong, that reasoning is not direct observation of the self-evident. And he knew that this method was possible because he knew that the area of reasoning is the area under our control, that we’re not merely reactors. People argued, and they couldn’t figure out how or why arguments would fail, and yet Aristotle devised an ingenious universal method for checking chains of ideas in our consciousness. He set out to formulate a set of principles such that an argument which followed them was valid, and an argument which disobeyed them was invalid. Thus, he abstracted a method of thought that pertains to anything and everything: books, teachers, ships, houses, ideas, etc. The result would be the largest induction that could ever be made.

    But even the discovery that we need these rules was itself inductive: how did he know that we need rules in every case of reasoning? He knew that every case of reasoning was volitional and fallible, liable to some kind of fault. He couldn’t examine every case of reasoning; rather, he examined an enormous number of arguments (simply read the Prior Analytics for a sample of his study!) and generalized that all reasoning requires rules. There’s no other way to reach this generalization except by induction: neither by enumerative induction and inspection of every case, nor by deduction.

    His goal was to find rules to distinguish valid from invalid arguments. The results of his efforts can be read in his Prior Analytics, which inductively presents each argument type, even providing examples for the reader to work through his argument structures, which use variables. I’ll add that the idea that variables could be used to teach argument structures was another innovative induction of Aristotle’s. His amazing discovery was that all the valid argument structures were related by having certain forms, rather than validity depending on the content or material of the argument. Without this discovery, logic would have been impossible: people would have thought that only certain structures would work for certain content, or that the content determined the validity of the argument.

    Here is an example of a valid argument (the syllogistic mood that the medievals called Baroco, in which “a” means a universal affirmative proposition, like “all men are mortal,” and “o” means a particular negative proposition, such as “some pigs are not messy”):

    “a”: M belongs to all S
    “o”: M does not belong to some B
    “o”: S does not belong to some B

    To give an example of this argument:

    All swamps are murky.
    Some books are not murky.
    Therefore, some books are not swamps.

    The major premise (M belongs to all S) states a universal property of a subject; the minor premise (M does not belong to some B) states that some members of a different subject don’t have this property; and the conclusion (S does not belong to some B) is a conversion of the two premises: it infers that those members of the second subject do not belong to the class of the first subject. (A conversion occurs when we infer a proposition from a different proposition by interchanging the subject and predicate.) Aristotle, by using variables, holds that arguments like this, and the other figures he discusses in his book, are structurally valid no matter what their content.
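    To make “valid no matter what the content” concrete, here is a minimal sketch in Python (my own illustration, not anything in the original essay or in Aristotle): it treats the three terms S, M, and B as arbitrary subsets of a small domain and searches for any assignment of content that makes both premises true and the conclusion false. None exists, which is exactly what formal validity means.

        from itertools import product

        def baroco_counterexample(domain_size=3):
            """Search for an interpretation where the Baroco premises are true
            but the conclusion is false. Returns None if no model is found."""
            domain = range(domain_size)
            # Each term (S = swamp, M = murky, B = book) is modeled as a tuple
            # of booleans: predicate[x] says whether individual x falls under it.
            for S, M, B in product(product([False, True], repeat=domain_size), repeat=3):
                all_s_are_m  = all(M[x] for x in domain if S[x])       # "a" premise
                some_b_not_m = any(B[x] and not M[x] for x in domain)  # "o" premise
                some_b_not_s = any(B[x] and not S[x] for x in domain)  # "o" conclusion
                if all_s_are_m and some_b_not_m and not some_b_not_s:
                    return (S, M, B)  # a counterexample, if one existed
            return None  # none found: the form holds regardless of content

        print(baroco_counterexample())  # prints None

    Since the search ranges over every possible assignment of content to S, M, and B, swapping in different subject matter changes nothing; only the form matters.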

    Here’s an example of an invalid argument:

    C does not belong to B
    C does belong to some X
    B belongs to all X

    Or, to particularize this argument:

    No bathtubs are made of cardboard.
    Some boxes are made of cardboard.
    Therefore, all boxes are bathtubs.

    We supposed in the major premise that no bathtubs are made of cardboard (C does not belong to B), and this is convertible with the statement that no cardboard things are bathtubs (B does not belong to C). But in the minor premise, we also supposed that some boxes are made of cardboard (C does belong to some X); to make the argument valid, we would have to conclude that some boxes are not bathtubs (B does not belong to some X), but that isn’t the conclusion we reached in the argument. The conclusion that all boxes are bathtubs does not follow from its premises, and it clashes with what the premises present. (What this “clash” is will be discussed a little later.) In fact, the argument structure will always be invalid, no matter what the content is, with the result being that “there will be no syllogism,” as Aristotle often remarks about invalid arguments.
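    The same brute-force search (again my own hedged illustration, not the author’s) immediately turns up a counterexample to this form: a domain containing a single cardboard box that is not a bathtub makes both premises true and the conclusion false.

        from itertools import product

        def invalid_form_counterexample(domain_size=2):
            """Search for an interpretation making 'No B are C' and
            'Some X are C' true while 'All X are B' is false."""
            domain = range(domain_size)
            for C, B, X in product(product([False, True], repeat=domain_size), repeat=3):
                no_b_is_c   = not any(B[x] and C[x] for x in domain)  # major premise
                some_x_is_c = any(X[x] and C[x] for x in domain)      # minor premise
                all_x_are_b = all(B[x] for x in domain if X[x])       # conclusion
                if no_b_is_c and some_x_is_c and not all_x_are_b:
                    return (C, B, X)  # found: the structure is invalid
            return None

        print(invalid_form_counterexample())  # prints a model, not None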

    Aristotle’s Predisposition towards Forms and Rules

    How did Aristotle find out that validity in arguments is an issue of form? Here, Dr. Leonard Peikoff has two speculations, as he finds the idea that Aristotle merely observed instances of arguments and induced his discovery to be too simplistic.

    Two factors may have predisposed Aristotle to see validity as a formal issue rather than a material one, prior to his induction about arguments and validity. One was his knowledge of mathematics; the other was his philosophical distinction between “form” and “matter.”

    The science of mathematics, especially geometry, was already well-developed as a deductive system, and this was a critical model for someone working on an even more abstract science, which is what logic is. Geometry was a well-suited deductive model, with well-defined rules for how you approached a subject matter and which axioms you must start with; each theorem and proposition would unfailingly follow from the preceding ones, and the presentation would end with QED—“what was to be demonstrated.” He understood that such broad geometric reasoning was possible because the science dealt with abstractions and not specific concretes. You could reach universal conclusions about equilateral triangles or right angles, but not about a particular triangle whose sides were 10 feet, 8.4 feet, and 3 feet. This might have led Aristotle to figure out that we make logical connections in accordance with abstract rules, not rules with specific contents contained within them. So in developing logic, he searched for universal rules, even more universal and abstract than those of geometry, which deals with space, size, shape, and figures.

    The second factor was that his entire philosophy rested on the distinction between form and matter. On practically every issue or subject, he states that there is something with matter, its composition, and a form or structure in which the matter exists. He wasn’t always correct in applying the distinction, but it was a brilliant thought, and he used it to analyze God, the soul, perception, elements of the physical world, all kinds of animals, and even cause-and-effect (the well-known “formal” and “material” causes). With that in mind, what would make more sense than to apply the distinction to chains of thought as well: splitting every argument into its form and matter, structure and content, using abstraction to consider the greatest possible range of arguments that could exist, and concluding that in each case the validity depended on the form and was independent of the matter? This led to Aristotle’s induction that the validity of all arguments depends upon their form, which was the discovery of logic.

    Non-contradiction, the Excluded Middle, and Objectivity

    We have yet to find the unifying principle, however. What is common to all of the valid forms of arguments, and thus defines validity? He discovered that in every case of invalid reasoning, there was a contradiction, a mistake, a violation of the law of noncontradiction. No matter the form of the argument, an invalid argument always fails due to some sort of contradiction, some attempt to claim “A” and “non-A” at the same time and in the same respect. This led to Aristotle’s application of the principle of noncontradiction to all thought, including arguments: “…the most indisputable of all beliefs is that contradictory statements are not at the same time true” (Metaphysics Book 4, Chapter 6).

    Aristotle didn’t invent the principle or law of noncontradiction; Plato or Socrates before him might have, because in the Republic Plato writes: "It is obvious that the same thing will never do or suffer opposites in the same respect in relation to the same thing and at the same time" (4:436b). (Or as Aristotle words the principle: “the same attribute cannot at the same time belong and not belong to the same subject and in the same respect” (Metaphysics, Book 4, Chapter 3).) But Aristotle discovered the law’s role in thought: that it is the law which governs all trains of thought. And he did this by another grand induction; the law’s application to thought is not deducible from the definition of knowledge or from the statement of the law. From the fact that nothing in reality can be a contradiction, it would not follow that the invalidation of all reasoning consists in the attempt to maintain a contradiction.

    Though he wasn’t the discoverer of the law of noncontradiction, he did discover its corollary, the law of excluded middle, as well as that law’s role in thought. The law of excluded middle states that: “…there cannot be an intermediate between contradictories…” (Metaphysics, Book 4, Chapter 7). Its application to thought states that: “…of one subject we must either affirm or deny any one predicate” (Ibid.). Everything is either A or non-A at a given time and in a given respect; in thought, only the assertion or the negation of something can be true at a given time and in a given respect: there is no third alternative to assertion or negation, or to existence or non-existence, in reasoning. Together, the laws of noncontradiction and excluded middle state the basic rule of reasoning and the principle of logical validity: all reasoning must either affirm or deny something about some subject at a given time and in a given respect, without contradiction.
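    In modern propositional terms (a rephrasing of my own; Aristotle worked with terms and predicates, not truth tables), the two laws can be checked mechanically over every possible truth value:

        # For every truth value of a proposition A:
        #   "A and not-A" is always false  -> law of noncontradiction
        #   "A or not-A"  is always true   -> law of excluded middle
        for A in (True, False):
            assert not (A and not A)  # noncontradiction holds
            assert A or not A         # excluded middle holds
        print("Both laws hold for every truth value of A.")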

    We can now consider the final point: we know what a valid argument is, but what is the full validation of a series of arguments—what is proof? Aristotle knew our ideas came ultimately from our sense experiences, and that the purpose of his method of logic was to conform our thinking to reality. His focus on tracking reality can be seen in his discussion of the ambiguity of names: “the point in question is not this, whether the same thing can at the same time be and not be a man in name, but whether it can be in fact” (Metaphysics, Book 4, Chapter 4; italics mine). Because of what he knew, he reached another induction: that proof consists in taking arguments step-by-step back to sensory data and axioms.

    The medium for the progression of a chain of ideas was what Aristotle termed a conversion, such as the one used in my deduction above, which concluded that some books are not swamps by relating them to the quality of murkiness. And the validity of a series of arguments was determined by testing the constituent propositions against the law of noncontradiction: “It is for this reason [i.e. the possibility of contradicting oneself] that all who are carrying out a demonstration [i.e. an argument that leads to knowledge] reduce it to this as an ultimate belief; for this is naturally the starting-point even for all the other axioms” (Metaphysics, Book 4, Chapter 3). The conversions allow us to reach the inferences we seek to prove using pre-established knowledge (or presumed knowledge); the syllogistic forms give us the valid structures we need to present our reasoning; and the axioms of logic, the laws of noncontradiction and excluded middle, provide a means for us to check whether our reasoning is contradictory at any step.
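    The traditional conversion rules can be summarized in a short sketch (mine, with hypothetical names; the “a”-conversion assumes, as traditional logic did, that terms are non-empty):

        def convert(form, subject, predicate):
            """Return the converse of a categorical proposition, or None.
            'e' (No S are P)   -> 'e' (No P are S)
            'i' (Some S are P) -> 'i' (Some P are S)
            'a' (All S are P)  -> 'i' (Some P are S), "per accidens"
            'o' (Some S are not P) has no valid converse."""
            if form == "e":
                return ("e", predicate, subject)
            if form == "i":
                return ("i", predicate, subject)
            if form == "a":
                return ("i", predicate, subject)  # assumes S is non-empty
            return None  # "o" propositions do not convert

        # The conversion used above in discussing the invalid argument:
        print(convert("e", "bathtubs", "cardboard things"))
        # -> ('e', 'cardboard things', 'bathtubs'): "No cardboard things are bathtubs."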

    Gaining knowledge by choosing to adhere to reality through this method of proof, which works backward until we reach axioms and sense experience, i.e., by the use of logic, with noncontradiction as the overarching principle: this is what Aristotle conceived objectivity to be.

    Conclusion: Objectivity vs. Subjectivism

    One last issue: to help clarify objectivity as Aristotle understood it, we should contrast it with an opposing idea: subjectivism. What would Aristotle’s definition of subjectivism be? Instead of “choosing to adhere to reality,” he would most likely say “volitional indifference to or departure from reality.” Instead of “by the use of logic,” he would say “by the disdain of logic.” And Aristotle knew plenty of examples of these, and could easily use them to fill out this understanding of “subjectivism.” He discusses many instances in which people would deliberately use ambiguous words in arguments, ask distracting questions, offer a proof that doesn’t actually follow, and a number of other ways in which arguments were made using some criterion other than, or opposed to, logic. Aristotle covers many broad examples of logical fallacies in his Prior Analytics and On Sophistical Refutations, showing that fallacies can pertain to the form of an argument or to its material or content.

    Aristotle could then have contrasted his view of objectivity with case after case of people being non-logical or illogical, and induced the principle that if you use something other than logic, you cannot claim to be adhering to reality.

    Meta-blog, automatic cross-post
  7. Like
    Roderick Fitts got a reaction from Grames in Reduction of Objectivity (Aristotle)   
    The aim of this essay is to reduce the idea of objectivity so that we can inductively reach Aristotle’s understanding of the concept. This matters because we need his understanding to really grasp Ayn Rand’s discoveries. After inducing Aristotle’s view, we can induce the full, Objectivist understanding of objectivity from his development.

    The definition of objectivity Aristotle would have given: “volitional adherence to reality by the method of logic.”

    Dictionary definition: “Not affected by personal feelings; based on facts.” Based on facts, and not based on feelings—this is the main thing people understand about objectivity.

    It isn’t enough to set aside your feelings in a cognitive context without some other means of grasping facts, and “based on facts” can’t simply be about percepts, because then all conceptual knowledge would be barred from objectivity. So the dictionary definition informs us that we need a method, rules of thinking, that ties thinking to facts instead of feelings.

    The first step down from this idea of objectivity is: “The method of adhering to reality to gain knowledge,” and we learn what the method is later. How would we grasp the idea that we even need a method?

    It isn’t as simple as this: from observation and induction we know that man is capable of error, that he’s fallible; from this, we deduce that you can’t be certain of your conclusions, and that therefore we need a method of gaining knowledge to guide us. That is a rationalistic argument.

    It is necessary to grasp that we’re capable of error if we hope to even reach the concept of objectivity, but “objectivity” and “error” are vastly far apart from each other, cognitively speaking. The understanding of the fact of error came very easily, going way back into prehistory: people would bring home the wrong animal to eat, bring the wrong things needed to start a fire, etc. The striking fact, which the rationalist would overlook, is that the idea that people are fallible didn’t suggest to anyone before the Greeks that we were in need of a method for checking our thinking and conclusions. In effect, the rationalist is taking as common sense what was actually a monumental discovery by the Greeks, specifically by Aristotle. The pre-Greeks had a means to deal with errors, but it wasn’t objectivity; it was intrinsicism: authority, their faith in authority. The Pharaoh knows, or God knows, or whatever. It’s an invalid leap to go from “people are capable of error” to “we need a method of checking our thinking.”

    So, to grasp why we would need a method at all, we need to know something about the mind: specifically, what its operations are, what is possible to it, where it goes wrong, and how. If we don’t know how it goes wrong, or where, or what it could be doing differently from what it’s doing, then we have no means of improving the mind. The first thing we need to know is that there are some areas or operations of the mind in which it is safe, or infallible. We have to know that first, before we can start looking for a method, as that knowledge gives us a clue as to what we can do when we’re using a fallible process.

    Once we know that some part of our mind is error-free, we can figure out later that we can guide our minds reliably by using the safe data to check our fallible data, which is the essential process of objectivity. Later, we determine that the way to perform this check is to reduce all conceptual products to sensory observation. This idea of infallible data is important, because without it, we could never devise a method of guiding ourselves to the truth, and we could not count on it as underlying our conclusions, including our conclusion as to how we can improve our mental processes. There are, then, important distinctions within our individual consciousness which we have to discover before we can construct a method for correcting our errors, or even preventing them.

    How could someone discover that there’s a process that can go wrong as opposed to a process that is safe?

    Well, we know that we have free will, that we have control over something in our consciousness, because it would be impossible to wonder about how to guide our thinking, or to find ways to improve our conclusions, if the whole operation of the mind were out of our control.

    The idea we’re getting to is that Aristotle had to make a crucial discovery: there’s a part of the mind that can go wrong, and that’s the part that we’re in control of, where our free will reigns, and that there’s a part of the mind that is safe, where we don’t need control. As a result, we can decide to check the part that can go wrong using the other, error-free part. That’s what we have to know before we can search for a method of guiding our thinking.

    What obvious major discovery about consciousness had to be made before we could determine that one part is fallible while the other isn’t, and that one part is controlled by our mind while the other is not? What’s the basic distinction of consciousness that had to be discovered before we could discover other distinctions and thus grasp the need of a method? The distinction between percepts and concepts. Not in those exact words: for instance, Plato and Aristotle called the distinction “the realm of sense” and “the realm of ideas.” Ideas or Forms or Universals or Essences: how we word it is irrelevant. The point is that without this distinction, we would have no footing in prescribing guidance.

    So, we couldn’t reach the method of logic until we knew that the method was necessary and possible, and to know these we would need to know three things:

    1. We need to know what kinds of error are possible. That means that we would have to discover what kind of mental content is fallible vs. infallible. This is necessary, because it gives us a clue as to what we’re trying to correct (the fallible part), and that we’re trying to accomplish this by somehow measuring the fallible part against the infallible part.
    2. We have control over the fallible part—free will reigns over the fallible area. There’s no point in prescribing a method if we have no control over the relevant part of the mind.
    3. What is the relationship between these two areas? How could we relate, measure or reduce the fallible to the infallible?

    Once we know those three, we’ll know that a method is both necessary and possible. The underlying distinction, between percepts and concepts, is directly observable: percepts by extrospection, concepts by introspection.

    Meta-blog, automatic cross-post