Objectivism Online Forum

Posts posted by dougclayton

  1. You know something that I find ironic? People harass me because of the fact that I'm an egotist. However, if they looked closer at Objectivism, they'd realize that it's superior even from an altruist point of view; Objectivism improves society much more than their primitive measures do.

    You are right that everyone is better off living in a free capitalist country, but it is not true that "improving society" is the altruist point of view. If that were the case, they would have adopted capitalism a long time ago. I know that's what they pay lip service to, but that isn't what they are actually after.

  2. Aside from the question of the word 'validity', you've reversed things here. I would think stolen concept is implicitly (to accept your terms) accepting a concept [...] while explicitly purporting to demonstrate its denial (not, as you have put it, purporting to demonstrate its validity).

    Yes, that is what the stolen concept fallacy is. That is why Rand called it "assuming that which you are attempting to disprove." Also, I can't speak for Dave, but I believe that he means (my words inserted):

    The fallacy of the stolen concept is the implicit acceptance of a concept one is trying to deny, not as a means of demonstrating the denial's absurdity, but as part of demonstrating its [the denial's] validity.

    In other words, in reductio ad absurdum, you start with the claim and show how it logically contradicts an independently known truth. In the stolen concept fallacy, you rely on concepts derived from the stolen concept as a means of denying it. There is no "assume that..." step, and no "but this contradicts what we already know" conclusion.

    To take the classic example of "property is theft," one reductio ad absurdum argument against property might be the following:

    1. Assume property (that is, exclusive use of some material good for an individual) is good.

    2. Eventually one person will own everything and let everyone else starve, which must be good.

    3. The human race will become extinct when he dies at some point due to old age, which must be good.

    4. But human extinction is bad, so the initial assumption (that property is good) is wrong.

    As threadbare and false as this argument is, note still that it moves from an assumption of property to a contradiction of a truth that is not about property.

    Consider instead the opening argument of Proudhon's treatise, What Is Property? An Inquiry into the Principle of Right and of Government (http://dhm.best.vwh.net/archives/proudhon-ch1.html):

    If I were asked to answer the following question: WHAT IS SLAVERY? and I should answer in one word, IT IS MURDER, my meaning would be understood at once. No extended argument would be required to show that the power to take from a man his thought, his will, his personality, is a power of life and death; and that to enslave a man is to kill him. Why, then, to this other question: WHAT IS PROPERTY! may I not likewise answer, IT IS ROBBERY, without the certainty of being misunderstood; the second proposition being no other than a transformation of the first?
    (By "property," he is actually referring to "unused land that you rent out," but his failure to essentialize his concepts properly does not change the fundamental error.)

    For this to be reductio ad absurdum, you'd have to start with "property is good" and then show that it leads to a conclusion independently known to be false. Thus, if this were a reductio, his absurdity would have to be "and the robbery that this transformation yields is good," but this is not independently known to be false. Furthermore, it cannot be independently known to be false, because in fact the concept "robbery" is dependent on "property"--and ignoring this strict dependence is exactly the fallacy.

    Note too that he does not commit the fallacy in his claim about slavery, which can be seen just by the fact that murder is not a concept based on slavery. (To match his second claim, his first would have to be, "Life is murder.")

    Is that at all useful in illustrating the difference?

  3. What kind of random number generator are you using?

    That's funny, that was going to be my question.

    Well, the code I am using to generate the random numbers goes something like this:

    Randomize() ' seed the random number generator from the system timer

    randomnumber = CInt(Int((DiceType * Rnd()) + 1)) ' uniform integer from 1 to DiceType

    This is the advice the Help Index gave me on the subject, and it seems to work well enough.

    One warning: if you are planning to get statistical-grade results, and your features suggest you are, forget about Rnd (and pretty much every other standard VB function). They're terrible.
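    If it helps, here is a minimal sketch of one step up (assuming VB.NET, where the snippet above also compiles; DiceType is your variable from above). System.Random is still not statistical-grade, but it behaves better than the legacy Rnd():

    Dim rng As New Random() ' seeded from the system clock by default
    Dim roll As Integer = rng.Next(1, DiceType + 1) ' uniform integer in 1..DiceType (upper bound is exclusive)

    For genuinely statistical results you would want a dedicated generator, such as an implementation of the Mersenne Twister.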

  4. Economic Growth can be defined as :

    [[GDP(at time t) - GDP(at time t-1)] / GDP(at time t-1)] * 100

    You know, I'd wager that Jennifer knows how to compute the percentage change in some variable. :) I don't think that's what she was asking.

  5. While I'm glad to see you do plan to develop one single feature at a time, it seems as if you have misunderstood parts of my post.

    Evidently I feel that it isn't a waste of my time, otherwise I wouldn't have asked for it. And if the criticism is done constructively and rationally, I do not think it would be inappropriate for this forum, especially since I asked for it.

    I wasn't intending to offend. I was happy to reply, otherwise I wouldn't have, either. I like talking about things like this, particularly to someone as excited as you.

    I said critiquing your specific feature list would be a waste of time because 1) you won't get to most of those features for a long time if you are truly developing incrementally, so criticism would be premature, and 2) I'm not an author anyway, so my opinion on the relevance to writing a book is nearly worthless. But I do have relevant opinions on the development approach implied by that huge list, so that's what I focused on.

    Furthermore, I said it would be inappropriate to this forum because it by itself is not related to the stated purpose of this forum, which is to "trade information about Objectivism and discussion about its applications." Thus I posted only because I could apply Objectivist principles to software development. I have been corrected and "uncorrected" about this before, so I am not sure what is and isn't allowed.

    I hope that removes any hint I might have given that my intent was to argue against your project or your discussing it here.

    Wrong, it won't be just another Notepad if I "figure out what is the absolute bare minimum I need to develop to make something usable by me alone" because most of those features are there because as an author they are useful to me.

    You may have interpreted the "Notepad" comment as a criticism or dismissal, but it was neither. The plain truth of the matter is that you will need all of Notepad's features (basic text editing, find/replace, word wrap) before you can add any author-specific features, because writing a book is, at its lowest level, writing text. That is the reason I recommended shamelessly striving for a trivial text editor (colloquially referred to as "Notepad") as version 0.1--it wasn't because all you will ever do is "just another Notepad." To misquote Francisco: "It is against the sin of overly large expectations that I wanted to warn you."

    Whether it takes you 1 day or 1 month to get a simple text editor depends on your skill level, but the fact that you have to have a small working program before you have a large working program does not. Maybe you know better, but you'd be surprised how many newcomers think they can somehow skip making a fully tested but "boring" simple program because they are so eager to get to the "fun stuff" like all the features that will set their program apart.

    While I agree in principle that a programmer has to work their way up, I think that "Hello World" is an unnecessarily simple start.

    Then you have not understood the principle of Hello World. It's so pervasive because everyone thinks that making a program say "Hello world" is too small to bother with--before they've written it. (It is also partly a holdover from a bygone era, in which compiling and linking C was more difficult than it is with today's IDEs.)

    The principle of it is that one should write the smallest possible thing that compiles as one's first attempt at a new language or platform. (The concrete example of actually printing a single line of "Hello world" just happens to be the smallest detectable success for C-style programs.) If you've coded in C or C++ before, you don't start with Hello World for every new project. But since apparently you haven't, you will at some point write your very first program that you expect to compile and run, and there's no need to make that more complicated than it has to be. In fact, take a look at your tutorial and tell me if it doesn't start out with something small and trivial so it can focus on the basic build step ("code, compile, test").
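    For what it's worth, here is that smallest possible first program in VB.NET (my assumption--use whatever language your tutorial actually covers):

    Module HelloWorld
        Sub Main()
            ' The smallest detectable success: compile, run, see one line.
            Console.WriteLine("Hello world")
        End Sub
    End Module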

    Anyhow, good luck with your project. It sounds like you've put a lot of thought into it and are eager to get started.

  6. I have decided to give my word processor the working name of Authorial Word. I have decided on many features and plug-ins and here they are:

    <snip far too many features>

    It would be a waste of our time and inappropriate for this forum for me to critique your feature list, so let me offer some general advice. (My qualification: I have been developing software for over 10 years.) Your feature list, not counting the plugins, would require at least 10 years' development for a single experienced developer. Since I don't think you want to wait that long before you have something useful, I'd like to suggest a different approach than laying out all the features you want:

    Incremental development

    Figure out what is the absolute bare minimum you need to develop to make something usable by you alone. (No one else will want to use it at this stage.) Now take that bare minimum and throw out the least important 75% of it. If you do this right, this will be so small it will hardly seem worthwhile. (At this point, it won't be much more than Notepad.) The goal is to make a program that you can start using as soon as you possibly can, so you can begin "eating your own dogfood," as they call it. Then, add in a single feature at a time (namely, the one you most personally wish you had, given your use of this program to write your book), always keeping the program fully functional.
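    To make "not much more than Notepad" concrete, here is a sketch of what such a version 0.1 could amount to, assuming VB.NET and Windows Forms (my assumption, not a prescription): one window containing one text box, and nothing else.

    Imports System.Windows.Forms

    Module MinimalEditor
        <STAThread()> Sub Main()
            ' One window, one editable text area: the entire version 0.1.
            Dim editorForm As New Form()
            editorForm.Text = "Version 0.1"
            Dim editArea As New TextBox()
            editArea.Multiline = True
            editArea.Dock = DockStyle.Fill
            editorForm.Controls.Add(editArea)
            Application.Run(editorForm)
        End Sub
    End Module

    Loading and saving files, find/replace, and the author-specific features then get added one at a time, with the program staying usable throughout.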

    To tie this in to philosophy: most methods of software development suffer from ivory-tower rationalism, the error that proper principles (specifically, proper software organization) can be developed without any experience (specifically, any actual programming). This leads to the "waterfall model" (http://en.wikipedia.org/wiki/Waterfall_model) or "big design up front." Naturally, the opponents (the "extreme programmers") tend to suffer from empiricism to some degree: the notion that all design (i.e., principles) is worthless and you should code without any planning.

    Your best bet is an objective method: a method that starts with observation of the facts, then moves to generalization to abstract principles and application of those principles to new situations. Applying that principle to software development leads to starting with first-level concepts, so to speak: getting a "hello world" program (http://en.wikipedia.org/wiki/Hello_world_program) to work. Observing what works and what doesn't work will lead you to further principles you can apply (such as the "principle of least surprise," abstraction, encapsulation, etc). You can then apply a principle you induced from one set of concretes (like "information hiding," http://en.wikipedia.org/wiki/Information_hiding) to new concretes, saving you from making the same errors over and over again.
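    As a tiny illustration of one such principle (my example, in VB.NET): information hiding means callers depend on what a type does, never on how it stores its data, so the storage can change later without breaking them.

    Imports System.Collections.Generic

    Public Class Manuscript
        ' The storage is private: no caller can depend on it, so it could
        ' change later (say, to a file-backed buffer) without breaking anyone.
        Private words As New List(Of String)

        Public Sub Add(word As String)
            words.Add(word)
        End Sub

        Public ReadOnly Property WordCount As Integer
            Get
                Return words.Count
            End Get
        End Property
    End Class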

    Contrary to empiricism, you can develop principles that will help you plan your next project. But, contrary to rationalism, you won't be able to know what are proper principles and what are improper until they are tested in the real world, so to speak. Incremental development is the best way to ensure your success.

  7. So how can an information-less statement be used to imply any facts about reality?

    It states an aspect of reality that is implicit in the nature of being conscious. Its utility is therefore in making this heretofore implicit truth explicit, so you can deliberately avoid contradicting it. And although it is implicit in every statement ever made by anyone, it nonetheless deserves an explicit formulation, which cannot be condemned because it is "obvious."

    In truth, I am not sure what your point is. You don't seem to be saying that nothing exists, so why are you so convinced that "something exists" is meaningless? You imply that the claim that "nothing exists" is wrong, so doesn't that mean the claim that something exists is right?

  8. Very well executed studies of genetically identical people show that genetics is a huge factor (not entirely of course) but THE SINGLE MOST IMPORTANT factor determining intelligence and various other personality traits. This alone refutes the ridiculous notion that man is born with a blank slate mind, among other things. This is simply false in the face of evidence.

    I suspect you may not know what the term "blank slate" refers to. In trying to explicate to myself what was wrong with your conclusion, I came up with the following analogy. Consider the term "blank film" instead of "blank slate." In saying that we are born "blank film," we mean that the film is unexposed; it has no images or content on it. However, this does not mean that all film is alike. You can get very high quality film that will reproduce images with near-perfect fidelity (which you can use to take pictures of worthy scenes or random garbage, as you wish). You can also get cheap film which will only record grainy, low-resolution images (with which, again, you can choose to capture great scenes or lousy ones). Just as obviously, this distinction between high-quality and low-quality film does not mean that high-quality film comes with pictures already on it.

    In this analogy, the film represents a baby's mind at birth: with certain properties like intelligence already set at a certain level, but no content whatsoever. That remains for the child to fill in as he chooses what to "take pictures of," so to speak.

    In short: A baby's mind is as devoid of ideas as a new roll of film is devoid of images.

  9. Now I need some clarification. I haven't actually read Hume, just other people's references to him, so I may be misunderstanding things a bit. It is my understanding that he didn't necessarily attack metaphysical causality; rather, he said that we can not make an epistemological inference of causality through observation. I was under the impression that he didn't deny causes exist, just that we can't know them. That's why I say he attacked induction, rather than causality as such. Is my understanding incorrect?

    (Caveat: I am not a professional philosophical historian. I don't even watch them on TV.) Executive summary: he said primarily that we cannot possibly know causation, but it follows pretty quickly for him and his followers that it doesn't exist.

    It has been over a decade since I read Hume, so I decided to check my claim. I don't have any primary sources by Hume handy, so I will have to make do with a web search. From wikipedia, which I am quoting because 1) it tends to be most accurate when it summarizes a school of thought, rather than asserting whether that school is correct, and 2) the author(s) clearly agree with Hume:

    When one event causes another, most people think that we are aware of a connection between the two that makes the second event follow from the first. Hume challenged this belief, noting that whereas we do perceive the two events, we don't perceive any necessary connection between the two. And how else but through perception could we gain knowledge of this mysterious connection? Hume denied that we could have any idea of causation other than the following: when we see that two events always occur together, we tend to form an expectation that when the first occurs, the second will soon follow. This "constant conjunction" and the expectation thereof is all that we can know of causation, and all that to which our idea of causation can amount.

    From the Catholic Encyclopedia:

    Having previously reduced mind to no more than a succession of perceptions, he declares: "To me there appear to be only three principles of connection among ideas, namely, Resemblance, Contiguity in time or place, and Cause or Effect" (Works, IV, 18). Thus, for Hume, causality is no more than a relation between ideas. It is not an a priori relation, "but arises entirely from experience, when we find that any particular objects are constantly conjoined with each other" (ibid., 24). However, "we can never comprehend any force or power, by which the cause operates, or any connection between it and its supposed effect. The same difficulty occurs in contemplating the operations of mind on body.... So that, upon the whole, there appears not, throughout all nature, any one instance of connection, which is conceivable by us" (ibid., 61 sqq.). Whence, then, does our conception of cause come? Not from a single observed sequence of one event from another, for that is not a sufficient warranty for us to form any general rule, but from the conjunction of one particular species of event with another, in all observed instances. "But there is nothing", he writes,

    in a number of instances, different from every single instance, which is supposed to be exactly similar; except only, that after a repetition of similar instances, the mind is carried by habit, upon the appearance of one event, to expect its usual attendant, and to believe that it will exist.... When we say, therefore, that one object is connected with another, we mean only, that they have acquired a connection in our thought, and give rise to this inference, by which they become proofs of each other's existence (p. 63)

    Hence Hume defines cause as that object, followed by another, "where, if the first object had not been, the second would never have existed", or "an object followed by another, and whose appearance always conveys the thought to that other" (ibid.). In this doctrine Hume advances a psychological explanation of the origin of the idea (habit), but inculcates an utter scepticism as to the reality of causation.

    From this and some other reading I did, I would say that he rejects not only our knowledge of necessary causation (epistemologically), but the fact of it as well (metaphysically). Naturally he still speaks of "cause," but to him it means something entirely different than to Aristotelians/Objectivists: he means a mental construct arising from "constant conjunction," which holds only for as long as we have observed it, whereas we mean "identity in action," which is therefore universal.

    Thus the chickens come home to roost when it comes to induction:

    Most of us think that the past acts as a reliable guide to the future. For example, physicists' laws of planetary orbits work for describing past planetary behavior, so we presume that they'll work for describing future planetary behavior as well. But how can we justify this presumption – the principle of induction? Hume suggested two possible justifications and rejected them both:

    The first justification states that, as a matter of logical necessity, the future must resemble the past. But, Hume pointed out, we can conceive of a chaotic, erratic world where the future has nothing to do with the past – or, more tamely, a world just like ours right up until the present, at which point things change completely. So nothing makes the principle of induction logically necessary.

    The second justification, more modestly, appeals only to the past reliability of induction – it's always worked before, so it will probably continue to work. But, Hume pointed out, this justification uses circular reasoning, justifying induction by an appeal that requires induction to gain any force.

    So since induction requires determination of the underlying cause, Hume naturally rejects induction as a consequence of his view of causality.

    As a bonus, the analytic/synthetic dichotomy rears its ugly head here, in the words "logically necessary." I have to give credit here to Dr. Peikoff's deep but worthwhile essay in ITOE for my understanding of how a denial of identity (and its corollary, causality) is at the heart of the dichotomy.

  10. Well, I have to say, David, that your familiarity with formal logic is both appealing and distracting: it's good to see someone who is clearly knowledgeable enough not to throw out the baby (formal notation and rigor) with the bathwater (the philosophy behind modern formal logic). On the other hand, it can be hard to read for someone whose last use of formal logic was in college a decade ago. So let me try to translate:

    From a formal POV, induction refers to something like the rule inductive generalization, which introduces a universally quantified proposition. The main obstacle to integrating induction into formal inference has been the problem of deriving ^Ex(P(x))

    Given that you talk about a universally quantified proposition, I would expect you to refer to deriving:

    for all x, P(x) is true

    Granted the upside-down A is hard to do in plain text, but typing in "& forall;" (without the space or quotes) and previewing twice does it: ∀ (at least in the Opera browser). But since you have what appears to be ∃ (& exist;) as a forwards E and ^ as a negative, I get:

    there is no x such that P(x) is true

    This is identical to

    for all x, ^P(x)

    But usually one does not express things in the negative: Newton's third law is not "there is no action that has an unequal or non-opposite reaction," even though it is logically identical. Thus I will assume you mean "for all x, P(x)."

    Here's my claim: from a formal point of view, "induction" is a rule of inference which introduces ^Ex(P(x)) when in a set of propositions that define a (knowledge) context P(y) is not present (for any value y).

    In plain English, I would expect this means that induction introduces a new universal proposition P that cannot be deduced from the existing known propositions. (If it can be so deduced, you get the flaw I referred to earlier, in that mathematical induction is not introducing new propositions, but extracting truths that derive from pre-existing propositions.)

    The rest is garden variety deductive logic. The problem from the classical POV is that you don't take into consideration such a thing as "the set of propositions admitted to be true" (a knowledge context), but once you have such a concept, it's obvious that introducing ^Ex(P(x)) when P(x) is not true does state the same kind of truth as AV^A.

    I could speculate on what this means, but several different interpretations I made all yielded contradictions, so I will just have to let you tell me. (I know AV^A is intended to be a tautology because it must be "true OR false" or "false OR true.")

    There is a point which I used to carp on incessantly, that in a formal system, induction and deduction are not different in an earth-shattering way: both are inferences.

    I am coming to believe this. I should probably get Dr. Peikoff's lectures on this to find out more on how "induction is measurement omission applied to causality."

    Addendum: I get blocks instead of the "forall" and "exists" symbols in Internet Explorer, which I expect many forum members use. Sorry.

  11. Does Hume not object to causality itself, on the grounds that past cause/effect observation is not solid evidence that in the future the cause/consequence chain will remain valid?

    I believe that is his fundamental objection, yes: that in perceiving A perform some action B, one only sees that it happened one particular way--one does not perceive any "necessity" about it anywhere. Thus, since things can be some way without necessarily being that way, one cannot say how they will be in the next instant. Thus he really attacked causality, like you say, and not induction per se.

  12. Complete induction derives the truth of the statement by testing it for every single n.

    But "testing it for every single n" is exactly what a proof by induction does not do. That would mean, for the claim that 1 + 3 + 5 + ... + (2n-1) = n^2, you would have to compute the sum of all odd integers from 1 to 2n-1 and see that the computation was equal to n^2, for every single n. In a proof by induction, you do this just once, for the base case. The equation is shown to be true for every n, but it is not tested for each n.

    If you tried this in biology you would have to look at every single bird and look if it has the same characteristics as the last bird. Then the next, then the next ...

    I hate to contradict you again, but this is what science does when it uses induction--only it no more looks at "each bird" than a mathematician evaluates the formula above for "each n." How do we know that man is mortal? Certainly not by looking at every single man to see if it has the same characteristics as the last man, then the next, then the next...

  13. But the more I think about it, the more I think there is fair cause for calling that induction.

    Boy, there's nothing like posting something to a public forum for making you question your assumptions. In particular, both my examples were the application of wider principles to a specific context (algebraic manipulation in one and Newtonian mechanics in the other). Thus I can see how they could be called deductive, rather than inductive. The truly inductive conclusion would be, say, arriving at Newtonian mechanics in the first place.

    I still welcome feedback, but I will have to think more about this.

  14. Mathematicians tend to eschew observation and induction (except for "mathematical induction" which is really a type of deductive inference), so they would be exceptions.

    You know, that's something I've been wondering about lately. I used to be bothered by the term "induction" used for the mathematical method of proof that consists of the following steps:

    1. Show something is true for a base case (say, n = A).

    2. Show that if something is true for n, then it is also true for n+1.

    3. By extension, since it is true for A, A+1, A+1+1, etc, it is true for all n >= A.

    But the more I think about it, the more I think there is fair cause for calling that induction. Specifically, the essential aspect of the proof is step 2: showing that the truth for n+1 follows directly from the truth of n and the mathematical properties of numbers. And, in the end, we have a wide claim for all n that has only been directly observed for some n. This means that mathematical induction is ladder-like, in that there is a starting point directly observed and a "link" in a causal chain. Not all induction has this form, so mathematical induction is a subset of induction in general.
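    For a concrete instance of those three steps (my example, using another standard identity): to prove 1 + 2 + ... + n = n(n+1)/2, step 1 checks n = 1 (the sum is 1, and 1*2/2 = 1); step 2 shows that if the identity holds for n, then adding n+1 gives n(n+1)/2 + (n+1) = (n+1)(n+2)/2, the identity for n+1; step 3 then carries the result up the "ladder" to every n >= 1.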

    Furthermore, some inductive conclusions seem to have this mathematical form. For instance, the (proper) claim that the sun will rise every 24 hours would take the "base case" that the Earth is observed to have a certain rotational velocity (by measuring the time elapsed from noon to noon). Then we show that, by the laws of mechanics, rotation will continue unabated since there is no external force acting to slow down the rotation. Thus, we know that the sun will appear to rise every day in the future. In a sense, the laws of mechanics fill in for #2, that is, if there is a celestial body with a given rotational velocity and orientation, then N hours later it will be in the same state again. Direct observation of the celestial positions today and tomorrow gives us #1, and it is then a deductive inference that #3 will hold. (Naturally, there is actually some slowing down that occurs, so you would have to speak of measurable deviation from the current length of "one day.")

    Although I am pretty convinced of the validity of this view, I definitely welcome comments or criticism. I don't want to commit the fallacy of "hasty generalization" myself. :D

  15. So my dilemma is, should i take some of these classes just to check out other philosophies, or would it just be a waste of my time?

    I would answer this by looking at the historical content of the classes. I took one philosophy class in college and it was useful because it was History of Modern Philosophy (which meant Descartes, Locke, Hume and Kant). Regardless of how bad the philosophy is, it is valuable to know what certain philosophers said, what they meant, and what historical context they operated in. Furthermore, real philosophers before the 20th century were trying to solve real problems and were almost always quite intelligent, and thus worth your time to grapple with--even if you end up disagreeing with them.

    On the other hand, I would unequivocally skip any class (if you can) that amounts to bull sessions where the students themselves debate philosophical topics (for instance, "what is ethical?"). You will not reach any minds in that class with your own arguments, nor will you hear any arguments worth refuting.

  16. Thank you. I attempted to do what you suggested -- read through the threads about induction -- and WOW! How does one separate the wheat from the chaff?

    The same way you separate the wheat from any chaff--lots of thinking and introspecting. :D I wasn't trying to claim you could read them once and understand everything. I remember reading the first thread on it over a year ago, and being shocked to read for the first time that induction was simply not counting swans or sunrises and generalizing into the future. It wasn't until I came back to this forum several months ago that the proper approach started to make some sense (the notion of "causality" is key).

    How about a good book on the subject?

    Sadly, I don't think there is a book (yet) that is better than the combined postings of the OO.net members (some more than others, of course).

    I thought that induction was basically the process of taking observed individual occurrences and making a broad true statement about them.

    Well, it is, if you stretch the word "basically" far enough. There are many different ways of "taking observed individual occurrences and making a broad true statement about them," and not all of them are valid logical methods. (The fallacies of "hasty generalization" and "post hoc ergo propter hoc" come to mind, for instance.)

  17. As far as induction goes … how many times must we observe something before we can conclude with certainty that it will act the same way again?

    This is a mistaken notion of induction. Have you read all the threads on induction on this forum? (You can search for threads whose titles contain "induction" with the search feature.) I highly suggest those if you are interested in understanding the error in your question. In particular, the argument that induction is a method based on causality (not counting) has been made several times. If there are any specific parts of the argument that you don't understand or disagree with, I'd be happy to elaborate further.

  18. A.R.(VOS_p22): "The process of concept-formation does not consist merely of grasping a few simple abstractions, such as "chair," "table," "hot," "cold," and of learning to speak. It consists of a method of using one's consciousness, best designated by the term "conceptualizing." It is not a passive state of registering random impressions. It is an actively sustained process of identifying one's impressions in conceptual terms, of integrating every event and every observation into a conceptual context, of grasping relationships, differences, similarities in one's perceptual material and of abstracting them into new concepts, of drawing inferences, of making deductions, of reaching conclusions, of asking new questions and discovering new answers and expanding one's knowledge into an ever-growing sum. The faculty that directs this process, the faculty that works by means of concepts, is: reason. The process is thinking." (Bold added by RSalar.)

    Yes, she says deduction, but right before that she says "drawing inferences," which is induction. Both are essential aspects of reasoning. Also, consider that she is not enumerating strictly separate methods, but poetically stating and restating different aspects of reasoning. (For instance, "reaching conclusions" can be either deduction or induction based on whether you are reaching a wider generalized conclusion or a narrower one about a particular concrete.) Then she summarizes all this by naming it "reasoning" (or "thinking," which is synonymous in this context).

    The error about "deducing reality," at least as used in an Objectivist context, refers to the error of starting with the axioms and deducing every other piece of knowledge. On this premise of rationalism, you could deduce that man has rights knowing nothing more than that A is A. (This is a very common approach among math-oriented people.) Now it is true that you must rely on the truth that A is A to properly arrive at any conclusion, but that does not mean that that is all you need. You need to look at the facts of any given context to induce conclusions about those contexts. This means in the case of "man has rights," for instance, you need to consider all the relevant facts about man, none of which is implied in "existence exists."

  19. But it goes even deeper than that. Happiness isn't a primary -- the question is, why does your child's welfare make you happy?

    You're right that happiness is not an end-in-itself either. I should have been more clear that my point was only to show by hunterrose's own justification ("it might just be that a person is happy by helping his child") that caring about a child's welfare relied on something more fundamental (namely, happiness). But, as you say, happiness is still not the end-in-itself we are looking for.

  20. I only meant a child's welfare to be a possible end-in-itself, not necessarily an ultimate value.

    As far as why a child's welfare might be an end-in-itself (taking an ultimate value to be a person's only end-in-itself,) it might just be that a person is happy by helping his child. If such a person values nothing else as an end-in-itself, the child's welfare would also be the ultimate value.

    But you've given the answer yourself: by saying a person values a child's welfare because it makes that person happy, you've made that person's happiness a more fundamental value than the child's welfare. In other words, the reason the person aids the child is because it makes the person more happy--so the child's welfare is not an end-in-itself.

  21. Doesn't this mean that there are some areas where consciousness can be made to alter existence, even if that part of existence is inside my body?

    Of course. Did anyone say otherwise?

    Also we can use drugs to alter our emotions as well: administering chemicals to the brain can affect subjective experience (consciousness). Therefore, there are ways for existence to alter consciousness also.

    Again, of course. This is what I meant by my "damaged spinal cord" example. Damaging one's brain is no different.

    But if atoms are purely physical objects, with nothing but physical properties and physical relations to one another, and my consciousness affects and is affected by atoms, doesn't this mean my consciousness is purely physical?

    This doesn't follow. If it did, let me ask you a question: what is the molecular weight of consciousness? What is its reflectivity, or its electrical charge, or its density? Are these meaningful questions about consciousness?

    Another way to think of this is to imagine a computer, which runs programs on physical hardware. This program interacts with the hardware through strictly physical means (specifically, electrical voltage and current), but does this mean the program is purely physical? If so, what is the physical size (in inches) of a program? What is its temperature? I can tell you those about the hardware that makes up a computer (which corresponds to your body and brain), but it doesn't even make sense to ask those about a computer program (which corresponds, in some ways, to your mind--that is, consciousness).

  22. I have read just about every published work of Ayn Rand. That does not mean that I understand it all, nor does it mean that even if I was to read a particular one ten more times I would understand it. Based on the posts here I would venture to say that most people here may think they understand her but do not fully. Her work on concept formation via measurement omission is not exactly an easily grasped concept. I am here to ask questions and hopefully glean a little insight. I do not take any of these posts as the definitive authorized Ayn Rand position. And I do not claim to be stating the official Ayn Rand position.

    The idea that proper names are not concepts confuses me. That does not mean I am here to disagree with Rand's position. I may challenge it to see if there is someone here who is able to clarify her position so that it makes sense to me. Even if you or the other posters are unable to show me why she was right I will continue to believe that she was probably correct (because on everything else she was right) but will not hesitate to challenge her position to see if it will stand up to tough analytical scrutiny.

    You have misunderstood my point. I was most emphatically not stating that you should blindly agree with her, nor that disagreement is frowned on (in fact, disagreement--at least temporarily--is the whole point of this forum). Furthermore, I was not stating that you should read AR "ten more times" until you understand it (that would be intrinsicism--that the words themselves can implant themselves as truth in your mind). My point was that you present yourself as familiar with her works, and yet appear to be unaware of a very basic position she takes (namely, that concepts are integrations of more than one existent).

    Let me demonstrate:

    I was thinking that a concept was simply something formed in the mind--a thought, notion or mental construct. But you are saying that some mental constructs, specifically names, are not concepts. Is that the official Objectivist position?

    (First, let me point out that this was not Fred's position--his position was that proper nouns are not concepts. But your later writing indicates you understand this distinction.)

    It's impossible to debate anything with someone who declines any suggestions to (re)read AR's standard work on a topic, then asks if some notion (which, it turns out, she states on page 10 of ITOE--six pages into the work) is the "official Objectivist position." How can we judge when you are unaware of her position or disagreeing with her? How do we know if the answer is "read this book" or an attempt to demonstrate the truth of the issue? For that matter, even if we assume you are disagreeing with her and proceed to answer your question, how do we know which part you disagree with?

    Lest you think this is just an issue with proper nouns, the same problem crops up when you ask whether you can have a concept without giving it a name. In the second edition of ITOE, the appendix gives 10 pages of discussion on this exact question. If you do not know about that section, the best answer I could give is "read the chapter 'The Role of Words,' pp 163-174," because there's little I could say as a general answer that would be any better than that discussion. On the other hand, if you have read it and then disagree with some point, I would expect you would say something along the lines of, "AR says on page X that Y is true" and follow up with either, "I don't understand this. Why?" or "This is wrong, because Z." Either lets us know you have already read the relevant section, so that we know your context of questioning.

    On a personal note: you strike me as a very honest, active-minded person. Your questions and understanding are much more advanced than those of most people that we tend to get here. (In fact, you correctly pointed out a flaw in my argument in my first post to this thread.) By far the most profitable use of your time would be to study ITOE first when you have some question on these issues, and then come here for follow-up debate or critique. Again, I am not suggesting you cannot disagree. It's just that the question that opened the thread (what measurements are omitted in 'inch'?) was a much better question for debate on this forum because AR did not give a definitive well-explained answer on it, and I for one have had problems seeing the answer.

    I hope this made it clear that I welcome those who "challenge her position" or "understand her but not fully."
