Objectivism Online Forum

dougclayton
Regulars · 152 posts

Everything posted by dougclayton

  1. You are right that everyone is better off living in a free capitalist country, but it is not true that "improving society" is the altruist point of view. If that were the case, they would have adopted capitalism a long time ago. I know that's what they pay lip service to, but that isn't what they are actually after.
  2. Yes, that is what the stolen concept fallacy is. That is why Rand called it "assuming that which you are attempting to disprove." Also, I can't speak for Dave, but I believe that he means (my words inserted): In other words, in reductio ad absurdum, you start with the claim and show how it logically contradicts an independently known truth. In the stolen concept fallacy, you rely on concepts derived from the stolen concept as a means of denying it. There is no "assume that..." step, and no "but this contradicts what we already know" conclusion. To take the classic example of "property is theft," one reductio ad absurdum argument against property might run as follows:

     1. Assume property (that is, exclusive use of some material good for an individual) is good.
     2. Eventually one person will own everything and let everyone else starve, which must be good.
     3. The human race will become extinct when he eventually dies of old age, which must be good.
     4. But human extinction is bad, so the initial assumption (that property is good) is wrong.

     As threadbare and false as this argument is, note that it still moves from an assumption of property to a contradiction of a truth that is not about property. Consider instead the opening argument of Proudhon's treatise, What Is Property? An Inquiry into the Principle of Right and of Government (http://dhm.best.vwh.net/archives/proudhon-ch1.html): (By "property," he is actually referring to "unused land that you rent out," but his failure to essentialize his concepts properly does not change the fundamental error.) For this to be reductio ad absurdum, you'd have to start with "property is good" and then show that it leads to a conclusion independently known to be false. Thus, if this were a reductio, his absurdity would have to be "and the robbery that this transformation yields is good," but that is not independently known to be false.
Furthermore, it cannot be independently known to be false, because in fact the concept "robbery" is dependent on "property"--and ignoring this strict dependence is exactly the fallacy. Note too that he does not commit the fallacy in his claim about slavery, which can be seen just by the fact that murder is not a concept based on slavery. (To match his second claim, his first would have to be, "Life is murder.") Is that at all useful in illustrating the difference?
  3. That's funny, that was going to be my question. One warning: if you are planning to get statistical-grade results, and your features suggest you are, forget about Rnd (and pretty much any other standard VB functions). They're terrible.
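A rough sketch of the kind of bare-minimum statistical sanity check meant here, in Python rather than VB (the function name `mean_of_draws` is mine, purely for illustration). Note that even a weak generator will pass a test this crude, which is exactly why passing it proves little:

```python
import random

# Bare-minimum sanity check: the mean of many uniform [0, 1)
# draws should be close to 0.5. Weak generators typically pass
# this but fail stronger tests (period length, serial
# correlation), so treat it as a floor, not a certification.
def mean_of_draws(n, rng=random.random):
    return sum(rng() for _ in range(n)) / n

m = mean_of_draws(100_000)
assert abs(m - 0.5) < 0.01  # many standard deviations of slack
```

Python's random module uses the Mersenne Twister, which is far better suited to statistical work than the short-period generator behind classic VB's Rnd.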
  4. You know, I'd wager that Jennifer knows how to compute the percentage change in some variable. I don't think that's what she was asking.
  5. While I'm glad to see you do plan to develop one single feature at a time, it seems as if you have misunderstood parts of my post. I wasn't intending to offend. I was happy to reply, otherwise I wouldn't have, either. I like talking about things like this, particularly to someone as excited as you. I said critiquing your specific feature list would be a waste of time because 1) you won't get to most of those features for a long time if you are truly developing incrementally, so criticism would be premature, and 2) I'm not an author anyway, so my opinion on the relevance to writing a book is nearly worthless. But I do have relevant opinions on the development approach implied by that huge list, so that's what I focused on. Furthermore, I said it would be inappropriate to this forum because it by itself is not related to the stated purpose of this forum, which is "trade [about] information about Objectivism and discussion about its applications." Thus I posted only because I could apply Objectivist principles to software development. I have been corrected and "uncorrected" about this before, so I am not sure what is and isn't allowed. I hope that removes any hint I might have given that my intent was to argue against your project or your discussing it here. You may have interpreted the "Notepad" comment as a criticism or dismissal, but it was neither. The plain truth of the matter is that you will need all of Notepad's features (basic text editing, find/replace, word wrap) before you can add any author-specific features, because writing a book is, at its lowest level, writing text. That is the reason I recommended shamelessly striving for a trivial text editor (colloquially referred to as "Notepad") as version 0.1--it wasn't because all you will ever do is "just another Notepad." To misquote Francisco: "It is against the sin of overly large expectations that I wanted to warn you." 
Whether it takes you 1 day or 1 month to get a simple text editor depends on your skill level, but the fact that you have to have a small working program before you have a large working program does not. Maybe you know better, but you'd be surprised how many newcomers think they can somehow skip making a fully tested but "boring" simple program because they are so eager to get to the "fun stuff" like all the features that will set their program apart. Then you have not understood the principle of Hello World. It's so pervasive because everyone thinks that making a program say "Hello world" is too small to bother with--before they've written it. (It is also partly an artifact of a bygone era, in which compiling and linking a C program was harder than it is with today's IDEs.) The principle of it is that one should write the smallest possible thing that compiles as one's first attempt at a new language or platform. (The concrete example of actually printing a single line of "Hello world" just happens to be the smallest detectable success for C-style programs.) If you've coded in C or C++ before, you don't start with Hello World for every new project. But since apparently you haven't, you will at some point write your very first program that you expect to compile and run, and there's no need to make that more complicated than it has to be. In fact, take a look at your tutorial and tell me if it doesn't start out with something small and trivial so it can focus on the basic build step ("code, compile, test"). Anyhow, good luck with your project. It sounds like you've put a lot of thought into it and are eager to get started.
  6. It would be a waste of our time and inappropriate for this forum for me to critique your feature list, so let me offer some general advice. (My qualification: I have been developing software for over 10 years.) Your feature list, not counting the plugins, would require at least 10 years' development for a single experienced developer. Since I don't think you want to wait that long before you have something useful, I'd like to suggest a different approach from laying out all the features you want: incremental development.

     Figure out what is the absolute bare minimum you need to develop to make something usable by you alone. (No one else will want to use it at this stage.) Now take that bare minimum and throw out the least important 75% of it. If you do this right, what remains will be so small it will hardly seem worthwhile. (At this point, it won't be much more than Notepad.) The goal is to make a program that you can start using as soon as possible, so you can begin "eating your own dogfood," as they call it. Then add a single feature at a time (namely, the one you most personally wish you had, given your use of this program to write your book), always keeping the program fully functional. To tie this in to philosophy: most methods of software development suffer from ivory-tower rationalism, the error of believing that proper principles (specifically, proper software organization) can be developed without any experience (specifically, any actual programming). This leads to the "waterfall model" (http://en.wikipedia.org/wiki/Waterfall_model) or "big design up front." Naturally, the opponents (the "extreme programmers") tend to suffer from empiricism to some degree--the notion that all design (i.e., principles) is worthless and you should code without any planning.
Your best bet is an objective method: a method that starts with observation of the facts, then moves to generalization to abstract principles and application of those principles to new situations. Applying that principle to software development leads to starting with first-level concepts, so to speak: getting a "hello world" program (http://en.wikipedia.org/wiki/Hello_world_program) to work. Observing what works and what doesn't work will lead you to further principles you can apply (such as the "principle of least surprise," abstraction, encapsulation, etc). You can then apply a principle you induced from one set of concretes (like "information hiding," http://en.wikipedia.org/wiki/Information_hiding) to new concretes, saving you from making the same errors over and over again. Contrary to empiricism, you can develop principles that will help you plan your next project. But, contrary to rationalism, you won't be able to know what are proper principles and what are improper until they are tested in the real world, so to speak. Incremental development is the best way to ensure your success.
  7. It states an aspect of reality that is implicit in the nature of being conscious. Its utility is therefore in making this heretofore implicit truth explicit, so you can deliberately avoid contradicting it. And although it is implicit in every statement ever made by anyone, it nonetheless deserves an explicit formulation, which cannot be condemned because it is "obvious." In truth, I am not sure what your point is. You don't seem to be saying that nothing exists, so why are you so convinced that "something exists" is meaningless? You imply that the claim that "nothing exists" is wrong, so doesn't that mean the claim that something exists is right?
  8. Could you explain to me what it means to base an argument on a definition that is circular and makes the argument unquestionable?
  9. I suspect you may not know what the term "blank slate" refers to. In trying to explicate to myself what was wrong with your conclusion, I came up with the following analogy. Consider the term "blank film" instead of "blank slate." In saying that we are born "blank film," we mean that the film is unexposed; it has no images or content on it. However, this does not mean that all film is alike. You can get very high quality film that will reproduce images with near-perfect fidelity (which you can use to take pictures of worthy scenes or random garbage, as you wish). You can also get cheap film which will only record grainy, low-resolution images (with which, again, you can choose to capture great scenes or lousy ones). Just as obviously, this distinction between high-quality and low-quality film does not mean that high-quality film comes with pictures already on it. In this analogy, the film represents a baby's mind at birth: with certain properties like intelligence already set at a certain level, but no content whatsoever. That remains for the child to fill in as he chooses what to "take pictures of," so to speak. In short: A baby's mind is as devoid of ideas as a new roll of film is devoid of images.
  10. (Caveat: I am not a professional philosophical historian. I don't even watch them on TV.) Executive summary: he said primarily that we cannot possibly know causation, but it follows pretty quickly for him and his followers that it doesn't exist. It has been over a decade since I read Hume, so I decided to check my claim. I don't have any primary sources by Hume handy, so I will have to make do with a web search. From Wikipedia, which I am quoting because 1) it tends to be most accurate when it summarizes a school of thought, rather than asserting whether that school is correct, and 2) the author(s) clearly agree with Hume: From the Catholic Encyclopedia: From this and some other reading I did, I would say that he rejects not only our knowledge of necessary causation (epistemologically), but the fact of it as well (metaphysically). Naturally he still speaks of "cause," but to him it means something entirely different than it does to Aristotelians/Objectivists: he means a mental construct that comes from "constant conjunction," unchanged only for as long as we have observed it, whereas we mean "identity in action" and therefore universal. Thus the chickens come home to roost when it comes to induction: So since induction requires determination of the underlying cause, Hume naturally rejects induction as a consequence of his view of causality. As a bonus, the analytic/synthetic dichotomy rears its ugly head here, in the words "logically necessary." I have to give credit here to Dr. Peikoff's deep but worthwhile essay in ITOE for my understanding of how a denial of identity (and its corollary, causality) is at the heart of the dichotomy.
  11. Well, I have to say, David, that your familiarity with formal logic is both appealing and distracting: it's good to see someone who is clearly knowledgeable enough not to throw out the baby (formal notation and rigor) with the bathwater (the philosophy behind modern formal logic). On the other hand, it can be hard to read for someone whose last use of this was in college a decade ago. So let me try to translate. Given that you talk about a universally quantified proposition, I would expect you to refer to deriving:

     for all x, P(x) is true

     Granted, the upside-down A is hard to do in plain text, but typing in "& forall;" (without the space or quotes) and previewing twice does it: ∀ (at least in the Opera browser). But since you have what appears to be ∃ (& exist;) as a forwards E and ^ as a negation sign, I get:

     there is no x such that P(x) is true

     This is identical to:

     for all x, ^P(x)

     But usually one does not express things in the negative: Newton's third law is not "there is no action that has an unequal or non-opposite reaction," even though that is logically identical. Thus I will assume you mean "for all x, P(x)." In plain English, I would expect this means that induction introduces a new universal proposition P that cannot be deduced from the existing known propositions. (If it can be so deduced, you get the flaw I referred to earlier, in that mathematical induction is not introducing new propositions, but extracting truths that derive from pre-existing propositions.) I could speculate on what this means, but several different interpretations I made all yielded contradictions, so I will just have to let you tell me. (I know Av^A is intended to be a tautology because it must be "true OR false" or "false OR true.") I am coming to believe this. I should probably get Dr. Peikoff's lectures on this to find out more on how "induction is measurement omission applied to causality."
Addendum: I get blocks instead of the "forall" and "exists" symbols in Internet Explorer, which I expect many forum members use. Sorry.
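For reference, the quantifier equivalence being discussed reads as follows in standard LaTeX notation (this is a restatement of the argument above, not an addition to it):

```latex
% "There is no x such that P(x)" is identical to "for all x, not P(x)":
\neg \exists x\, P(x) \;\equiv\; \forall x\, \neg P(x)
% And the tautology mentioned (A or not-A, the law of excluded middle):
A \lor \neg A
```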
  12. I believe that is his fundamental objection, yes: that in perceiving A perform some action B, one only sees that it happened one particular way--one does not perceive any "necessity" about it anywhere. Thus, since things can be some way without necessarily being that way, one cannot say how they will be in the next instant. Thus he really attacked causality, as you say, and not induction per se.
  13. But "testing it for every single n" is exactly what a proof by induction does not do. That would mean, for the claim that 1 + 3 + 5 + ... + (2n-1) = n^2, you would have to compute the sum of all odd integers from 1 to 2n-1 and see that the computation was equal to n^2, for every single n. In a proof by induction, you do this just once, for the base case. The equation is shown to be true for every n, but it is not tested for each n. I hate to contradict you again, but this is what science does when it uses induction--only it no more looks at "each bird" than a mathematician evaluates the formula above for "each n." How do we know that man is mortal? Certainly not by looking at every single man to see if he has the same characteristics as the last man, then the next, then the next...
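Just to make the formula concrete, here is a finite spot-check in Python (the helper name `sum_odds` is mine). Note that this is precisely the "testing each n" that a proof by induction renders unnecessary, and it only ever covers finitely many cases:

```python
# 1 + 3 + 5 + ... + (2n-1) should equal n^2.
def sum_odds(n):
    return sum(2 * k - 1 for k in range(1, n + 1))

# A finite spot-check, not a proof: it inspects each n up to a bound,
# which is exactly what the inductive proof avoids having to do.
assert all(sum_odds(n) == n * n for n in range(1, 1001))
```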
  14. Boy, there's nothing like posting something to a public forum for making you question your assumptions. In particular, both my examples were the application of wider principles to a specific context (algebraic manipulation in one and Newtonian mechanics in the other). Thus I can see how they could be called deductive, rather than inductive. The truly inductive conclusions would be, say, the laws of Newtonian mechanics in the first place. I still welcome feedback, but I will have to think more about this.
  15. You know, that's something I've been wondering about lately. I used to be bothered by the term "induction" used for the mathematical method of proof that consists of the following steps: 1. Show something is true for a base case (say, n = A). 2. Show that if something is true for n, then it is also true for n+1. 3. By extension, since it is true for A, A+1, A+1+1, etc, it is true for all n >= A. But the more I think about it, the more I think there is fair cause for calling that induction. Specifically, the essential aspect of the proof is step 2: showing that the truth for n+1 follows directly from the truth of n and the mathematical properties of numbers. And, in the end, we have a wide claim for all n that has only been directly observed for some n. This means that mathematical induction is ladder-like, in that there is a starting point directly observed and a "link" in a causal chain. Not all induction has this form, so mathematical induction is a subset of induction in general. Furthermore, some inductive conclusions seem to have this mathematical form. For instance, the (proper) claim that the sun will rise every 24 hours would take the "base case" that the Earth is observed to have a certain rotational velocity (by measuring the time elapsed from noon to noon). Then we show that, by the laws of mechanics, rotation will continue unabated since there is no external force acting to slow down the rotation. Thus, we know that the sun will appear to rise every day in the future. In a sense, the laws of mechanics fill in for #2, that is, if there is a celestial body with a given rotational velocity and orientation, then N hours later it will be in the same state again. Direct observation of the celestial positions today and tomorrow gives us #1, and it is then a deductive inference that #3 will hold. 
(Naturally, there is actually some slowing down that occurs, so you would have to speak of measurable deviation from the current length of "one day.") Although I am pretty convinced of the validity of this view, I definitely welcome comments or criticism. I don't want to commit the fallacy of "hasty generalization" myself.
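As a sketch, the three steps applied to the sum-of-odd-numbers formula from earlier in the thread look like this in LaTeX:

```latex
% Claim: 1 + 3 + \cdots + (2n-1) = n^2 for all n \ge 1.
% Step 1 (base case, n = 1):
1 = 1^2
% Step 2 (inductive step): assume \sum_{k=1}^{n} (2k-1) = n^2. Then
\sum_{k=1}^{n+1} (2k-1) = n^2 + \bigl(2(n+1)-1\bigr) = n^2 + 2n + 1 = (n+1)^2
% Step 3: by induction, the claim holds for every n \ge 1.
```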
  16. I would answer this by looking at the historical content of the classes. I took one philosophy class in college and it was useful because it was History of Modern Philosophy (which meant Descartes, Locke, Hume and Kant). Regardless of how bad the philosophy is, it is valuable to know what certain philosophers said, what they meant, and what historical context they operated in. Furthermore, real philosophers before the 20th century were trying to solve real problems and were almost always quite intelligent, and thus worth your time to grapple with--even if you end up disagreeing with them. On the other hand, I would unequivocally skip any class (if you can) that amounts to bull sessions where the students themselves debate philosophical topics (for instance, "what is ethical?"). You will not reach any minds in that class with your own arguments, nor will you hear any arguments worth refuting.
  17. The same way you separate the wheat from any chaff--lots of thinking and introspecting. I wasn't trying to claim you could read them once and understand everything. I remember reading the first thread on it over a year ago, and being shocked to read for the first time that induction is not simply counting swans or sunrises and generalizing into the future. It wasn't until I came back to this forum several months ago that the proper approach started to make some sense (the notion of "causality" is key). Sadly, I don't think there is a book (yet) that is better than the combined postings of the OO.net members (some more than others, of course). Well, it is, if you stretch the word "basically" far enough. There are many different ways of "taking observed individual occurrences and making a broad true statement about them," and not all of them are valid logical methods. (The fallacies of "hasty generalization" and "post hoc ergo propter hoc" come to mind, for instance.)
  18. This is a mistaken notion of induction. Have you read all the threads on induction on this forum? (You can search for threads whose titles contain "induction" with the search feature.) I highly recommend those if you are interested in understanding the error in your question. In particular, the argument that induction is a method of causality (not counting) has been made several times. If there are any specific parts of the argument that you don't understand or disagree with, I'd be happy to elaborate further.
  19. Yes, she says deduction, but right before that she says "drawing inferences," which is induction. Both are essential aspects of reasoning. Also, consider that she is not enumerating strictly separate methods, but poetically stating and restating different aspects to reasoning. (For instance, "reaching conclusions" can be either deduction or induction based on whether you are reaching a wider generalized conclusion or a narrower one about a particular concrete.) Then she summarizes all this by naming it "reasoning" (or "thinking," which is synonymous in this context). The error about "deducing reality," at least as used in an Objectivist context, refers to the error of starting with the axioms and deducing every other piece of knowledge. On this premise of rationalism, you could deduce that man has rights knowing nothing more than that A is A. (This is a very common approach among math-oriented people.) Now it is true that you must rely on the truth that A is A to properly arrive at any conclusion, but that does not mean that that is all you need. You need to look at the facts of any given context to induce conclusions about those contexts. This means in the case of "man has rights," for instance, you need to consider all the relevant facts about man, none of which is implied in "existence exists."
  20. You're right that happiness is not an end-in-itself either. I should have been more clear that my point was only to show by hunterrose's own justification ("it might just be that a person is happy by helping his child") that caring about a child's welfare relied on something more fundamental (namely, happiness). But, as you say, happiness is still not the end-in-itself we are looking for.
  21. But you've given the answer yourself: by saying a person values a child's welfare because it makes that person happy, you've made that person's happiness a more fundamental value than the child's welfare. In other words, the reason the person aids the child is because it makes the person more happy--so the child's welfare is not an end-in-itself.
  22. Ah, that makes sense. I'll try to take a walk around the block before I start my second successive post.
  23. Perhaps I was not clear enough. If I post one post, then take 15 minutes writing the next, how is that not 15 minutes between one "Add Reply" and the next?
  24. Of course. Did anyone say otherwise? Again, of course. This is what I meant by my "damaged spinal cord" example. Damaging one's brain is no different. This doesn't follow. If it did, let me ask you a question: what is the molecular weight of consciousness? What is its reflectivity, or its electrical charge, or its density? Are these meaningful questions about consciousness? Another way to think of this is to imagine a computer, which runs programs on physical hardware. This program interacts with the hardware through strictly physical means (specifically, electrical voltage and current), but does this mean the program is purely physical? If so, what is the physical size (in inches) of a program? What is its temperature? I can tell you those about the hardware that makes up a computer (which corresponds to your body and brain), but it doesn't even make sense to ask those about a computer program (which corresponds, in some ways, to your mind--that is, consciousness).
  25. You have misunderstood my point. I was most emphatically not stating that you should blindly agree with her, nor that disagreement is frowned on (in fact, disagreement--at least temporarily--is the whole point of this forum). Furthermore, I was not stating that you should read AR "ten more times" until you understand it (that would be intrinsicism--the idea that the words themselves can implant themselves as truth in your mind). My point was that you present yourself as familiar with her works, and yet appear to be unaware of a very basic position she takes (namely, that concepts are integrations of more than one existent). Let me demonstrate: (First, let me point out that this was not Fred's position--his position was that proper nouns are not concepts. But your later writing indicates you understand this distinction.) It's impossible to debate anything with someone who declines any suggestions to (re)read AR's standard work on a topic, then asks if some notion (which, it turns out, she states on page 10 of ITOE--six pages into the work) is the "official Objectivist position." How can we judge whether you are unaware of her position or disagreeing with her? How do we know if the answer is "read this book" or an attempt to demonstrate the truth of the issue? For that matter, even if we assume you are disagreeing with her and proceed to answer your question, how do we know which part you disagree with? Lest you think this is just an issue with proper nouns, the same problem crops up when you ask whether you can have a concept without giving it a name. In the second edition of ITOE, the appendix gives 10 pages of discussion on this exact question. If you do not know about that section, the best answer I could give is "read the chapter 'The Role of Words,' pp 163-174," because there's little I could say as a general answer that would be any better than that discussion.
On the other hand, if you have read it and then disagree with some point, I would expect you would say something along the lines of, "AR says on page X that Y is true" and follow up with either, "I don't understand this. Why?" or "This is wrong, because Z." Either lets us know you have already read the relevant section, so that we know your context of questioning. On a personal note: you strike me as a very honest, active-minded person. Your questions and understanding are much more advanced than those of most people that we tend to get here. (In fact, you correctly pointed out a flaw in my argument in my first post to this thread.) By far the most profitable use of your time would be to study ITOE first when you have some question on these issues, and then come here for follow-up debate or critique. Again, I am not suggesting you cannot disagree. It's just that the question that opened the thread (what measurements are omitted in 'inch'?) was a much better question for debate on this forum because AR did not give a definitive well-explained answer on it, and I for one have had problems seeing the answer. I hope this made it clear that I welcome those who "challenge her position" or "understand her but not fully."