Objectivism Online Forum

dougclayton

Regulars
  • Posts

    152
  • Joined

  • Last visited


Previous Fields

  • Copyright
    Copyrighted
  • Real Name
    Douglas Clayton
  • Occupation
    Computer Programmer

dougclayton's Achievements

Member

Member (4/7)

0

Reputation

  1. You are right that everyone is better off living in a free capitalist country, but it is not true that "improving society" is the altruist point of view. If that were the case, altruists would have adopted capitalism a long time ago. I know that's what they pay lip service to, but that isn't what they are actually after.
  2. Yes, that is what the stolen concept fallacy is. That is why Rand called it "assuming that which you are attempting to disprove." Also, I can't speak for Dave, but I believe that he means something like this (my words): in reductio ad absurdum, you start with the claim and show how it logically contradicts an independently known truth. In the stolen concept fallacy, you rely on concepts derived from the stolen concept as a means of denying it. There is no "assume that..." step, and no "but this contradicts what we already know" conclusion.

     To take the classic example of "property is theft," one reductio ad absurdum argument against property might be the following:

     1. Assume property (that is, exclusive use of some material good by an individual) is good.
     2. Eventually one person will own everything and let everyone else starve, which must be good.
     3. The human race will become extinct when he dies of old age, which must be good.
     4. But human extinction is bad, so the initial assumption (that property is good) is wrong.

     As threadbare and false as this argument is, note that it still moves from an assumption about property to a contradiction of a truth that is not about property. Consider instead the opening argument of Proudhon's treatise, What Is Property? An Inquiry into the Principle of Right and of Government (http://dhm.best.vwh.net/archives/proudhon-ch1.html), in which he argues from the nature of property to the conclusion that property is robbery. (By "property," he is actually referring to "unused land that you rent out," but his failure to essentialize his concepts properly does not change the fundamental error.)

     For this to be reductio ad absurdum, you'd have to start with "property is good" and then show that this leads to a conclusion independently known to be false. Thus, if this were a reductio, his absurdity would have to be "and the robbery that this transformation yields is good," but this is not independently known to be false. Furthermore, it cannot be independently known to be false, because the concept "robbery" in fact depends on "property"--and ignoring this strict dependence is exactly the fallacy. Note too that he does not commit the fallacy in his claim about slavery, which can be seen just from the fact that murder is not a concept based on slavery. (To match his second claim, his first would have to be "Life is murder.") Is that at all useful in illustrating the difference?
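     (Schematically--this formalization is mine, not Rand's--the contrast in LaTeX notation is:

       \text{Reductio: } P \vdash Q, \text{ where } \neg Q \text{ is known independently of } P \text{; conclude } \neg P
       \text{Stolen concept: } \neg P \text{ asserted by means of a concept whose validity presupposes } P

     The first derives a contradiction with outside knowledge; the second contradicts itself in the very act of assertion.)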
  3. That's funny, that was going to be my question. One warning: if you are planning to get statistical-grade results, and your features suggest you are, forget about Rnd (and pretty much any of the other standard VB functions). They're terrible.
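     (To make "statistical-grade" concrete, here is the kind of sanity check I have in mind, sketched in Python rather than VB only because that is what I have handy; the function name, sample count, and bucket count are all arbitrary choices of mine. For what it's worth, VB's Rnd has only about 24 bits of state, while Python's random module uses the much longer-period Mersenne Twister.)

     # A crude chi-square test of uniformity: fill buckets from a
     # generator and compare against the even distribution you'd expect.
     # (Illustrative only; a serious test suite would use far more
     # samples and several independent tests.)
     import random

     def chi_square_uniformity(rand, samples=100_000, buckets=100):
         counts = [0] * buckets
         for _ in range(samples):
             counts[int(rand() * buckets)] += 1
         expected = samples / buckets
         return sum((c - expected) ** 2 / expected for c in counts)

     # With 100 buckets (99 degrees of freedom), values far above ~123
     # suggest the generator is not uniform at the 5% level.
     print(chi_square_uniformity(random.random))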
  4. You know, I'd wager that Jennifer knows how to compute the percentage change in some variable. I don't think that's what she was asking.
  5. While I'm glad to see you do plan to develop one single feature at a time, it seems as if you have misunderstood parts of my post. I wasn't intending to offend. I was happy to reply, otherwise I wouldn't have, either. I like talking about things like this, particularly with someone as excited as you.

     I said critiquing your specific feature list would be a waste of time because 1) you won't get to most of those features for a long time if you are truly developing incrementally, so criticism would be premature, and 2) I'm not an author anyway, so my opinion on the relevance to writing a book is nearly worthless. But I do have relevant opinions on the development approach implied by that huge list, so that's what I focused on. Furthermore, I said it would be inappropriate for this forum because it by itself is not related to the stated purpose of this forum, which is to "trade information about Objectivism and discussion about its applications." Thus I posted only because I could apply Objectivist principles to software development. I have been corrected and "uncorrected" about this before, so I am not sure what is and isn't allowed. I hope that removes any hint I might have given that my intent was to argue against your project or your discussing it here.

     You may have interpreted the "Notepad" comment as a criticism or dismissal, but it was neither. The plain truth of the matter is that you will need all of Notepad's features (basic text editing, find/replace, word wrap) before you can add any author-specific features, because writing a book is, at its lowest level, writing text. That is the reason I recommended shamelessly striving for a trivial text editor (colloquially referred to as "Notepad") as version 0.1--not because all you will ever make is "just another Notepad." To misquote Francisco: "It is against the sin of overly large expectations that I wanted to warn you."

     Whether it takes you 1 day or 1 month to get a simple text editor depends on your skill level, but the fact that you have to have a small working program before you can have a large working program does not. Maybe you know better, but you'd be surprised how many newcomers think they can somehow skip making a fully tested but "boring" simple program because they are so eager to get to the "fun stuff"--all the features that will set their program apart.

     Then you have not understood the principle of Hello World. It's so pervasive because everyone thinks that making a program say "Hello world" is too small to bother with--before they've written it. (It is also partly a holdover from a bygone era, in which compiling and linking C was more difficult than it is with today's IDEs.) The principle is that one should write the smallest possible thing that compiles as one's first attempt at a new language or platform. (The concrete example of actually printing a single line of "Hello world" just happens to be the smallest detectable success for C-style programs.) If you've coded in C or C++ before, you don't start with Hello World for every new project. But since apparently you haven't, you will at some point write your very first program that you expect to compile and run, and there's no need to make that more complicated than it has to be. In fact, take a look at your tutorial and tell me if it doesn't start out with something small and trivial so it can focus on the basic build step ("code, compile, test").

     Anyhow, good luck with your project. It sounds like you've put a lot of thought into it and are eager to get started.
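     (To make that last point concrete: in Python, say, the smallest detectable success is a single line. The value is not in the line itself but in proving that your "code, compile, test" loop works before anything else is added.)

     # Version 0.0: the smallest program whose success you can observe.
     print("Hello world")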
  6. It would be a waste of our time, and inappropriate for this forum, for me to critique your feature list, so let me offer some general advice instead. (My qualification: I have been developing software for over 10 years.) Your feature list, not counting the plugins, would require at least 10 years' development for a single experienced developer. Since I don't think you want to wait that long before you have something useful, I'd like to suggest a different approach from laying out all the features you want: incremental development.

     Figure out the absolute bare minimum you need to develop to make something usable by you alone. (No one else will want to use it at this stage.) Now take that bare minimum and throw out the least important 75% of it. If you do this right, what remains will be so small it will hardly seem worthwhile. (At this point, it won't be much more than Notepad.) The goal is to make a program that you can start using as soon as possible, so you can begin "eating your own dogfood," as they call it. Then add a single feature at a time (namely, the one you most personally wish you had, given your use of this program to write your book), always keeping the program fully functional. A sketch of such a starting point follows below.

     To tie this in to philosophy: most methods of software development suffer from ivory-tower rationalism, the error of believing that proper principles (specifically, proper software organization) can be developed without any experience (specifically, any actual programming). This leads to the "waterfall model" (http://en.wikipedia.org/wiki/Waterfall_model), or "big design up front." The opponents (the "extreme programmers"), of course, tend to suffer from empiricism to some degree--the notion that all design (i.e., principles) is worthless and you should code without any planning.

     Your best bet is an objective method: one that starts with observation of the facts, then moves to generalization to abstract principles and application of those principles to new situations. Applying that to software development means starting with first-level concepts, so to speak: getting a "hello world" program (http://en.wikipedia.org/wiki/Hello_world_program) to work. Observing what works and what doesn't will lead you to further principles you can apply (such as the "principle of least surprise," abstraction, encapsulation, etc.). You can then apply a principle you induced from one set of concretes (like "information hiding," http://en.wikipedia.org/wiki/Information_hiding) to new concretes, saving you from making the same errors over and over again. Contrary to empiricism, you can develop principles that will help you plan your next project. But, contrary to rationalism, you won't be able to know which principles are proper and which are improper until they are tested in the real world, so to speak. Incremental development is the best way to ensure your success.
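     (Here is the promised sketch of that bare-minimum starting point, in Python/tkinter purely for brevity--the toolkit and every name in it are my own choices, not a prescription. It is nothing but a text area with open and save; find/replace and the rest would each come later, one feature at a time.)

     # "Version 0.1": a usable, trivially small editor. Word wrap comes
     # free with the Text widget; everything else waits its turn.
     import tkinter as tk
     from tkinter import filedialog

     root = tk.Tk()
     root.title("Version 0.1")
     text = tk.Text(root, wrap="word")
     text.pack(fill="both", expand=True)

     def open_file():
         path = filedialog.askopenfilename()
         if path:
             with open(path) as f:
                 text.delete("1.0", "end")
                 text.insert("1.0", f.read())

     def save_file():
         path = filedialog.asksaveasfilename()
         if path:
             with open(path, "w") as f:
                 f.write(text.get("1.0", "end-1c"))  # drop tkinter's trailing newline

     bar = tk.Frame(root)
     tk.Button(bar, text="Open", command=open_file).pack(side="left")
     tk.Button(bar, text="Save", command=save_file).pack(side="left")
     bar.pack(fill="x")
     root.mainloop()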
  7. It states an aspect of reality that is implicit in the nature of being conscious. Its utility is therefore in making this heretofore implicit truth explicit, so you can deliberately avoid contradicting it. And although it is implicit in every statement ever made by anyone, it nonetheless deserves an explicit formulation, which cannot be condemned because it is "obvious." In truth, I am not sure what your point is. You don't seem to be saying that nothing exists, so why are you so convinced that "something exists" is meaningless? You imply that the claim that "nothing exists" is wrong, so doesn't that mean the claim that something exists is right?
  8. Could you explain to me what it means to base an argument on a definition that is circular and makes the argument unquestionable?
  9. I suspect you may not know what the term "blank slate" refers to. In trying to explicate to myself what was wrong with your conclusion, I came up with the following analogy. Consider the term "blank film" instead of "blank slate." In saying that we are born "blank film," we mean that the film is unexposed; it has no images or content on it. However, this does not mean that all film is alike. You can get very high quality film that will reproduce images with near-perfect fidelity (which you can use to take pictures of worthy scenes or random garbage, as you wish). You can also get cheap film which will only record grainy, low-resolution images (with which, again, you can choose to capture great scenes or lousy ones). Just as obviously, this distinction between high-quality and low-quality film does not mean that high-quality film comes with pictures already on it. In this analogy, the film represents a baby's mind at birth: with certain properties like intelligence already set at a certain level, but no content whatsoever. That remains for the child to fill in as he chooses what to "take pictures of," so to speak. In short: A baby's mind is as devoid of ideas as a new roll of film is devoid of images.
  10. (Caveat: I am not a professional historian of philosophy. I don't even watch them on TV.) Executive summary: he said primarily that we cannot possibly know causation, but it follows pretty quickly, for him and his followers, that it doesn't exist.

      It has been over a decade since I read Hume, so I decided to check my claim. I don't have any primary sources by Hume handy, so I will have to make do with a web search. I am relying on Wikipedia because 1) it tends to be most accurate when it summarizes a school of thought, rather than asserting whether that school is correct, and 2) the author(s) clearly agree with Hume. From it, the Catholic Encyclopedia, and some other reading I did, I would say that he rejects not only our knowledge of necessary causation (epistemologically), but the fact of it as well (metaphysically). Naturally he still speaks of "cause," but to him it means something entirely different than it does to Aristotelians/Objectivists: he means a mental construct arising from "constant conjunction," good only for as long as we have observed it, whereas we mean "identity in action," and therefore universal.

      Thus the chickens come home to roost when it comes to induction: since induction requires determination of the underlying cause, Hume naturally rejects induction as a consequence of his view of causality. As a bonus, the analytic/synthetic dichotomy rears its ugly head here, in the words "logically necessary." I have to give credit to Dr. Peikoff's deep but worthwhile essay in ITOE for my understanding of how a denial of identity (and its corollary, causality) is at the heart of the dichotomy.
  11. Well, I have to say, David, that your familiarity with formal logic is both appealing and distracting: it's good to see someone who is clearly knowledgeable enough not to throw out the baby (formal notation and rigor) with the bathwater (the philosophy behind modern formal logic). On the other hand, it can be hard to read for someone whose last use of this was in college a decade ago. So let me try to translate. Given that you talk about a universally quantified proposition, I would expect you to refer to deriving:

      for all x, P(x) is true

      Granted, the upside-down A is hard to do in plain text, but typing in "& forall;" (without the space or quotes) and previewing twice does it: ∀ (at least in the Opera browser). But since you have what appears to be ∃ (& exist;) as a forwards E and ^ as a negation, I get:

      there is no x such that P(x) is true

      This is identical to:

      for all x, ^P(x)

      But usually one does not express things in the negative: Newton's third law is not "there is no action that has an unequal or non-opposite reaction," even though that is logically identical. Thus I will assume you mean "for all x, P(x)." In plain English, I would expect this to mean that induction introduces a new universal proposition P that cannot be deduced from the existing known propositions. (If it can be so deduced, you get the flaw I referred to earlier: that mathematical induction is not introducing new propositions, but extracting truths that derive from pre-existing ones.) I could speculate on what this means, but several different interpretations I tried all yielded contradictions, so I will just have to let you tell me. (I know A V ^A is intended to be a tautology, because it must be "true OR false" or "false OR true.")

      I am coming to believe this. I should probably get Dr. Peikoff's lectures on this to find out more about how "induction is measurement omission applied to causality."

      Addendum: I get blocks instead of the "forall" and "exists" symbols in Internet Explorer, which I expect many forum members use. Sorry.
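      (For anyone whose browser also mangles the symbols, the equivalence I am relying on is the standard quantifier duality, here in LaTeX notation:

        \neg \exists x \, P(x) \iff \forall x \, \neg P(x)

      and the tautology in question is A \lor \neg A.)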
  12. I believe that is his fundamental objection, yes: that in perceiving some entity A do some action B, one sees only that it is one way--one does not perceive any "necessity" about it anywhere. Thus, since things can be some way without necessarily being that way, one cannot say how they will be in the next instant. So he really attacked causality, as you say, and not induction per se.
  13. But "testing it for every single n" is exactly what a proof by induction does not do. That would mean, for the claim that 1 + 3 + 5 + ... + (2n-1) = n^2, you would have to compute the sum of all odd integers from 1 to 2n-1 and see that the computation was equal to n^2, for every single n. In a proof by induction, you do this just once, for the base case. The equation is shown to be true for every n, but it is not tested for each n. I hate to contradict you again, but this is what science does when it uses induction--only it no more looks at "each bird" than a mathematician evaluates the formula above for "each n." How do we know that man is mortal? Certainly not by looking at every single man to see if it has the same characteristics as the last man, then the next, then the next...
  14. Boy, there's nothing like posting something to a public forum for making you question your assumptions. In particular, both my examples were the application of wider principles to a specific context (algebraic manipulation in one, and Newtonian mechanics in the other). Thus I can see how they could be called deductive rather than inductive. The truly inductive conclusion would be, say, Newtonian mechanics in the first place. I still welcome feedback, but I will have to think more about this.
  15. You know, that's something I've been wondering about lately. I used to be bothered by the term "induction" being used for the mathematical method of proof that consists of the following steps:

      1. Show something is true for a base case (say, n = A).
      2. Show that if it is true for n, then it is also true for n+1.
      3. By extension, since it is true for A, A+1, A+1+1, etc., it is true for all n >= A.

      But the more I think about it, the more I think there is fair cause for calling that induction. Specifically, the essential aspect of the proof is step 2: showing that the truth for n+1 follows directly from the truth for n and the mathematical properties of numbers. And, in the end, we have a wide claim for all n that has only been directly observed for some n.

      This means that mathematical induction is ladder-like, in that there is a directly observed starting point and a "link" in a causal chain. Not all induction has this form, so mathematical induction is a subset of induction in general. Furthermore, some inductive conclusions seem to have this mathematical form. For instance, the (proper) claim that the sun will rise every 24 hours would take as its "base case" the observation that the Earth has a certain rotational velocity (measured by the time elapsed from noon to noon). Then we show that, by the laws of mechanics, the rotation will continue unabated, since there is no external torque acting to slow it. Thus we know that the sun will appear to rise every day in the future. In a sense, the laws of mechanics fill in for step 2: if there is a celestial body with a given rotational velocity and orientation, then N hours later it will be in the same state again. Direct observation of the celestial positions today and tomorrow gives us step 1, and it is then a deductive inference that step 3 will hold. (Naturally, there is actually some slowing down that occurs, so you would have to speak of measurable deviation from the current length of "one day.")

      Although I am pretty convinced of the validity of this view, I definitely welcome comments or criticism. I don't want to commit the fallacy of "hasty generalization" myself.
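      (For precision, the schema behind steps 1-3, in LaTeX notation:

        \Big( P(A) \;\land\; \forall n \ge A \, \big[ P(n) \Rightarrow P(n+1) \big] \Big) \;\Rightarrow\; \forall n \ge A \; P(n)

      The sunrise argument instantiates P(n) as "the sun rises on day n": the observed rotation supplies the base case, and the laws of mechanics supply the conditional.)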