Objectivism Online Forum

Everything posted by JeffS

  1. Even if we entertain such a ridiculous rebuttal, it still doesn't answer why he's no longer able to control the lava.
  2. His mind created it but is powerless to stop it? Why is his mind sometimes able to create reality, but other times is not? What evidence does he have that the lava was created by his mind? Has he created other things with his mind?
  3. It has always amazed me that people can be so willfully evasive about reality. It honestly scares me. It makes me feel like I'm conversing with lunatics. If they can contradict themselves so readily, and so blatantly, there simply is no end to what they may do. You wrote: "In objective reality, knowledge isn't intrinsic, since the world doesn't somehow magically reach into our craniums to make our estimates of what's objectively the case valid, and since people only have their own subjective estimates of what's objectively the case to go on, their expressed, acted-on interests do indeed sometimes clash." Assuming you're not redefining any of those words, the idea you are conveying with this statement is that knowledge is not something humans are born with. "Knowledge isn't intrinsic;" it isn't "an innate, necessary characteristic" of humans. You then wrote (and have been maintaining from the start): "[Definitions] (1) and (2) are the "bootleg" or "common sense" definitions, sustained by our innate sense of morality inherited from our small-band, hunter-gatherer ancestry." Assuming you're not redefining any of those words, the idea you are conveying with this statement (and your appeal to the authors you listed confirms this) is that knowledge - a moral code, which requires knowledge about reality - is "innate," or "existing at birth." These are contradictory statements, and no amount of spinning is going to get you out of that. So, again, which is it? Are we born with knowledge, or not born with knowledge?
  4. "Intrinsic: innate, inherent, inseparable from the thing itself, essential." Or is this another term you're "redefining?" What do you believe, George? Do you believe we have some inherited knowledge, or do you believe knowledge comes to us magically?
  5. The Native Americans who roamed this land (yes, Louisiana Purchase, but really all of America) were making use of it. Is the difference that they didn't claim it as property?
  6. Whelp, I tried. There's simply nothing I can do about evasion.
  7. Heather (if I may call you Heather), you're absolutely right. Thank you for pointing it out. And, without sounding too obsequious, if I can take this opportunity to tell you how much I enjoy reading your posts. Always very clear and concise.
  8. Well, I've always been sort of fuzzy on this, so I hope you don't mind walking me through it, Steve. I bought my real property from a developer, who presumably bought it from the farmer who once farmed this land. That farmer may have bought the land from some other farmer, but eventually we get to the point where some guy staked out some ground and didn't buy it from anyone. Or, he bought it from the government who bought it from the French, who didn't buy it from anyone. How come that first guy owned the property?
  9. Well, of course I'm asking you, and I expect an answer from your point of view. Do you remember your point of view? It goes something like this: Man has an innate, genetic predisposition to act in such a way that the benefit to others is maximized. From your point of view, you should be considering the benefits to others, at least to the child. Yet you've just been going on about how you feel, what your needs are, what you want. When are you going to consider the kid? Hmmm, interesting. So, you're telling me that I can't be expected to act upon knowledge I don't have? Would that include knowledge about what would benefit nameless and unknown "others"? Ahhh, a new concept. Goody. The "official" definition of "selfishness" is: "concern with one's own interests." It is not, "concern with the immediate gratification of the self." It is not, "willing to do whatever it takes to be number one," or however you want to redefine it. We've already established that it is not in my own interests to not save the child. My "own interests" are to not have the guilt of not saving a human life when all that prevented me from doing so was a nice suit. This was clearly pointed out by Dr. Ghate in the debate, yet Dr. Huemer evaded it. You wish to do the same. You want to accept someone's incorrect understanding as correct understanding. I'm not going to. "Selfishness" has a definition. "Altruism" has a definition. You can either use those words correctly, or choose not to. However, as I pointed out earlier, slip dog from the and is down one but to win. So, in short, "I should act so there is as much positive benefit for myself and for others as I can reasonably ensure. Now, I know a lot about myself - so I know how to ensure positive benefit to myself. I don't know anything about the kid. He appears to be struggling, but he might be just joking, trying to get me to ruin my suit. He might just be practicing treading water. Of course, he might actually be drowning. In such case, I should probably save his life. I know my suit isn't worth as much to me as being free from guilt for letting him drown, but I don't know he's actually drowning. Furthermore, I'm not sure he doesn't want to drown. Maybe his life isn't very valuable to him and the greatest benefit he could receive would be to die. In that case, if I save him, I will have lost a suit, which is a negative, and his life would be saved, which would be a negative. In which case, I will have gone directly contrary to my moral code! So, let me recap: What I know: my suit is valuable to me, but not more valuable than living guilt free. What I don't know: whether the boy is drowning, or whether he wants to be saved if he is drowning. Maximum benefit for all concerned based upon what I know: save my suit." Wow, really sounds like your moral code helped you come to a decision there. Sorry, still an arbitrary, convoluted mess. Really? There's no reason you should prefer your life over mine? In other words, "You are nothing, your people is everything." Then what good is your morality? Why can't Crusoe simply do whatever he wants, and damn the consequences to others? There's no benefit to him in regarding others, and there's no disadvantage when he doesn't.
  10. Okay, so Objectivism would disagree with Eminent Domain?
  11. Me, me, me, me, me! What about the kid? Where did you consider his interests? When did you consider what would be beneficial to him? Maybe he wants to be in the water. Maybe he's practicing holding his breath. Maybe he wants to die. And what about the effects he might wreak if he's saved? Maybe he is a little Hitler and will one day grow up to kill a whole bunch of people. That certainly wouldn't be beneficial to all involved, would it? It depends upon what's in my rational self-interests. See, I couldn't live with myself if I let the kid die just because I didn't want to ruin my suit (as Dr. Huemer first posits). So, it would be against my self-interest to let him die. However, if I've got my own sick kid in my arms, and I'm rushing him to the hospital, and time is of the essence - he'll die if he doesn't get attention soon - then let the other bugger drown. And I'll live quite contentedly with myself and the son I saved. (A scenario Dr. Ghate actually brings up.) How 'bout you? Would you let someone you cared a great deal for die so you could save the life of a stranger? If your answer doesn't begin with, "Well, it depends upon what would be best for all involved," then you're really just blowing smoke, and even you don't believe in your morality. Look real carefully at these two phrases. But it's telling me to consider others before myself. I'm genetically predisposed to be "nice," to consider the effect of my actions on others, to act not for myself but for others. I don't have an innate, genetic predisposition to be selfish, to regard myself before others. Right? So, why would my decision depend upon my sense of life? I'm merely supposed to act in ways which provide the greatest benefit to all. In such a situation, clearly I would sacrifice my life. I must since I'm old, and the child is young. The greatest benefit for all involved would be to save him at the cost of my own life. And why should he do this? What are the disadvantages to acting in such a way that the effects of his actions that impinge on others are not beneficial, or actually harmful?
  12. I get out quite a bit. I'm not sure what that has to do with reading books, though. I assume you've read the books? Did any of them present any evidence? Do you remember that evidence? If so, can you present it? So, an action which is innate cannot be considered innate because the actor doesn't have a concept for "self"? Do you have any examples of innate human behavior which don't activate until the right environmental conditions trigger them? So, when I can follow my morality, I should follow it always? Hmmm, well, that's interesting. Can I conceive of "day" to mean "night"? That could be pretty fun. It would make communication a little difficult, but hey, let's just re-conceive "communication," too! What concept did Rand revalue/redefine? Perhaps you feel that way because we're having trouble communicating? See, when you enter into a debate, or conversation with anyone, having a common language is imperative so that we may convey to each other the concepts we're thinking of. Since you insist on redefining so many words, I'm not sure what concepts you're talking about. In order to clarify what those concepts are, I need to ask you questions. I'm sorry you feel that this is "nit-picky," but I'm afraid I don't know any other way of communicating. Do you? Flip down the upswing on the clockwatch banana ring? If to the and were, did clip the zucchini. I don't know. Are you sticking with the traditional definitions of these words, or have you redefined any? We could probably cut through a lot of posts if you answer a few questions of mine you skipped over (which were many): You argued you couldn't live with yourself if you allowed the kid from Dr. Huemer's emergency scenario to drown. Why did you write that? Why are you so fixated on yourself and your interests? In regards to the same example, you stated that whether I should give up my life to save the child depends upon my own sense of life (and the concrete circumstances of the case.) Again, why would whether I give up my own life depend upon my interests when I have an innate, genetic predisposition(?) to consider the child's interests and act to effect the greatest benefit to both? What does your morality tell Crusoe to do when he returns to civilization?
  13. You can't cite a few? The web's a wonderful tool. If you give me a concrete example, I could do some research and find out where these "researchers" are getting it wrong. Babies aren't conscious? Actually, it's central to your point. If there is some innate knowledge we possess, the first place we should look for such knowledge is in babies. You don't make a distinction between "ability to gain knowledge" and "possession of knowledge"? You wrote: "No, you ought to be (almost) always both altruistic and selfish." (emph. added) So, I don't need to follow my moral code "always," I merely need to follow it almost always. Right? Furthermore, since altruism and selfishness are opposite concepts, it would be impossible for me to be both at any time, much less all the time, or even almost all the time. Ahh, then why the "almost"? Is that also a misunderstood concept? I should take actions which are in my rational self-interest. Those which serve my life and my happiness. I should take no actions which disregard my rational self-interest. None which detract from my life and my happiness. What do you recommend? Should I pursue actions which detract from my life and my rational self-interest? How could it be "up to [me] and [my] sense of life"? Aren't I supposed to do that which benefits others? There is no "me" in "them." You couldn't live with yourself? I don't understand why you're so fixated on yourself and your interests. Well, we're in luck! The concept "sacrifice" has already been defined precisely. Do you know what that definition is? I'll guarantee you that Dr. Huemer knows what it is because it's been around since, literally, the dawn of Man. Now, Dr. Huemer might want to redefine it, as you're trying to do, but notice the confusion that results when we try to change the definition of words which already represent valid concepts. Hmmm, well, English uses the conjunction "and" to "add" one clause or phrase to another. Regardless, what do your ethics tell Crusoe to do when he confronts civilization? Yes, people can be wrong and misguided. Should we then take their incorrect understanding of a concept to be correct? They are correct because they are incorrect? A vague concept is no concept at all. Either it has an identity, or it does not. If it does not, it does not exist. Altruism is not a vague concept. I know what it is. I would venture to say everyone on this board knows what it is. I know Dr. Ghate knows what it is, and I would bet Dr. Huemer does too. It seems the only one fuzzy on what it means, in this context, is you. So, you already have a perfectly good name for your concept, why are you trying to confuse the name for another? Depending on the circumstances? Hmmm, so words change their definitions based upon circumstances? I truly hope your current circumstances are optimal for understanding the words I'm writing sufficiently to convey my meaning. "Altruism refers to behaviour by an individual that increases the fitness of another individual while decreasing the fitness of the actor." Why would it be foolishness? Is it foolishness to follow one's moral code? Or, perhaps you're redefining the word "foolishness," too? Besides, wouldn't it be mutualism, or self-sacrifice?
  14. What's your evidence for this? Actually, that babies are ego-centric is not contested at all. There are literally reams of data supporting it. I would like to see your evidence that they are not, or that they need to be taught to have their own perspective. I haven't seen any babies who need to learn to see, feel, hear, taste, or smell. I've not read of any who need to learn to eat, or learn to cry when there is something wrong with them. If that doesn't show regard for self above all others, I don't know what does. Regardless, since you argue regard for others is genetic, how come children need to be taught it at all? So, I should only follow my moral code as much as possible? I don't need to follow it all the time? What action could I take which would not have a de facto effect on myself and others? Depends upon the child. If he just killed someone, for example, I might jump in and speed along the process. So, should I save the child even if it would cost me my life? Well, maybe that's the problem, George. We can't really discuss a concept unless you know what it is, can we? What I quoted. You wrote that Crusoe's selfishness works on a deserted island, used the conjunction "but" to indicate an exception to this thought, and followed it with "when the good Scotsman gets back to civilisation, he will find he encounters people." You ask what ethics tells us regarding how he should relate to those people. Objectivism is very clear on this point. Since you're arguing against Objectivism, I inferred you would argue Crusoe should sacrifice his wants and needs to those unnamed others. Is this not what you meant to imply? If not, then what should Crusoe do? What does the morality of "regard for others" tell us? So, let me see if I got this straight. A concept is given a name and a definition. That name and definition are agreed to relate to that concept, but that's only what people think the concept is supposed to be? That logically, the concept isn't what everyone thinks it is, it's what you think it is? Interesting. Actually, other-benefiting acts that are not sacrificial are called mutualism. Do you have any evidence to support your use of the concept altruism to mean mutualism? What do you call behavior by an individual that increases the fitness of another individual while decreasing the fitness of the actor? What do you call the concept of someone trading something of high value for something of low value?
  15. Where is "there"? What is the locus of these "intuitions?" If a baby is born with intuitions, how come empirical tests demonstrate babies are extremely ego-centric for the first few years of their lives and need to be taught that others sense the world through their own perspectives? So, some times I should be altruistic, and sometimes I should be selfish? What determines when I should be one or the other? Should I save the child even if it would cost me my life? How do you define "sacrifice"? Do you agree with Rand in that sacrifice denotes giving up a higher value for a lesser value? What should Crusoe do then? Ditch his productivity and creativity so that he may be of service to whomever wishes to use him? Here are various definitions for "altruism": "unselfish concern for the welfare of others" "individuals have a moral obligation to help, serve, or benefit others, if necessary at the sacrifice of self interest." "opposed to egoism or selfishness" "Regardful of others; beneficent; unselfish; -- opposed to egoistic or selfish" " individuals give primary consideration to the interests and welfare of other individuals, members of groups or the community as a whole." Notice any common threads?
  16. Exactly. If this passes the legislature, we still have the possibility that the SCOTUS will at least recognize the government has no power to make us buy anything.
  17. Here's just a suggestion for you: Dear Walmart Customer Care Person, After further reflection, I find myself agreeing whole-heartedly with you. Just last week I was discussing the family finances with my husband and we both came to the realization that the present system is not sustainable and the status quo is not an option. What with both of us working 1/3 of the year just to pay taxes, it's getting harder and harder to feed and clothe the horde of welfare recipients, let alone our family. It heartens me to read you believe in shared responsibility. In that spirit, my husband and I have decided to share our responsibility for feeding ourselves and our children with you. Starting tomorrow, we'll fill up our shopping cart at the local Walmart and proceed straight to our car. Please ensure your new corporate policy of shared responsibility is communicated system-wide as some of your managers may still consider taking other people's property without an exchange of value to be theft. Sincerely, your partner in my responsibilities, K-Mac
  18. I'm not sure I understand what's going on. Her school schedule was too full? There were other classes she wanted to take more than the Calculus? Did she not take a math class? What does "approved" entail? Did the principal and Math chair claim this class would be treated just like any other class? Why did it need to be approved? But here you make it sound as if she did receive credit and that the class appears on her transcript. Was this done only after your daughter brought it to the principal's attention? My initial reaction is: your daughter should have made sure the class would be treated just like any class taken at the high school. You've chosen a particular form of schooling, and you need to abide by the rules of that system. While this is clearly subjective, your daughter knew, or should've known, that going in. I don't see what the big deal is, really. Your daughter knows she did the class, knows she did well in the class, and knows the class will help in the future. It's on her school transcripts, so colleges will know how well she did in the class. Why does she care about how she's placed among those unwilling to do the extra work she did? Especially given the fact that this one class could not have had that much of an impact on her GPA.
  19. It wouldn't know what's in its environment. It would only analyze, and report, what its sensors measure. If I give a baby a picture of a dog, and a picture of a cat, then parade a dog past him, the baby will probably indicate the picture of the dog (point to it, pick it up, etc.). Is this "knowing"? I can scan a picture of a person into a computer, and the computer can then run through its entire database of pictures and find one that matches. Is this "knowing"? Is it consciousness? The problem is in bringing all that data together into a discrete unit. The robot may use light sensors to capture light reflecting off an object, and check that light against its database to find a match. If it could speak it might say, "Looks like a dog carpet table wall window." Other sensors might capture a sampling of the various chemicals in the air. If it could speak it might say, "Smells like oxygen, nitrogen, methane, carbon dioxide, freon, ... " and continue listing chemicals, probably thousands more. Without going through the other senses, you can already see the difficulty. How is the robot to tie all these different measurements together into a single entity? That is where the difficulty lies. For the robot, all the data it captures is separate and distinct - the data collected through its light-capturing device has no relationship to the data collected through its chemical-capturing device. Currently, robots are unable to pick out the data which is related - they are unable to separate "dog" from "carpet table wall window," and then put that together with "carbon dioxide" and whatever other chemical might indicate a dog, plus whatever sounds might indicate a dog, plus whatever tactile data might indicate a dog. Our senses do basically the same thing. They capture a lot of data, and they do so constantly. Somehow we're able to integrate these literally millions of different data points into discrete units. We're able to make the connection between all the different data which would indicate a dog, and ignore all the different data which does not. Robotics researchers are finding it difficult to replicate this. I'm not sure why you put this with the previous quote, since we weren't talking about goals then. However, it made me think of another problem with such a robot: it hasn't gone through any type of concept formation; all its concepts have simply been given to it. Apart from the problem of integrating all the different data points into a single discrete unit, you also have the problem of Conceptual Common Denominators. Even if we assume the robot could discriminate between an object and its surroundings, it's still merely taking measurements, but oftentimes measurements are ignored in the identification of an entity. For example, the data available to the robot may be the following: temperature of object - 38C, height - 3ft, color - brown, length - 7ft, etc. Now, what has it described? Those measurements could be the measurements of a very large dog, or a very small horse. What makes it a dog? In order to get past that, the robot would have to form its own concepts - it would have to learn through experience what makes a dog a dog, and what makes a horse a horse. It needs a hierarchy of concepts. It needs to have established Conceptual Common Denominators for each concept, which only comes through concept formation. The paradox lies in the fact that it needs to ignore measurements when measurement is all it can do. I don't know. You would have to ask one of the Objectivism scholars in here.
What I do know is that it is entirely possible for a conscious human to do it. Volition, if nowhere else, is indicated here by our ability to choose to work against what we purport to want, and even need. A robot is incapable of doing that. Well, I'm not sure what you mean, but you haven't answered the point. A robot can have no volition if its actions are dependent upon programming.
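
To make the "large dog or small horse" point above concrete, here is a minimal, purely illustrative sketch. All the concept names, ranges, and readings are invented for illustration; no real robotic system is being described. A matcher that knows concepts only as ranges of measurements cannot decide between two concepts whose ranges overlap:

```python
# Toy illustration of the measurement problem described above. All numbers
# and concept names are invented. The machine's data is just measurements,
# and the same measurements can satisfy more than one stored "concept".

readings = {"temp_c": 38, "height_ft": 3, "color": "brown", "length_ft": 7}

# Each concept is stored only as ranges of measurements, because
# measuring is all the machine can do.
concepts = {
    "very large dog":   {"height_ft": (2.5, 3.5), "length_ft": (5, 8), "temp_c": (37, 39)},
    "very small horse": {"height_ft": (2.5, 4.0), "length_ft": (6, 9), "temp_c": (37, 39)},
}

def candidates(data):
    """Return every concept whose stored ranges the readings fall inside."""
    return [name for name, ranges in concepts.items()
            if all(lo <= data[k] <= hi for k, (lo, hi) in ranges.items())]

print(candidates(readings))  # ['very large dog', 'very small horse'] -- ambiguous
```

Nothing in the sketch can say which concept is right, because deciding which measurements to ignore is exactly the concept-formation step the machine never performed.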
  20. The "integrate them into a structure" part of your position is the sticking point, and far more complicated than your conjunction implies. We may very well advance to the stage of a single hyper-alert machine able to tell us all the qualia of its surroundings, but that no more represents consciousness than several different machines each reporting its respective measurements. A camera does a good job of capturing light, and we could hook one up to a computer which could cross-match the shapes that captured light represents, but that doesn't mean we've created a consciousness that sees. No, I haven't necessarily replaced one goal with another if I choose to not act toward any particular goal - I've merely chosen not to act toward a particular goal. Nothing need replace it. For example, I can sit on my couch, eat Bon-bons and watch Leave it to Beaver re-runs all while having the goal of becoming rich and famous. Chances are my goal won't be realized, but that's a result of my volition - my choice to not act toward that goal. People do it all the time, and if you need evidence just go to an Obama rally. Contrast that with a programmed goal setter, which is, first and foremost, programmed to follow its programming. Regardless of whether it changes its goals, it can never change the fact that it is programmed to seek those goals and must follow its programming. There is no volition. It simply can't choose to watch the Beav's antics if its programming requires it to seek wealth and fame, regardless of how many different iterations of goals it has gone, or can possibly go, through. It might re-program itself to watch, but it is still following its programming.
  21. Actually, it's a huge stretch to think it's even possible. Not just technically, but philosophically as well. Copying intelligent behavior is not evidence of consciousness. Gathering sensory data is not sensing. They're still following their programming. You're never going to be able to get away from that fact. We are not programmed. I can update my goals and yet still choose to not act toward them, and even act against them. Following orders is not volition.
  22. I was watching the Travel Channel late one night and saw a program on The World. You don't have to go to the link. Basically, it's a ship which travels around the world. You purchase an apartment and live there. The residents own a piece of the ship. They vote on where to go for the next year. They didn't say in the program, and they don't mention anything (that I've found) on the web site about annual fees, but I'm sure there are some. It's one of those "if you have to ask, you can't afford it" things (base minimum net worth is $5M). It costs a lot of money to run a ship, especially that one. However, and they didn't mention this either, I'm sure there are no taxes to be paid. So, basically you have individuals subject to no government (as the term is commonly used), and therefore no government taxation save whatever taxation might arise as a result of certain business transactions. Yet, they are subject to a sort of government and a sort of taxation in the sense that there is a captain of the boat and they have to abide by certain rules; they also have to pay these fees to maintain the boat. How is this different from a land-based government charging "fees" (in the form of taxes) for the right to own land-based property? That is, would it be morally legitimate for a government to exact fees, in the form of taxes, for the right to live in that country, just as the owners of apartments on The World must pay fees to live there? I think the answer may lie in the fact that someone built this ship - someone created it and therefore has the right to dispose of it as they wish. They sell it to others, and get to set the rules on who gets to live there. That's different from land which no one created and therefore no one has the right to dispose of (initially). However, Ayn Rand maintained the only moral form of taxation would be a voluntary tax. Wouldn't property tax be a voluntary tax? That is: if you want to live here, you're going to have to pay for the maintenance - you're going to have to pay the fees. "You're free to leave," as liberals are so fond of saying. You've voluntarily chosen to live here, just as residents of The World voluntarily chose to live aboard their ship. As such, you'll have to pay the fees - you'll have to pay the tax.
  23. Dr. Ghate and Dr. Huemer had a debate at CU in Boulder, which I was lucky enough to attend. Someone taped it and you can listen to it here.
  24. Do you have any support for this claim? Because I highly doubt it. Even if you could build the equipment to accumulate light, register sound waves, identify various chemical compounds in the air, and determine the shape and texture of various surfaces as well as our eyes, ears, noses, mouths, and skin do, it would still be just that - a bunch of disjointed data with no consciousness to bring it all together, identify it, conceptualize it, name it and define it. And if "it" can't do that, then it will never be in any position to make any sort of decision. If you're holding this up as an example of how a robot is making decisions, then I disagree. The robot isn't making any decision at all - it's merely following its programming, as you point out. If I tell the computer in my car engine to print out my research paper for tomorrow and it just sits there, doing nothing, is it disobeying my orders? Did it choose to disobey me? When the robot goes against its programming, then it might be said to be making a decision, but without a consciousness there, I doubt even that.