Objectivism Online Forum

Posts posted by nanite1018

  1. I also struggle with the idea that all Transhumanists respect reason and science when you said yourself that many are altruists and Utilitarians. These are not philosophies founded on reason.

    Hope that made sense.. lol

    Well, in my opinion, transhumanism is not a full philosophy; rather, it denotes agreement with a few philosophical positions and certain values (long healthy lives, technological advance, that sort of thing).

    I've never heard of a religious transhumanist, except maybe for a Buddhist or something. Almost all are atheists/agnostics. They explicitly state that reason and science are the means by which we can understand the world, and the means to achieve human happiness. The book Hotu refers to in his OP explicitly states that philosophy has neglected its duty and become mired in irrationality and inanities (a sentiment which seems very popular in transhumanist circles, and in Objectivist ones as well).

    Many react by embracing a certain Scientism, where they think science can answer every question, and often are very reductionist and in many cases determinist. However, I see this as an honest error. It is one that many, many people fall into in our culture. Richard Dawkins is somewhat utilitarian and is an altruist, but I don't think him an enemy because I think he is basically honest (he is not a transhumanist, but I am just giving a well-known example). I think that many such people are quite similar to me (I used to be one a few years ago): fundamentally committed to reason, but mistakenly accepting the assumptions of our culture about reason and where it can be found. This is why I think most are fundamentally committed to reason, even though they are mistaken in ethics (our culture makes it really hard to get ethics right, lol).

  2. I would like to make another point, which is that I find it interesting that the thread on "should we seek immortality" had a resounding "YES!" as the answer (provided we're talking indefinite lifespan, not literal impossible-to-die immortality), and yet here we have people railing against it. Am I missing something? Or is it only this whole calling it "transhumanism" thing that people are hung up on? Here's a link to that thread if you want to review it:

    Should we seek immortality? thread

  3. See, Maken, transhumanism doesn't have an ethics, per se. Sophia deems this a problem, which is a conversation we could have. But transhumanists are merely people who support the development and deployment of the technologies discussed so far, advocate the use of reason in all human affairs, and are supporters of the advance of science, seeing it as key to the happiness of mankind. That is all stuff Objectivists agree with (well, except maybe for the technologies, but I think most are unobjectionable). Some have offered visions of transhumanism as a totalized philosophy, but I don't think that is possible, to be honest. There is far too much room for variation within the bounds set, and huge variance among transhumanists themselves. Some are laissez-faire capitalists, a few are socialists, perhaps the majority are mixed-economy types. The majority are utilitarians of one stripe or another, but many are rational egoists (even if they're not Objectivists), and some are more deontological in their bent. Most are altruists of one form or another, but again, many are rational egoists of some sort. Some are determinists, some are compatibilists, some are volitionists (is that the word?). Some think man is inherently flawed, some hate their bodies, etc. Many think man heroic and love their bodies (and still think they can make them better).

    Basically, I'd summarize it like this: transhumanists all have certain elements in common, including a strong respect for reason and science and a pro-man outlook (as opposed to the man-hating environmentalists). They support technologies that extend our lives and improve our health, and also expand (not necessarily all, but most of) our abilities. Beyond this, they differ greatly among themselves. Transhumanists each have their own individual philosophy, which is only partly in agreement with those of other transhumanists.

    Also, might I suggest quoting some portion of my post in the future, rather than the entire thing (unless it's relatively short)? Quoting it all takes up quite a bit of space and doesn't do much to further the discussion.

    Have you tried investigating transhumanism at all on your own? Has anyone who has participated in this thread, besides me and Hotu?

  4. I disagree that life is self-sustaining. One must make choices in order to live; an evasion of the necessity to make choices cannot lead one to sustain his life.

    If the purpose of one's life is to achieve immortality and to prolong life, and that was the value above all other values, then there definitely are circular issues. Transhumanism, at least the position taken by some in this thread, holds that all values are impossible without life and that, therefore, man must look to prolong his life above all else. This means that the value of "prolonging life" implicitly rises above the Objectivist view that life is an end in itself. This can lead to rights violations and neglecting to enjoy one's life for what it is.

    Correct me if I am wrong, please, I am trying to learn :D

    I am trying to be healthier because I think it will benefit me in three ways: 1) I will be able to do more in the present, 2) I will be able to do more and for a longer period in the future, and 3) I will physically feel better doing it. This is, presumably, why everyone wants to be healthy. I am not now advocating, nor have I ever advocated, survival as the only point of life. That wouldn't make sense. I have to survive as a human being, i.e. I have to pursue rational values and produce, etc. You seem to believe that having living a good long time (healthily and happily, as a requirement of this) as one of many goals (granted, an important one) is somehow nonsense. Why? Why not eat yourself to death? Why bother taking preventive measures if, for example, one were to find out one was genetically predisposed to Alzheimer's or diabetes or heart disease? Precisely because you want to be able to be around to continue to pursue your values. And because you want to pursue your values, you will, as a result, include your health as one of your values. I don't see why being supportive of big-time improvements in medicine is found to be so ridiculous/horrible by some people.

    As for the charge that anything resembling "immortality" (by this I mean the effective defeat of aging and disease as a cause of death, not some magical immortality attributed to vampires or invincibility or anything akin to such nonsense, and so it would properly be called "indefinite lifespan") will not be possible in my lifetime, I think that is pessimistic. I am quite young, only 20 years old. If I am in good health, I can reasonably expect to live to roundabout 80 or so, giving me 60 years. That's an awful long time. Why, 60 years ago we barely had computers. We didn't know what DNA was. We had only a vague idea of the workings of cells. We had essentially no idea about the causes of aging. Organ transplants were rare at best and extremely risky. The list goes on. We've learned a huge amount about how the cell and body works, and are even now beginning to apply this knowledge to new treatments. So I don't see why my life expectancy couldn't be several decades longer than 80 years. And in those decades new advancements would come. I'm not saying I'm guaranteed to live to 1000, but I think I have a shot if I take good care of myself. In any case, I like living and intend to use whatever science can give me to maintain a healthy life so I can go on doing what I want to do, whether I live to 80, 100, 200, or 1000+.

    Let me repeat something I have said before: No one believes immortality is possible. We will die. We already have an idea of when it will be literally impossible for anything to survive: around 10^120 years from now at the latest, ~30 billion years at the earliest. So literal immortality, i.e. never dying, is impossible. No one says that dying from accidents can be eliminated either, merely reduced in likelihood. No one even says that disease can be completely eliminated as a cause of death. What transhumanists say is that we will one day be able to live enormously longer than we do currently thanks to the advance of science, and that such a prospect should not only be looked upon with eagerness, but that ideally one should do something to help that day happen sooner, if one values one's life and has a general respect for life. So let's be clear: transhumanists argue for the desirability of an indefinite lifespan, so that, barring accidents, one may live as long as one likes to continue living. They also argue for many other things, but that is one of their big pushes.

    No one has given me a reason why one would NOT want the ability to live as long as one likes, and why one would wish to campaign against the advance of medical technology in order to avoid such a fate (of having that ability). That just seems crazy to me, but it is what is necessary if one is to say that transhumanists are bonkers for thinking such a thing desirable. I really do think that one's health would be a major concern for Objectivists (not an overriding one, perhaps, but an important one), and so the advance of medicine would be something we'd all like to see and support, campaigning against those who say it is a bad thing. I'm not saying it takes a separate philosophy, or movement, at all. Just saying that the advance of medicine is almost always a good thing, and maintaining good health would be something important to most rational people.

  5. If the goal is immortality, then it seems to me that the goal is to live for the sake of others who have not yet lived.

    Doesn't sound very compatible with objectivism to me.

    How so? Because I don't see that at all. I don't want to die if I can help it. So I intend to live healthily as long as I am able to be happy and pursue values that are important to me. I don't see how that is in some manner altruistic. How do you come to that conclusion?

    There is a parallel here with the Bioshock scenario, and in particular the mad surgeon early in the game who indulged his quest for perfection and would not stop cutting.

    Well, I can see there are risks (Bioshock has people get superpowers basically, which would be cool if it could actually happen, but at the cost, apparently, of their minds, which is definitely not cool). Perhaps it isn't the risk of side effects that you are talking about; instead you mean that if we keep trying to make ourselves better, we'll lose sight of essentials and sort of lose ourselves? I mean, that could be a risk, sure, but so long as one ensures that people have their individual rights protected (so that no one can force these technologies on others), then it is your personal responsibility to protect against it. That is one factor to be included in the calculation of whether or not someone wants to use some particular technological development, not an argument against developing these sorts of enhancement technologies as a whole.

    Was that what you meant (that we might lose ourselves)? I couldn't really tell. Btw, I loved Bioshock, great game. The problem with Rapture was mostly that almost everyone in it was nowhere close to Objectivist (and even Ryan, who seems closest, had some big problems)- no Objectivist would choose to go insane in return for superpowers. Not a good trade.

  6. Ferris, I would not have sex with anyone unless I was in a relationship. And I would not be in a relationship with a devoutly religious person (a Christmas-and-Easter Christian is as far as I could ever imagine going in that direction, though really I'm not particularly interested in anyone any more religious than a deist). So I would not have sex with a devoutly religious person, no matter how attractive I might find their body. The "sacrifice for ill people" is a little ill-defined, so I can't really say about that; it depends on context.

    Oh, and I am a heterosexual male, btw.

  7. It does not naturally follow from Objectivism that we ought to enter into what they call the "post-human phase". In order to grasp what this means we have to look into what makes us human. It is not our heart or a leg, right? ... it is our mind, our brain. So let's not beat around the bush with the general use of the term "technology" - what is at the center of trans-humanism is not electronic hearts or livers, with which rational people would not have issues and which would not need much defending - but alterations to our brain, which is in a totally different category. Electronic livers do not require a separate philosophical movement.

    Let's stop talking about philosophical movements, because it isn't really important. Let's examine the actual positions in "transhumanism" and see whether they are objectionable. Somatic genetic engineering to improve one's own body's functioning, without changing the DNA in the germ line (and so without affecting later generations), IS objected to by many religious people. I can't imagine an Objectivist could object, however. Then there is germ-line engineering to improve one's offspring's DNA: for example, to eliminate disease, to select certain traits like eye color or height, and to perform limited manipulation of statistical predilections (say, changing the risk of alcoholism or violence, or slightly shifting intelligence; this doesn't mean a child couldn't become an alcoholic if you change some genes, only that he would seem to be at a lower risk of it). All of those seem to me unobjectionable. They do nothing but play around with genes already existent in our gene pool, and so aren't in any way making the children "inhuman" or whatever. Yet vast numbers of people would object to even this much. And then adding genes which may, for example, improve their functioning in the same way that I would do to my own (say, increasing the efficiency of one's lysosomes in destroying waste products in the cells) is also opposed by many. I don't think these should be objectionable based on Objectivist ethics either, though.

    Alterations to one's eyes to, say, be able to add information from other sources don't necessarily mess with the brain, and we already have artificial eyes which are wholly inorganic for patients with damaged eyes but functional optic nerves (they're not as good as normal eyes, but may be in the next decade or two). These shouldn't be objected to (and most people wouldn't object, so long as they are used to fix something deemed broken, rather than to improve a normal person's eyesight; that would make many people angry).

    Alright, well, what about controlling computers with one's mind? We already have experiments where this is done with real people, and they don't seem to have any alterations in personality or anything because of it. People in experiments are able to control artificial hands or move cursors around screens with their minds. These mind-to-world interactions should also be unobjectionable.

    So what we are really talking about are things like my math-chip-in-head proposal, where it actually somehow spits the answer back out. That would take significant work, but might be built on the technologies described above (perhaps by having the answer output to the eye, for example, and having input done manually). That wouldn't be quite what I meant earlier, but it wouldn't be impossible. We could certainly have access to the internet without altering people's minds (through more advanced versions of the technologies for moving cursors around on a screen, and by projecting the information onto the eye, either via a contact or by incorporating it directly if the eye had been replaced with an electronic device). Some would have problems with even this technology, even though it wouldn't affect one's mind at all.

    Finally, we have major changes which we don't even know are possible yet, which seem to be what you are objecting to. However, transhumanists would likely claim that someone who has all the technologies already discussed in this post would qualify for the name "post-human". After all, genetic engineering may end up using an extra chromosome to house all the added genetic material (since it would be safer, being less likely to cause cancer), in which case, if it had been germ-line engineering, the person would technically be a member of a different species (as they would not be able to reproduce naturally with an unmodified human).

    And people ARE opposed to genetic engineering of oneself, as well as to the use of any technologies to improve performance beyond what is possible to a normal human, even when such technologies do not actually affect the brain. I think you would admit those wouldn't be a problem at all, and indeed would likely be a positive good (and support for their development would be almost mandatory for an Objectivist; again, so long as the technology doesn't try to change the mind).

    The overconfidence in our ability to accurately predict the consequences of brain alterations is similar to the overconfidence related to central economic planning on a world-wide level. The likelihood of a disaster is very high, almost to the point of a guarantee.

    What follows from Objectivism is the fact that if such alterations do not affect my safety and my rights - I would not stop you from doing this to yourself if you wanted to - in a very similar way to how I would not try to stop you from taking drugs in the privacy of your own home.

    What follows from Objectivism is that I would fight very hard for a law that would prevent anyone from doing such alterations to another person without their consent, which would absolutely exclude children until they can make an educated decision for themselves.

    Well, we've been doing brain alterations along the lines of controlling things directly with our brains that are not part of our bodies for over a decade now without a problem. Bigger changes may cause problems, but that's why most people don't volunteer for major medical experiments. I don't see any problem with germ-line engineering, which does affect one's children (after all, when I'm doing it, they aren't human yet anyway, and I am making them better). Now, invasive changes, like adding chips to their brains, sure, one shouldn't do that until the age of consent, and I would support a law which forbade that. But let's not miss the forest for the trees. No one is advocating the forcible imposition of things on other people without consent, so why keep bringing it up?

    Your belief that there doesn't need to be a big push to allow the use of technologies that don't involve major invasive changes to the brain is wrong. Most people are dead-set against any use of technology which will make people function better than the normal human range, and many are against any sort of genetic engineering of oneself if it isn't purely to fix an already accepted disease (many anti-aging treatments aren't actually going to treat specific diseases, but rather the underlying reasons why the body becomes prone to developing them in the first place). Many think the goal of radically extending the human lifespan is abhorrent, even when it involves no use of force and isn't taken as the aim of one's life. Bring up the prospect of being able to live 1000 years to many people, and they'll shrink back in horror, screaming about overpopulation, or how it's unnatural, etc. They are saying that even wanting to live longer than about 100 years is horrible, without even offering any moral argument about what the aim of one's life is. So there is a LOT of room for a major drive in the culture to support the development of such technologies. It doesn't have to be separate from an Objectivist movement by any means, but it isn't as if the use of these sorts of technologies is unopposed in the culture.

    You may be correct about the possible problems with such ideas as "transhumanism" and "libertarianism" and "conservatism" etc., but particularly in the case of transhumanism, the point isn't whether it should be a separate position, but whether or not one should support the development and use of life-extending, performance-enhancing technologies in general (provided they have acceptable side effects). That is the real focus of the discussion, in my opinion. Is such support (given the requirement that side effects be acceptable, which precludes your worries about brain alteration) a natural result of Objectivist ethics? Is there a moral difference in Objectivist ethics between fixing a disease and improving someone's performance beyond the "normal"? Those are the real questions, not all this stuff about grabbing kids and jamming chips into their heads and sending them off to the factories to be worker drones or whatever.

  8. It is this suggestion that involuntary death as a phenomenon is unnecessary that strikes me as pure fantasy. The potential causes of death are pretty much limitless. The desire for immortality has no basis in reality.

    Well, it certainly is desirable to decrease the number of ways one can expect to die involuntarily. Perhaps that definition was a little off. No transhumanist believes one can live literally forever, as at least one day it appears there will be no usable energy left in the entire universe, and so no life of any kind will be possible. The actual goal is an "indefinite life span", which means that you will not die due to biological causes. In essence, disease and aging are conquered, and death will only be the result of accidents and suicide (and murder, of course). But even our ability to survive accidents could be significantly increased with advances in medical technology. I think one should keep that in mind. The position taken by transhumanists, and which Hotu is saying Objectivists should consider, is that an indefinite life span (death then only resulting from accidents, murder, and suicide) is possible to achieve and desirable.

    I don't see any reason at all why aging and disease can't be essentially eliminated as causes of death (maybe the rare bird will die from them, but the number who do will be vanishingly small). And since it isn't impossible, and no one is saying that life SPAN should be the standard of life (or at least I am certainly not saying that), I don't see why an Objectivist wouldn't think it desirable. Getting old and sick sucks; why would anyone WANT it if it could be helped?

    How is it determined what is and is not in our interest? Who is the "our" - humanity as a collective - an individual?

    Hypothetically, when you have a chip in your brain which makes you calculate numbers faster but makes you less human in terms of other normal human brain functions - is that in "our interest"? Yes? No? How do you judge that? Should some get this chip implanted at birth and be "sentenced/predetermined" to certain jobs and excluded from others at the moment they are born because that will be good for the efficiency of humanity overall? No? Why not?

    Sophia, I am a student of Objectivism/fellow traveler/not-in-disagreement-with-but-only-uncertain-about-small-parts-of-Objectivism-and-agrees-with-the-rest type. I thought it would be obvious that I am answering all your questions just as any Objectivist would answer them. Again, I don't think transhumanism is really a "philosophy", though some have tried to make it such. It's more of a philosophical position (or maybe a small set of them), akin perhaps to libertarianism or conservatism or liberalism in its vagueness and iffiness as an epistemologically valid concept. Regardless, all I am trying to get across is that adopting "transhumanist" positions (i.e. support for the use of technology to enhance our abilities in all areas that we can and deem useful to us) is a natural result of the Objectivist ethics.

    I would have to judge whether having any given enhancement is in my interests based on how it can be expected to affect my pursuit of my own life and happiness. I would judge it based on what its effects actually are, based on experimental data with volunteers (it is extremely unlikely I'd be an early adopter for any of this, but there definitely will be many who would volunteer). It would, in a way, be similar to how one judges whether or not to take psychiatric medication: it affects how your brain works and may cause you some problems in that area, but could have huge benefits. It depends on your particular context. I don't see any fundamental moral difference between doing something which fixes my body if it is substandard in some way (not functioning up to "normal" specs, say having a disease or bad eyesight) and doing something which enhances my abilities beyond what is normal (like Tiger Woods, who had 20/8 vision after LASIK surgery). In both cases, I am simply weighing the possible costs of the course of action against the benefits to my pursuit of my values. "Normal" human ability doesn't have any special moral significance in my view, and so I don't see how either should be more objectionable than the other.

    There are questions about identity and the like, as I mentioned before, but other than those, I really don't see why anyone would be against technologies that cure disease, repair the damage from aging, greatly increase one's ability to heal from injury, or boost one's abilities like strength, agility, the senses, or intelligence (if they have acceptable side effects in the context of the individual's life).

    No one said anything about forcing people into certain jobs, or making everything about Humanity (with a capital "H") or the State or anything like that. I don't know why I have to keep repeating myself, but I am not advocating the initiation of force, EVER. I am trying to make the case that Objectivist ethics would logically lead to support for the development of these technologies. Why in the world would I be advocating their implementation or development in ways that would directly contradict the Objectivist ethics?

  9. Alright, well I don't know exactly what Hotu is advocating, whether or not he thinks survival at any cost is appropriate (I got a little of that sort of thing from his last post, but I'm certainly not willing to conclude that that is what he actually advocates).

    Transhumanism has radical life extension (radical as in the aim is to live healthily for however long one wants to continue to live, be it 10 years, 100, 1000, or longer) as an essential part.

    Other technological developments advocated by many transhumanists include:

    -New senses, or the ability to overlay additional information on top of our present senses. Examples: overlaying schematics on top of the actual motor you are looking at; seeing electromagnetic radiation outside the visible spectrum without having to use specialized goggles (perhaps by having that information sent to you by computers); or accessing computer resources natively (like having some sort of mathematical subroutine package connected to your brain, so that you could, say, produce numerical solutions to an equation natively and manipulate them in your mind much as one might manipulate a simple algebra equation in one's head).

    -The ability to change one's appearance at whim. Examples include the ability to display information directly on one's skin, or to change one's hair color or eye color by simply thinking about it, or something along those lines. Alternatively, some desire the ability to have novel bodily forms, like wings or whatever (I don't care for any of this except perhaps the information display).

    -The ability to improve one's memory, perhaps by being able to create a digital repository of all the sensory information that you have, or simply by genetic engineering to improve the functioning of memory. Also, one might expand one's total intelligence somehow as well.

    -The development of molecular manufacturing, with the aim of effectively eliminating scarcity in most goods. Since raw materials would need little processing, such a molecular manufacturing plant would be able to produce essentially anything with perhaps no more than dirt and energy as resources (this seems feasible, at least in the longer term of 50+ years). Development of such devices would be pretty revolutionary, and cause huge economic growth.

    -The development of artificial intelligence, from specialized AI (basically more advanced versions of the types of things we already have that analyze stock markets and the like) to general AI. One goal here is to create a totally artificial sapient life form (sapience is what people normally mean by sentience: self-aware and reasoning, along the lines of a human consciousness). I'm not sure about the usefulness of human-level AI, but more specialized artificial intelligences could take over operating factories, and even be able to provide various services like operating stores (like self-checkout machines, but complex enough not to freak out when you don't put something in the bagging area, able to look up prices of items, to recognize bananas, and the like). This could lead to a huge economic boom, as labor resources would no longer be limited by the number of humans, or how long humans can work, etc.

    -The most questionable I think is the desire to "upload" oneself into a computer, so that one might live indefinitely. Essentially, the idea is that one makes a scan of the brain and body in some way, and is able to convert it into a simulation able to be run by a very advanced computer, under the assumption that the simulation will still be the same person as it will (presumably) behave the same as the old person, etc. Alternatively, one might make "backups", so in case of catastrophic accident, one could be brought back only missing a few hours or days of memories (no worse than after a nasty concussion, say). All that has a lot of philosophical and scientific snarls in it, for example issues about volition, identity of a person, etc. and how all that would work. But that is one idea advocated by many (I'm iffy about whether it is even possible to perform such a scan in principle, and whether or not the "backup" version can really be said to be that person).

    Transhumanism, as I said before, is the belief that we should use technology to enhance our abilities, and that this use shouldn't have a limit except that imposed by what will serve our interests. Honestly, the only ones I could see an Objectivist advocating against are perhaps mind uploading and the more radical alterations in appearance (like wings and things like that). In any case, I hope this helps answer whyNOT's objection. Transhumanists generally advocate all of the above except mind uploading, and many advocate uploading as well (though not an overwhelming majority or anything). Those are the things in common. Many are altruists or utilitarians, or are some type of socialist or mixed-economy advocates. A large minority are pro-capitalists and individualists (though, as is common with society in general today, most are not rational egoists; I think there is a larger proportion in transhumanism than in the culture at large, though I have no idea how many are Objectivist types). But I think the fundamentals of transhumanism are fairly well summed up in advocating the majority of the technological developments described above.

  10. Maken: What, in your mind, is the prime value in Objectivist ethics? "To hold one’s own life as one’s ultimate value, and one’s own happiness as one’s highest purpose are two aspects of the same achievement. Existentially, the activity of pursuing rational goals is the activity of maintaining one’s life; psychologically, its result, reward and concomitant is an emotional state of happiness."- Ayn Rand, "The Objectivist Ethics".

    So either you think Objectivism would say that you should steal the guy's cure for cancer because "living" is the prime value, or you have to agree that having one's life as one's ultimate value and one's happiness as one's highest purpose would (as I think everyone here should agree) demand that you not steal his cure. If you agree with Objectivism that the initiation of force is wrong, then the answer to your proposal is that no, you shouldn't steal the cure even though it could save your life. Regardless, your question doesn't highlight anything in particular about transhumanism, as transhumanism is not a philosophy unto itself, but a particular position in philosophy advocating the use of technology to expand our abilities and extend our lives for our own benefit. Neither Hotu nor I have said that survival at any cost is the goal in either Objectivism or transhumanism (some transhumanists might say that, but I have never heard of one).

    I kind of think of it like this: Objectivists would fall in a "libertarian" place on a hypothetical political spectrum (by this I mean that, at the very least, everyone can agree that non-Objectivists would almost universally classify Objectivists as libertarians, whether or not the concept "libertarian" as well as "conservative" "liberal" and the rest are epistemologically justified, etc.). Similarly, an Objectivist is going to logically take positions in support of the development of any and all medical technologies to extend healthy life (with the obvious and I hope don't-actually-need-to-be-stated-explicitly restrictions against the use of force or fraud to achieve their development or use), as well as those that provide us with enhanced abilities of various sorts, etc. and so would be classified as a transhumanist by pretty much everyone else as a result of their positions. Now one can debate whether "transhumanism" is a valid/useful concept or not, just as one can about the names for various political orientations. But I think we can all agree that the support for the development and deployment of the sorts of technologies transhumanists call for (in the context of a free market absent any and all coercion) is an obvious application of Objectivist principles (whether or not one wants to call oneself a transhumanist).

  11. Some random queries: is it death we fear most, or deterioration of mind and body? before the life-extension becomes an attainable, affordable fact, is one not merely fantasizing and evading? does lengthening life automatically mean extending value?

    and yes, do you want to live forever? B)

    btw, I get very nervous when I hear any group or person talking about "improving life". I want to say > start with your own, chum, and leave me alone.

    Well, obviously I would never advocate forcing anything on anyone. So your last concern about forcing things on people who don't want it really isn't part of it, at least for me (and really, I think for an overwhelming majority of transhumanists).

    As for my fear: I like living, so I don't want to die (plus I think dying, the process of it, would be most unpleasant). Similarly I hate the idea of the deterioration of my mind and body. Sounds horrible. I view both as a great threat to my life (obviously, haha), and so the emotion I get when I think about situations where there is a great risk of death is anxiety (a form of fear).

    As for fantasizing/evading: I don't think it is. Look at where we are: we are just beginning to tap into the potential of bioengineering, a really new science, only about forty years old at best, and there have been some big breakthroughs even in the last 10 years. Also, looking at the rapid development in nanotechnology and the enormous pace of advance in computing (plus the fact that both of those are going to feed into each other and into bioengineering in the future), I think it quite likely we will see huge advances in the next 20-30 years. Aubrey de Grey estimates that there is about a 50% chance that, in about 25 years, we will see a first round of anti-aging technologies able to add 30 years to a middle-aged person's life expectancy (someone who is, say, 55 at the time of treatment). That is to say, someone who is 30 now has, he thinks, a 50% likelihood of seeing those treatments by the time he is 55, and could then expect to live past 100 (and by the time he gets back to roughly the same distance from expected death, the treatments will have advanced significantly more, so he thinks such a person will not have to worry about dying from aging). If you are, say, 50 currently, then in his estimation you might be able to make it through to "longevity escape velocity", as he calls it, if you are lucky and very healthy. So I think there is reason to be cautiously optimistic, especially if you are someone who is still quite young (like me, at age 20) and physically fit.
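
    To make the "escape velocity" arithmetic concrete, here is a quick toy calculation in Python (my own illustration with made-up numbers, not de Grey's actual model): each calendar year you age one year, while medical progress hands some fraction of a year of life expectancy back. If that fraction ever reaches one, your remaining life expectancy stops shrinking.

        # Toy model (illustrative only): each year, aging costs one year of
        # remaining life expectancy, and medical progress adds `gain` years back.
        # gain >= 1.0 is the "longevity escape velocity" condition.
        def remaining_expectancy(age_now, years_ahead, base_expectancy=80.0, gain=0.0):
            remaining = base_expectancy - age_now
            for _ in range(years_ahead):
                remaining += gain - 1.0
            return remaining

        for gain in (0.0, 0.5, 1.0):
            left = remaining_expectancy(30, 40, gain=gain)
            print("gain=%.1f: a 30-year-old has ~%.0f years left at age 70" % (gain, left))
        # gain=0.0 -> 10 years; gain=0.5 -> 30; gain=1.0 -> 50 (never shrinks)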

    Obviously, it isn't a sure thing, but there is at least a meaningful possibility of this happening within our lifetimes. And with that, I really don't think it is evading to think about such things.

    A longer life isn't always a better one. But it could be, and not living certainly isn't going to let me have any more values, so I'd like at least the option of living as long as I like.

  12. Let us look at an actual description of transhumanism, and see if people really find it objectionable:

    "Transhumanism is an international intellectual and cultural movement supporting the use of science and technology to improve human mental and physical characteristics and capacities. The movement regards aspects of the human condition, such as disability, suffering, disease, aging, and involuntary death as unnecessary and undesirable. Transhumanists look to biotechnologies and other emerging technologies for these purposes."--Wikipedia article on Transhumanism.

    What Objectivist doesn't view disability, suffering, disease, aging, and involuntary death as undesirable? For any rational person it's a no-brainer. Given the history of technology, I think it is safe to say that disability, disease, aging, and quite possibly involuntary death are unnecessary, and that suffering (and definitely involuntary death) can be massively reduced through the advance of technology and industrial society more generally.

    Another quote from wikipedia: "While many transhumanist theorists and advocates seek to apply reason, science and technology for the purposes of reducing poverty, disease, disability, and malnutrition around the globe, transhumanism is distinctive in its particular focus on the applications of technologies to the improvement of human bodies at the individual level. Many transhumanists actively assess the potential for future technologies and innovative social systems to improve the quality of all life, while seeking to make the material reality of the human condition fulfill the promise of legal and political equality by eliminating congenital mental and physical barriers. Transhumanist philosophers argue that there not only exists a perfectionist ethical imperative for humans to strive for progress and improvement of the human condition but that it is possible and desirable for humanity to enter a transhuman phase of existence, in which humans are in control of their own evolution. In such a phase, natural evolution would be replaced with deliberate change."

    Sounds great to me! I don't advocate making radical life extension one's number one priority. Not at all. It isn't mine. I like physics, so I am going to be doing physics (and maybe throwing in some philosophy too), which doesn't really apply all that well to life extension. But understanding the world better will eventually yield practical benefits, and I still haven't decided which area of physics I want to go into (some are more directly related to the advance of technology than others). I incorporate my goal to live indefinitely into my life by trying to get at least somewhat physically fit, by financially supporting such research in a small way once I have stable employment, and by being an advocate in the culture.

    Really, I think the only complaint an Objectivist could have with transhumanism is the same one Objectivists often level against libertarianism: that it isn't an epistemologically justified concept, and that it is too diverse for one to adopt the label without endorsing views which are terrible. And honestly, that is a discussion I don't want to have. Luckily, no one here has raised it. Some transhumanists are horrific politically (and ethically), some are quite good, and some are Objectivists (to be clear, I am uncertain about one or two parts of Objectivism and am in agreement with the rest, so I am a student of Objectivism, or Objectivist-esque, or very heavily influenced by Objectivism, or whatever you want to say to denote such a position). But barring the issues one might raise that are essentially the same as those with libertarianism, I think being a transhumanist in the sense outlined above is pretty much a logical consequence of the Objectivist ethics. To be clear: by this I mean that the desire to use technology to combat disability, disease, suffering, aging, and involuntary death (even applying our technology to change our bodies, as well as using it to extend our abilities beyond what is possible to us today) is a logical consequence of the aim of living as a human being.

  13. Grames, I think you are a little off here. I do make myself better. I am currently trying to change my diet in order to lose weight, become more fit, and be physically attractive. I am looking forward to looking in the mirror and seeing the results of my efforts: a body that is fit and healthy rather than very overweight. If I change my digestive tract to be more efficient, or make my heart healthier, or repair damage that is a consequence of aging, I am doing something in the same vein.

    The point is that wanting to make your body work better so that you can live longer, healthier, and happier is the whole point of good diet and exercising, as well as of all medical technology. Do you really believe working to fix the problems that result from aging, through intervention or genetic engineering etc., is a bad thing? Do you think using technology to enhance your abilities is wrong? You seem to be missing the forest for the trees. Or in this case, missing the content because you have problems with the presentation. Warning against certain bad premises which might arise in the discussion is valuable. But that doesn't mean you should denounce the very idea of advocating radical human life extension and related technologies. Why would you? If you like living, why not live longer? I for one know that I would love to do a lot of things, going to other planets for example. But I will not be able to with my current life span. There are experiments that would be absolutely impossible to do within the next hundred years (probing the Planck energy, for example). I don't see why someone in good health with values to pursue would purposely choose to grow old and die if they had the option not to. Science may give us the ability to have that choice. And that, to me, is something worth working and advocating for.

    Also, sorry if there are spelling errors, I typed this on my phone.

  14. I classify myself as a transhumanist, for similar reasons (though not his definition of freedom) to Hotu Matua's. I like living. I want to continue to do so. As of right now, it is biologically impossible without new medical interventions for me to live beyond 130 or so, and more realistically I can expect to live 80 years without new advances in medicine. I don't like that. I want scientists to figure out how to fix the damage being done by my metabolism all the time. By damage, I mean the buildup of waste products and assorted changes in my body at the macro, cellular, and molecular levels which negatively impact me; examples range from purely cosmetic concerns like wrinkles (which I don't have, since I'm only 20) to more disastrous things like brittle bones, diabetes, Parkinson's, Alzheimer's, heart disease, cancer, arthritis, etc. If we can fix the damage to our bodies that accrues with time simply by being alive, then we can expect to live longer. And the longer we live, the better we'll get at fixing damage (thanks to the progression of knowledge and technology), and so the longer we'll be able to continue to live. The ultimate goal is to get medical science advancing fast enough that I get an extra year of healthy life expectancy for every year I am alive. That way, I stay roughly the same distance away from death, and can continue to do so as long as medical science advances.

    This talk of damage doesn't have any moral quality to it at all. People do not have "original sin", but rather are physical creatures whose bodies act in a way to keep them alive. But the body doesn't maintain life indefinitely, even barring catastrophic accidents, and so there is no problem in saying that it can be made better (that is, better at its job: keeping us alive).

    In a similar vein, I think it would be great to be able to access the resources of a computer internally, without having to type and use my hands, by controlling it with my mind (or more properly, brain). Rudimentary versions of such technologies already exist and help some disabled patients, for example those who are "locked in", to communicate with the world. One example would be for someone like Stephen Hawking, whose mind is perfectly functional, to communicate in almost real time, rather than having to prepare for hours and hours for a short interview. We can't do it yet, but we are getting there. Eventually, technologies like that should enable me to, for example, overlay information about the world onto my vision and manipulate machines directly. One use might be for an engineer to look at a structure and see the stress points directly, through, say, some sort of color map, or to overlay a diagrammatic view of an engine onto the engine itself, things like that. One day, we might even be able to pack little knowledge modules in our heads (I'd love to have Mathematica available 24/7, so I could simply look at an equation on a board and know its integral, or be able to instantly make a phase-space diagram of it and test out numerical solutions without having to go to a computer and type what I want in). Such things aren't simply science fiction; one can envision them as significant advances on things we're already doing on a limited scale.

    That's really the whole point of transhumanism: the use of technology to make our lives better, and to change ourselves in ways that enhance our abilities, with the ultimate aim of making our lives better. I will grant you, some transhumanists are horrible, and while they are all opposed to a mind-body dichotomy in the supernatural sense (well, all I've ever seen, anyway), some are downright hateful of their bodies. Others have horrible moral views, mostly utilitarian and altruistic. Some sub-groups in transhumanism are better than others. For example, Extropians are committed to reason, individualism, rational self-interest, and capitalism (I classify myself as an Extropian in particular). In any case, I don't see anything wrong with transhumanism as a general idea: the commitment to use technology to expand human abilities and make our lives better, without qualification, including the use of technology to change our own bodies; indeed, its distinguishing characteristic is its insistence that such a thing is desirable. I, too, think it is a view demanded by Objectivist ethics. I think an Objectivist rationally has to be a transhumanist, but not very many transhumanists are going to be Objectivists (honestly, the majority are altruists and whatnot).

  15. After some thought, my way of reconciling all this is to say that, at any level at which nature might be said to be deterministic, people, and any sort of object anyone ever interacts with, do not exist. That is, if subatomic particles were to behave deterministically (which we don't know anyway), this gives us no information or insight into anything that affects us, because even something like a proton simply does not exist at that level (it is made of quarks and gluons). Any approximation of any kind would destroy the determinism. And even determinism does not entail predictability (no matter what, only a finite amount of information, to a finite degree of accuracy, can be learned about any system of any size at all, including the lowly proton or a baseball). We are left without predictability (as our prediction will eventually, and oftentimes quite quickly, diverge from reality), and the "determinism", if it exists, is forever inapplicable to anything we could deal with in our lives (even protons and neutrons) and will remain forever beyond demonstration (as it is both inapplicable and impossible to demonstrate with any system of any size, in the long run; we would have to settle for a statistical determinism at best, and this too would likely become simply meaningless as the breadth of possible outcomes grows enormous with time).
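
    To illustrate the point about predictions diverging, here is a standard toy example in Python (my own sketch, not a model of any physical system discussed here). The rule is perfectly deterministic, yet two starting values that agree to ten decimal places, far better than any possible measurement, produce completely different trajectories within a few dozen steps.

        # Logistic map at r=4, a textbook chaotic system. A 1e-10 error in the
        # initial condition roughly doubles each step, so prediction fails fast.
        def logistic(x, r=4.0):
            return r * x * (1 - x)

        x_true, x_pred = 0.3, 0.3 + 1e-10  # "true" state vs. best-possible measurement
        for step in range(1, 61):
            x_true, x_pred = logistic(x_true), logistic(x_pred)
            if step % 15 == 0:
                print("step %2d: true=%.6f predicted=%.6f error=%.6f"
                      % (step, x_true, x_pred, abs(x_true - x_pred)))
        # By around step 45 the "prediction" carries no information about the
        # true state, even though the dynamics are fully deterministic.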

    I'm not certain, but I think it likely that a determinism which is in principle inapplicable to anything we can deal with, and under which no method that could ever be invented by any being of any kind could show that the future is determined by present conditions, is essentially harmless. And since this is true of all determinism, determinism as such should be harmless for those reasons (and we don't even have any reason to think determinism is true anyway at this point). Essentially, I think a free will in the Objectivist sense and this almost totally useless determinism (whose uselessness is necessitated by our experience with chaotic systems and by logic, about the nature of gathering information about a system, for example) could be compatible. If one is to reference a person at all, one has to describe him in terms of choices and free will; and if one were to try to uphold determinism, no people or things or ideas exist, and demonstrating the determinism is impossible due to the need to disturb an entity in order to measure it (and thus accuracy will always be limited, even in principle). Essentially, to say that the universe is deterministic would be to discard everything we interact with, all our concepts, even in principle, not by some logical requirement but from within the very science that the determinists hold demands determinism; in the deterministic description, all our concepts are totally inapplicable and indeed incomprehensible (as subatomic particles respect no hard boundaries between, say, a proton and a neutron, or between one cell and another, or between my body and the air, or between the Sun and the Earth, etc.).

    This necessary divide between what we experience and any determinism which someone might wish to say exists (a divide internal to the demands of a deterministic description, not derived from any epistemological idea about the origin of concepts, but only about whether they can be said to refer in a given context at all, which I think a determinist would agree with) shows, I think, that any attempt to draw conclusions about our world from the assumption of subatomic particle determinism is misguided and indeed impossible (as a description of any experience will of necessity be nondeterministic).

    I'm not saying determinism is true, at all, but I think some sort of compatibilism might be (with the considerations I gave above). Indeed, perhaps the above argument shows that any assumption of determinism is forever unprovable by the demands of a determinist himself, since prediction to the degree of accuracy necessary is forever impossible (and so a nondeterministic universe can't be disproven from empirical observation).

  16. Grames:

    I am very interested in the topic of the philosophy of mind (and obviously as a part of that the relation between free will and the law of causality, and how free will might actually work). Grames, I have found your posts in the past quite insightful, and I remember the one you reposted above. My question is this: What exactly do you mean by "there will always be a physical explanation of how [the event in question] happened in terms of physical necessity"?

    By this do you mean that the explanation will say "this is what must have happened because that is what happened" in some sense? The laws of physics, for example, are only true because they, to our knowledge, accurately describe the behavior of particles. If we saw a particle behave differently, then the laws of physics would change. By this I mean that we would have to come up with a new way to describe the activity in a formal way that is consistent with what we previously knew and what we now know; similar to how general relativity accurately describes everything we see (well, except for the things covered by "dark energy" and "dark matter", which to me are at best hypotheses, not actual things we know exist, as so many astrophysicists assert) and is totally indistinguishable from Newton's theory of gravity in the limit of non-rotating bodies in low-strength gravitational fields moving at slow velocities (to within relatively high margins of error, for observations up until the 20th century). And so we had no explanation for, say, the precession of the perihelion of Mercury under Newtonian mechanics, but with GR, we have an explanation in terms of physical necessity. And if we found something new which couldn't be reconciled with GR, we would then find a new explanation in terms of physical necessity.
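
    As a side note, the Mercury example can be checked numerically: the standard first-order GR formula for the perihelion advance is 6*pi*G*M / (a*(1-e^2)*c^2) radians per orbit. Here is a quick Python sketch with rounded textbook values (my own illustration, not something from the thread):

        import math

        GM_SUN = 1.327e20     # G*M for the Sun, m^3/s^2
        C = 2.998e8           # speed of light, m/s
        A = 5.791e10          # Mercury's semi-major axis, m
        E = 0.2056            # Mercury's orbital eccentricity
        PERIOD_DAYS = 87.97   # Mercury's orbital period

        dphi = 6 * math.pi * GM_SUN / (A * (1 - E**2) * C**2)  # radians per orbit
        orbits_per_century = 36525 / PERIOD_DAYS
        arcsec = math.degrees(dphi * orbits_per_century) * 3600
        print("GR perihelion advance: %.1f arcsec/century" % arcsec)  # ~43
        # Newtonian gravity from the Sun alone predicts zero anomalous advance;
        # the observed ~43 arcsec/century was the residual that GR explained.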

    In the case of the brain, then, if the particles of the brain move in certain ways, then that is how they had to move given reality at the time, and we will, I think you are saying, be able to come up with an explanation in terms of physical necessity for their motion? Doesn't that essentially imply the causal closure of the physical? By this I mean that physical things are not affected by mental things (as mental things are, presumably, not physical), and so whatever the mind is, it would be ineffectual in the workings of the world (or at best an epiphenomenon). Essentially, you are saying that we will be able to describe the behavior of particles in the brain without reference to the mind, is that right? If an action by an entity was necessary, that means to me it could not have been otherwise given the conditions (and so if all physical events are necessary, there is no free will). Or do you mean to say that the explanation is not in a sense true? That it was not in fact necessary in the sense I use it above?

  17. Unsinkable Captain of Spoofs

    As the topic title suggests, I found this article disturbing. Not because it's about Leslie Nielsen, who I found could be quite funny (and whom I loved in his serious role in "Forbidden Planet", a great classic science fiction movie), but rather because of what the author says. It sounds, particularly in the latter half, strikingly Toohey-ish.

    Examples:

    "No, the uniqueness of Leslie Nielsen is inseparable from the nonspecialness of much of his career, his brilliant lack of distinction. "

    "Looking back, it is easy to see that the times required someone like Leslie Nielsen: a handsome silver-haired gentleman of fatherly demeanor willing to commit and submit to any kind of indignity without losing his cool."

    I don't know, it seems wrong on some level. Leslie Nielsen sure was great at deadpan, but I don't understand why someone would put it in such terms. There were a couple other examples in the article, so I suggest you read it (also for context of the above statements, obviously). Does anyone else get a similar impression from this article?

  18. I decided that I needed to write a short but coherent expression of my personal philosophy/Objectivism, in order to help clarify it in my own mind and so I can better express it to others. I ran into a snag (haven't gone farther; this is the first snag) at the idea of "truth". When I say "I am certain that X is true", it means that within the context of all knowledge available to me (all my concepts, all sense-data I've ever acquired, etc.), "X is true" is the only conclusion available to me that doesn't involve either a) rejecting the information from my sense-perception or b) making an arbitrary assertion (that is, an assertion which I have no evidence for whatsoever). And by "X is true" I mean that "X" genuinely refers to some aspect of reality, correct?

Now, is that "genuinely refers" contextual? That is, is there essentially something akin to error bars around my assertion, saying that within a certain context (i.e., size range, speed range, location in space, a certain level of accuracy in measurement) the statement is correct? So, for example, Newton's law of gravitation was contextually the only conclusion you could come to without either a denial of sense-perception or an arbitrary assertion. However, we now know that it wasn't quite right (i.e., F ≠ Gm1m2/r^2 when measured exactly). So was Newton correct, or justified in his certainty, but not true? Or was he true once and now he isn't? Or is he still true, and so is Einstein, but in different contexts? And if that is the case, how can they both be said to genuinely refer, when there is only one strictly correct ultimate description of the natures of all the particles that make up everything around us?

Basically, I am tripping up because there obviously are metaphysical facts about reality; there is an actual absolute, infinitely accurate (I suppose you could say) description of the behavior and nature of things. This is what most people take to be "true." Objectivism says that certainty is contextual, and so presumably truth would be as well, but then I have a hard time reconciling that with the correspondence theory of truth that Peikoff says Objectivism holds in OPAR, namely, that something is true when that is how it actually is in reality and a conscious entity recognizes that that is the case. It seems, then, like you will end up with a bunch of statements, all true, that say very different things about reality, for example Kepler's Laws, Newton's Law of Gravitation, General Relativity, and whatever comes next after that.

    Hm, or is this how it is:

1) If you use logic properly, given a certain range of data, there will be only one correct conclusion to make. Let's say the evidence is such that there is only one conclusion which does not involve either a) the rejection of sense perception (which is what it ultimately comes down to) or b) the assertion of something for which one has no evidence whatsoever.

2) This statement is then "certain" and "true." The truth is contextual, however, so it is strictly delimited to the range of evidence that spawned your generalization; i.e., Newton observed planets and comets, so he was in the clear for astronomical objects and the like, but his theory could never have been said to apply in the extreme environments near a white dwarf, a neutron star, or a black hole, or to objects traveling a thousand times faster than anything he'd ever observed. (The context can even be made quantitative; see the sketch after this list.)

3) That statement, with its error bars, is said to genuinely refer (that is, it is true) because a) within the precision of the statement (which is derived from how it was reached) it predicts what happens, and b) a proper method was used to reach it, i.e., it is the only conclusion one can come to given the data available when it was formed.

And so a valid induction/generalization/statement of fact will never be contradicted, because the context of the statement is strictly confined by the types of observations you used to reach the conclusion; within the ranges defined by that context it is accurate, and it is the only conclusion possible for that context that doesn't involve some arbitrary assertion or the rejection of some piece of sensory data at some level.
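To illustrate point 2 with actual numbers (my own gloss using standard physics figures, not anything from Peikoff or Harriman): the "context" of Newtonian gravity can be stated quantitatively as a weak-field, slow-motion condition,

\[
\frac{GM}{r c^2} \ll 1 \quad\text{and}\quad \frac{v}{c} \ll 1.
\]

For the Sun's field at Mercury's orbit, GM/(rc^2) ≈ 2.5 × 10⁻⁸, which is why Newton's law works so well there and why the deviation GR predicts (the perihelion precession) is so tiny.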

Is that correct? It sort of makes sense, but I want to make sure I actually got it right and there isn't some error in there; it's a little confusing. I suppose it depends on a particular theory of induction (well, maybe), but I'm working in the basic framework of the Peikoff/Harriman theory. I'd appreciate any feedback on this, as it is obviously a pretty important point to get completely straight in one's head.

  19. My evaluation of Peikoff's statement:

    1. This point is understandable, though it would still, in my understanding of his letter, mean that McCaskey is a bad person/not an Objectivist/severely damaging to the Institute and the Objectivist movement. So it is a denunciation, in any case.

2. I don't believe McCaskey demanded that exact letter; nor do I understand, even if he did, why Peikoff wouldn't have written a real response immediately and either given it to McCaskey or published it shortly after McCaskey resigned, in order to head off the obvious criticisms he should have known were coming and to actually explain his actions.

3. He hates McCaskey, won't explain why, and says he is an ignoramus, and that supposedly justifies his not dealing with him at all. Well, I don't get his calling McCaskey an ignoramus. In any case, this whole point is rather pointless.

    4. Not really new or important either. But it is a little weird he didn't say anything before.

Rest: His high estimation of himself is largely valid, though of late he has made some big errors in applying philosophy (no right to build a mosque, anyone? That was pretty off, in my opinion). However, he is outside of ARI now, and he should not be involved with it at all. He should not have the ability to throw people off the board because he doesn't like them, nor should he threaten (apparently, since it's his only power) to remove all rights to the works of Ayn Rand from ARI because they don't go along with what he wants. He should either be officially involved or have no influence on ARI at all; otherwise it gets all mushy and wibbly like it is now, which damages the Institute.

He doesn't explain how McCaskey actually disagrees with AYN RAND. No such argument has ever been given. Knowledge develops, our definitions of concepts change, the context of concepts changes as knowledge grows, etc., and so I think McCaskey's whole "spiral" idea could very well be compatible. And there obviously is a process where one does not yet have a crystal-clear concept but is using a fuzzy one to try to integrate information in order to spot new connections that can be used to clarify it, which I don't see as contradicting Objectivist epistemology, nor anything Ayn Rand said (at least not that I've read, and I've read almost all of her books, like VOS, ITOE, AS, TRM, etc., though none of "The Objectivist" or anything like that). Plus, "The Logical Leap" is NOT part of the Objectivist canon, as the canon was all published prior to Rand's death (with the exception of OPAR). And I never saw McCaskey sneer in any context, but simply criticize an academic work (well, a work of philosophy along broadly academic lines anyway), which is not at all an issue. If criticism of other Objectivists' work is impossible, then ARI is not an academic organization, period.

I don't think he is dictatorial (not generally, anyway), or an opponent of free speech. I simply think that he does not want ARI to be an academic institution, because an academic institution must be able to criticize work done by people within it. Unless the criticism clearly contradicts Objectivism (which I haven't seen any argument for, certainly not from Peikoff or Harriman), one can't kick anyone out over a dispute without losing the status of a true academic institution.

And oh boy, let's go after Craig Biddle and the Hsiehs: Biddle because he criticized Peikoff (who has never given an argument for why McCaskey is wrong), and the Hsiehs apparently because they dare to tell people about the various statements of people on all sides (they posted a link to Peikoff's statement too, so it's not as if they're being unfair up to this point). Peikoff is, apparently, incapable of making a mistake, nor does he have to explain his actions (that is, exactly how McCaskey ever criticized Objectivism). And so Objectivism is going to be torn apart as a cohesive movement because Peikoff is apparently too weak to remain silent but too strong to actually explain his actions. Fantastic.

  20. How does the occasional buzz, at times when I have made sure I won't be required to work or make important decisions, hurt my life? Are you alleging it permanently affects my mind and rationality?

It is purposeful obstruction of your capacity to think. Unless you can guarantee there won't be any emergencies or the like while you are drinking, intentionally getting drunk is dangerous and irrational. Since that standard is impossible to meet, one should not get drunk. I'm not saying it is wrong to have a drink, but it is wrong to get drunk (i.e., to drink to observable impairment, which one standard drink per hour can't produce unless you weigh something like 80 pounds).

    There is a significant difference between having a buzz and what Ayn Rand is talking about in the quote (an unconscious mind, in fact she's probably talking about a permanently unconscious mind). Just because someone has a buzz, that does not make them unconscious or unaware of reality. No one here is saying that drinking oneself into unconsciousness is fine.

Getting drunk is immoral, because when you are drunk your judgment is impaired. I have never said that having a drink is wrong, as having a single standard drink in one's system produces no noticeable impairment in functioning (again, unless you are very light: upon actually looking it up, under about 130 lbs for females and about 100 lbs for males). The average male won't have any negative effects if he limits himself to one drink per 1-to-1.5-hour period.
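For anyone who wants to check the arithmetic, here's a rough sketch using the standard Widmark approximation (the constants are textbook estimates, and the function and example numbers are just my illustration; nothing here is precise enough for real-world decisions):

# Rough BAC estimate via the Widmark formula -- a back-of-the-envelope
# approximation, not medical advice. All constants are textbook
# estimates; the function and the example figures are mine.

STANDARD_DRINK_GRAMS = 14.0   # grams of ethanol in one US standard drink
ELIMINATION_RATE = 0.015      # % BAC the liver clears per hour (typical average)

def estimate_bac(drinks, weight_lbs, hours, male=True):
    """Estimate blood alcohol concentration (%) after a number of
    standard drinks consumed over a span of hours."""
    r = 0.68 if male else 0.55            # Widmark body-water constant
    weight_grams = weight_lbs * 453.592   # pounds -> grams
    alcohol_grams = drinks * STANDARD_DRINK_GRAMS
    bac = (alcohol_grams / (weight_grams * r)) * 100.0
    bac -= ELIMINATION_RATE * hours       # subtract what has been metabolized
    return max(bac, 0.0)

# One standard drink per hour for an average-sized (170 lb) male:
# the running estimate stays around 0.01-0.04%, far below observable
# impairment -- roughly the point made above.
for hour in range(1, 4):
    print(hour, round(estimate_bac(hour, 170, hour), 3))

Run it with weight_lbs=80 instead and the numbers climb several-fold, which is where the caveat about very low body weights comes from.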

No one can be fully focused at all times; we all need rest beyond just the time we spend sleeping, and he found nothing wrong with people who choose to relax with the help of a substance, because they find it easier and more enjoyable.

There is a difference between thinking and being focused. Being focused is really about having the capacity to think if the need arises, and having a purpose for your actions at the time. Being drunk, by impairing judgment, thereby inhibits focus. One CAN be focused at all times, as focus doesn't preclude taking a break and doing something recreational, so long as one retains the ability to think if the need arises and has a purpose for one's actions. One cannot think all the time, I agree, which is where recreation comes in. But recreation should not interfere with focus (as focus is required for all rational action, and since all action should be rational, it is thereby required for all action whatsoever).

    edit: misspellings and a misplaced word or two.

21. And this is what caught Greenspan by such surprise that he admitted to making a mistake in his theories of economics. He trusted these people and other money types to have a clear-eyed view of their own long-term self-interest, but the short-term gains of a rising market are a powerful distraction to a country raised from elementary school onward to be pragmatist. It is not automatic that people are fully informed, rational, or do the right thing, even on average.

I agree with you there, but his theory didn't fail. If we had allowed the collapse to happen without intervention, the horrific pain of collapse (mass bankruptcies, unemployment, investment losses) would have caused a dramatic shift in priorities for companies on Wall Street. The bonuses so big that executives don't care what happens to their companies, and the extreme short-term viewpoint, would both have died a well-deserved death. By intervening, we set ourselves up for another, bigger fall.

  22. This sounds a little like Objectivism (because of the whole focused mind thing), but it isn't. Objectivism advocates for a focused mind as the means of achieving one's goals, making hard choices, creating great things, etc. However, that does not mean it calls for always lying in wait, like a damn deer or some kind of martial arts guru, ready to focus one's mind just in case something happens.

    "When man unfocuses his mind, he may be said to be conscious in a subhuman sense of the word, since he experiences sensations and perceptions. But in the sense of the word applicable to man—in the sense of a consciousness which is aware of reality and able to deal with it, a consciousness able to direct the actions and provide for the survival of a human being—an unfocused mind is not conscious."-VOS.

    "“Focus” designates a quality of one’s mental state, a quality of active alertness. “Focus” means the state of a goal-directed mind committed to attaining full awareness of reality. It’s the state of a mind committed to seeing, to grasping, to understanding, to knowing."-Peikoff, 'Philosophy of Objectivism Lecture', and a similar thing in OPAR. Both taken from the Lexicon article on focus.

I am quite confident that getting drunk or high will make it difficult or impossible to focus, that is, to be capable of applying your mind and having a clear purpose (as well as to deal as effectively as possible with reality, which is also vitally important). Rationality requires a clear mind; rationality is the fundamental requirement of a human life; therefore, a clear mind is a fundamental requirement of a human life. As a consequence, getting drunk or high is immoral; by this I mean it hurts your life. So while you might like to feel good or whatever, it represents a breach of Objectivist morality. I don't see how one could justify getting high or drunk recreationally except by attempting to reduce Objectivist morality to hedonism of some sort.

    Plenty of people (like Miss Rand) have jobs that allow them to not worry about having to perform at full capacity 24/7. They can get off work and make the safe assumption that they can enjoy a good buzz without any negative consequences whatsoever.

Intentionally fogging up your mind is necessarily dangerous. One should always be able to deal with any situation that might arise, and that requires a clear head.

    Smoking pot has never caused me to have even a single hallucination.

    One person said that there was nothing wrong with LSD. That was what I was referring to (I know that alcohol and marijuana do not cause hallucinations).

23. So are you advocating that no legal drugs with effects similar to marijuana's (and there are many) should be used? Using a drug for a clear-cut rational purpose (e.g., relieving stress) does not cause you to live irrationally.

If you have a clinical problem with stress that you need medication to resolve, then sure, go ahead (in extreme moderation). But if you are simply stressed out occasionally, you shouldn't turn to a drug; you should instead work on improving your own psychology in order to resolve the problem. Using a drug to relieve stress that is normal (not a medical or clinically psychological problem) is NOT rational. That is what I am saying.

24. I wouldn't even be asking this question; I'd have broken up with her within 24 hours, probably immediately. Taking drugs for depression (which appears to be a problem with the brain's functioning) and taking drugs to deal with stress are two different things. Stress has a purpose; if stress is such that you can't function on a regular basis because of it, you should seek clinical help, because you have a serious problem. Otherwise, you should be able to reorganize your life and/or think better so as to eliminate, or at least negate, much of the stress. Using a mind-dulling drug as a crutch to deal with stress (much like using alcohol for this purpose, or to be more sociable at functions) is counterproductive. One would be in a far better position working on one's social skills or stress-relief skills, or trying to make one's life less stressful; that is, learning how not to be dependent on a mind-dulling drug. Barring the presence of disease, there is no excuse, in my opinion, to intentionally dull your mind in order to get some sort of feeling.

This is not to say that you can't drink alcohol. The above argument would say that you shouldn't have more than one standard drink in each 1-to-1.5-hour period (approximately the time it takes your liver to process one drink's worth of alcohol). You can drink alcohol for taste, for example, but one should not drink in order to get drunk or "buzzed." That would mean intentionally unfocusing your mind (and making it difficult to impossible to reach full focus if needed), something one ought never to do if one wishes to live rationally. I don't see how one could ever enjoy smoking anything (as marijuana and tobacco smell terrible, for one thing), and at least with marijuana, I don't imagine it is possible to use it without getting the equivalent of an alcoholic "buzz." As a result, the recreational use of marijuana is wrong and should not be done under any circumstances.

As for those advocating the use of hallucinogens, that is an explicit desire to disconnect oneself entirely from reality, and as such is profoundly immoral; I thought that would be so obvious as not even to be in question.

I'll grant that I've always leaned toward the puritanical side of things, but I don't see any flaw in my reasoning. Nowhere in the arguments or context for the virtues is there room to breach those principles (particularly rationality and honesty, which explicitly require full focus) for recreation.
