Objectivism Online Forum

Could cybernetics be the next step in evolution?



DarkFather


A few days ago I was watching a short documentary about a British man who has been classed as the first human cyborg. He volunteered to have electronic implants inserted into his nervous system, and now, with the power of thought, he can control a robotic arm.

It is believed that this technology could help those who have lost limbs and those with disabilities. It could also benefit all of mankind by enhancing our senses and integrating our minds with computers to send and receive data through pure thought.

What do you people think of this technology?


I've heard about that. Nah, no big philosophical significance, but it's still pretty damned cool! I wouldn't mind plugging myself into the Library of Congress. ;-)

Re: your title, I'm not big on "next step in evolution" claims. Are artificial hearts a step in evolution? Were peg legs? Is aspirin? It just reminds me too much of movie reviews including the phrase "an emotional rollercoaster ride of thrills and chills!!!" -- overused and unnecessary. Evolution is one thing, self-improvement using technology is another, and the second is perfectly impressive without bringing in the first.


Actually, this is interesting. If you go to the 'Enlightenment Test' thread under the Miscellaneous section and scan the Transtopia site, you will read a lot about their interpretation of the future of evolution. I summarized it somewhere in that thread. They're a nutty bunch, but it is an interesting topic. They also link to another site which hints at the same thing: Psycho-Cyberntica.com (I think).

Anyway, their ultimate forecast for the human race is what they call the 'Singularity' or the 'Event Horizon', at which point one of two things will happen: 1) man's machines will become super-intelligent and treat man like farm animals a la the Matrix, or 2) man will be forced to merge with machines, thus ushering in a 'post-human' era. They refer to this as the end of genetic evolution (nature controlled) and the beginning of 'memetic evolution' (technology controlled).

It's their version of an Armageddon story, totally riddled with the malevolent universe premise.

But it is interesting. I often wonder what humanity would look like say 1000 years into an Objectivist society.


I wonder if the appeal of such scenarios is the possibility of realizing primacy of consciousness, i.e. a world of "pure thought" where all that is required to exist is an action of the mind. Many people seem to crave such a world -- maybe they are holding out the hope that some kind of psycho-cyber technology will make it possible.


Tall and angular? ;-)

I think you know what I meant. What would be the lifespan? Would there be immortality? What would genetic engineering accomplish? Those types of things. Whatever the answers, I don't see why some futurists who hypothesize these things always have to portray it with some post-Armageddon, man-vs.-machine type of scenario.

Well, maybe I do. If you hold a malevolent view of the universe, coupled with a distrust of technology and a view of man as inherently depraved, then it would follow.


It's their version of an Armageddon story, totally riddled with the malevolent universe premise.

I am a transhumanist, one of the people you're referring to, and you've completely misunderstood our position. It looks like I have a few things to explain.

Transhumanism

Transhumanists tend to be a rather diverse group, but I'll do my best to state the philosophy in my own words. Basically, transhumanists think that science can be used to improve the nature of the human condition, and that this prospect is desirable. The means of improvement has been proposed to consist of some combination of germline engineering, human-machine interfaces, pharmacology, nanotechnology, artificial intelligence, or other methods. Individual transhumanists have their personal favorites among these methods. For example, since I'm a biologist, I tend to be more interested in genetics.

The Singularity

Let's move on to the singularity. If you look at the progress of science and technology throughout history, it was rather slow until the last few generations, when it increased dramatically. A few people even characterize this growth as exponential. Now, if it is true that the growth of technology is exponential, we may be about to experience the part of the curve where it becomes almost vertical. This is the singularity. What happens after this point is anyone's guess, but the vast majority of transhumanists think the results will be beneficial, because we do not subscribe to the malevolent universe premise. Personally, I'm a little skeptical about the singularity. Most of the people interested in this idea work on the computing side of technology and have personally experienced Moore's Law, while the rest of us have only noticed it peripherally. I don't believe that extrapolating the progress of computer science to all science is particularly valid, though I do acknowledge the possibility of specific scenarios where this could be the case. So, even though I think the singularity is worth working towards, I'm not sitting around on my couch waiting for the techno-rapture.
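
As a rough illustration of why this kind of extrapolation produces such dramatic claims, here is a minimal Python sketch; the starting value and the two-year doubling period are hypothetical, chosen only to show the shape of the curve, not to model any actual technology.

```python
# A minimal sketch of the kind of extrapolation under discussion. The starting
# value of 1.0 and the two-year doubling period are hypothetical, chosen only to
# show the shape of the curve, not to model any actual technology.

def project(value_now: float, doubling_period_years: float, years_ahead: float) -> float:
    """Project a quantity forward assuming a constant doubling period."""
    return value_now * 2 ** (years_ahead / doubling_period_years)

if __name__ == "__main__":
    for years in (2, 10, 20, 50, 100):
        print(f"after {years:3d} years: {project(1.0, 2.0, years):,.0f}x today's level")
```

Over a decade the projection is modest; over a century it is astronomical, which is where the dramatic forecasts come from.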

Artificial Intelligence

Artificial intelligence is generally proposed as the catalyst for the singularity, though this doesn't necessarily have to be the case. Like the singularity, AI is contingent upon many conditions, and may not be possible. However, if a reasoning machine arises, it doesn't necessarily have to be hostile. In fact, since it would be capable of reason, it could understand concepts like rights and ethics, and might even be able to implement them more effectively than you or me.

In addition, since I define a human as a creature capable of using reason, I would also consider an AI to be human. As a result, it would possess the same rights as you or me, and I would be prepared to fight in their defense.

The hostile incarnation of AI is the one most seen in the movies, since it makes for dramatic plots. However, I think this outcome is unlikely to occur in the real world.

Transhumanism and Objectivism

Transhumanism is a rather broad philosophy, but most transhumanists share a lot in common with the average Objectivist. For example, while I don't currently consider myself to be an Objectivist, I admire a lot of Ayn Rand's work, and find that it frequently intersects with my own personal philosophy. However, there are quite a few people who do consider themselves to be both Objectivists and transhumanists.


Based on Thjatsi's description, Transhumanism is not a philosophy at all. It seems to be merely the belief that technology is good, connected with a few other unjustified beliefs (such as this "singularity" nonsense).

In any case, Transhumanism seems to be a floating abstraction. As a result, it is a bad idea to associate oneself with such a group/movement.


I'm finding it difficult to address your concerns, due to the brevity of your response. In the past, my solution to this problem has been to answer the questions I think are being raised. After I do this, the person I'm conversing with generally states that I misunderstood their position, and throws out a few new offhand remarks. This process repeats itself until I get sick of the dialog and discontinue it. I don't know if you're trying to play this game, or if I'm just jaded from attempting to communicate with postmodernists. In any event, I think we need to be more clear about the concepts we're using. Therefore, I would like you to define philosophy, and the criteria that give a series of linked concepts philosophy status.

Given our setting, I'm assuming that we're defining 'unjustified belief' as 'logically invalid' or 'not arrived at through the use of reason'. In this case, I would like you to explain which specific concepts make this error, and at what steps. If this is not the case, I would like you to define this phrase.

After all, if we can't be clear about what we're saying here, then where can we be?


People are accustomed to talking of Natural Selection and Evolution in the same breath, as though they were the same thing. Evolution has many mechanisms, and natural selection is a relatively underdeveloped theory that pertains to biological ascendancy.

Human beings have transcended Natural Selection as their primary means of evolution through the creation of society. It is the ability to use tools and technology to adapt to our environments, and social awareness to communicate with each other, that has allowed us to define ourselves through our decisions as groups. The phylogenetic make-up of humanity can now be changed at a whim -- for instance, Hitler had a marked effect when he destroyed a large part of the Jewish population, and in Australia numerous Aboriginal bloodlines no longer exist, or are contaminated.

Rather than diving into the deep end and offering various technological modifications to humanity at large, I would share it between people of a common cause, keeping it unadulterated and for the values they share. The idea of people demanding technological upgrades off the welfare system disgusts me.


A philosophy is a full system including Metaphysics, Epistemology, etc...

Based on your description, Transhumanism is merely a single ethical claim -- namely, that technology is good -- plus a small collection of beliefs about what is coming in the future. This is why I call Transhumanism a floating abstraction: it contains only one philosophical view.

As for your other question, I define "unjustified," in this case, to mean that you have not justified these beliefs. As a matter of fact, you have hardly defined them...

"Singularity," it seems, depends on the idea that human production may be pre-calculated with exponential functions. You have defined it as the point where the function grows very fast. Even if we could rely on human production to keep with the exponential function, the implications of what you call "the singularity" are merely that we will get better technology very often. As you say, what that technology will do is anyone’s guess.

My main point though, is that while the set of beliefs held by Transhumanists may be completely correct, that set of beliefs is a floating abstraction, and one should avoid associating oneself with it.


A philosophy, such as Christianity, Kantianism, or Objectivism, is a set of principles regarding the nature of existence, of man, and the relationship between the two. No man can live without a philosophy, even if it is only an implicit hash combining the fourth-grade teacher's moralizing, that old ketchup commercial, and one's partially digested conclusions regarding the final acre of rainforest that was chopped down last year.

Any body of ideas must ask and answer questions in each of five branches: metaphysics, epistemology, ethics, politics, and aesthetics. Basically: what is reality; what can man know about reality, and how can he know it; how ought man to view his choices, and what is the proper purpose of his life; what rules ought to govern how man may interact with his fellow men; and how can literature, poetry, painting, sculpture, and music inspire man.

In certain contexts, one can use the word "philosophy" to mean something much narrower, such as a manager's business philosophy or a programmer's software development philosophy. This is a looser use of the term than is normally meant, which is fine, so long as all parties don't think that object-orientation is a philosophy to guide man's thought and actions in life!

A belief in a "singularity", the meaning of which is anyone's guess, constitutes faith in the arbitrary.

I'd like to dissect one quote, because I think a lot hangs on it.

Basically, transhumanists think that science can be used to improve the nature of the human condition, and that this prospect is desirable.
Most Objectivists will be suspicious of this (as I think you said you've seen elsewhere, and as we've seen here). The reason is that, obviously, transhumanism is more than just that. Let me illustrate my point with a reductio ad absurdum.

"Carpetists tend to be a rather diverse group, but I'll do my best to state the philosophy in my own words. Basically, carpetists think that clean carpeting, free from mold and soil, with fibers lifted up straight, is the best-looking and most comfortable to walk upon."

First, the diversity of the group belies the claim that transhumanism is a philosophy. Objectivism is not a diverse group: we are all rational, proud, selfish, productive, independent, etc.

Second, this example escalates the importance of something to an unwarranted degree. Who would stand for dirty, smelly carpeting? But that's not the point. Who would dedicate his life to oppose carpet mites as such?

If transhumanism has a meaning, then what is it? The word itself implies to me that it's about ceasing to be human as I (or someone?) transition to a different kind of being. I am not sure this is desirable, but I guess I would have to have a clear definition. I think it is not appropriate to use this word to mean merely a career in biology, medicine, or prosthetics. And, as per my point above, if the word is appropriate, it must mean something big and broad. One way to define it is: "Transhumanism is the belief that man will meld with machine, and become a new kind of being."

Without a clear concept of such "melding", I think it would not be appropriate to coin such a word, much less say that one is an adherent to it.

most transhumanists share a lot in common with the average Objectivist. For example, while I don't currently consider myself to be an Objectivist, I admire a lot of Ayn Rand's work, and find that it frequently intersects with my own personal philosophy

I couldn't think of a better way to set an Objectivist's teeth on edge.

First of all, we're all too familiar with the libertarians. When they're not outright claiming that their view is the same as ours, they claim to have a lot in common. A sports car and a washing machine are both covered in sheet metal skins with baked-on paint.

In any case, "similarity" must properly imply that the essentials of the two things are the same. What the libs mean, and what I take it that you mean, is that many of the superficial things, many of the concretes are the same. The libs want lower taxes, the Objectivists want lower taxes, ergo the two are similar.

The problem is that Objectivism is an integrated whole, and its essence is the concept of objectivity which it shows must be integrated with one's view on everything. There is nothing else similar to this; either one integrates it, or one does not. If one reads all about it, and adopts some pieces/parts, that is as anti-objectivist an approach as I could think of.

Finally, there is your choice of the word "intersect". I think that implies 10th-grade math. You have two "sets" consisting of arbitrary lists of things, and then one can perform the union or intersection operations. Of course, in math, there was no principle to govern what ought to be in a particular set. One had bread, milk, cheese, and a Chevrolet Corvette Z06. The other had a Mustang, a Corvette, and a book. Intersection was simply manipulation of arbitrary symbols.

You may say that I am reading too deeply into this. Perhaps I am. But my point is that if one takes ideas seriously, one does not speak of two incompatible philosophies in terms of set theory and the intersection of a few particular concretes.

One does not look at philosophy like a Chinese menu: one premise from column A and one from column B.


A poster stated that, though he does not consider himself an Objectivist, he does admire a great deal of Ayn Rand's work and finds some commonality of such with his own philosophical views. In response, "Bearster" replied:

[I couldn't think of a better way to set an Objectivist's teeth on edge.]

No. A better way to set "an Objectivist's teeth on edge" was the harshness of the reply which "Bearster" made. The Objectivist philosophy embodies a benevolence towards men, and we do not automatically treat as the enemy someone who admires Ayn Rand but has not fully accepted her philosophy.

"Bearster" took an innocent remark -- an expression of admiration and commonality of ideas -- and transformed it into a commonality of "superficial things" and likened the poster to "libertarians" who also "claim to have a lot in common." Personally, I have no idea if that person who admired Ayn Rand is a libertarian or not, nor do I know if the commonality he feels is for superficial things, but I do know, with complete and absolute surety, that such judgments are not contained in the words which "Bearster" responded to.

In further response to the poster, "Bearster" also claimed

[If one reads all about it, and adopts some pieces/parts, that is as anti-objectivist an approach as I could think of.]

That would be a great surprise to Ayn Rand, who, though she disagreed with much of Aristotle's philosophy, admired and accepted "some pieces/parts," especially Aristotle's epistemology.

The main point here is that, although it is certainly true that Objectivism is a completely integrated philosophy, not all people grasp all parts immediately, nor do they instantaneously integrate the philosophy as their own. Such people are not to be assumed to be our enemies, and they should not be automatically likened to libertarians or assumed to have only "superficial things" in common with Objectivism. Objectivism is a philosophy devoted to life, reason and value, not a schoolmaster's rod to automatically reprimand any innocent soul who wanders into the school.


Now, if it is true that the growth of technology is exponential, we may be about to experience the part of the curve where it becomes almost vertical.

Did he just contradict himself? An exponential curve y=e^x has slope at every point (x,y) of dy/dx=e^x. Meaning, the curve is "almost" vertical only where its slope is "almost" infinite, and the slope is "almost" infinite only where x is "almost" infinite. The curve always grows and continues to grow faster and faster, but it never approaches vertical as long as x is finite. :nerd: Thus, there can be no singularity.
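
To put the same calculus point numerically, here is a small Python check (purely illustrative): the estimated slope of e^x agrees with e^x itself and stays finite at every finite x, so the curve never actually goes vertical.

```python
# Numerical check of the calculus above: the slope of y = e^x is e^x itself, so it
# is large but finite at every finite x -- the curve never actually goes vertical.
import math

def slope(f, x, h=1e-6):
    """Central-difference estimate of f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

for x in (0.0, 1.0, 5.0, 10.0):
    print(f"x = {x:4.1f}: estimated slope = {slope(math.exp, x):.4f}, e^x = {math.exp(x):.4f}")
```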

Printed using 100% recycled electrons.

Proton/neutron waster. :angry:


  • 2 weeks later...
natural selection is a relatively underdeveloped theory that pertains to biological ascendancy.

That's quite a claim. If by underdeveloped you mean "still has a lot of prima facie mysterious brute facts to explain," then you are correct.

If you mean "just another convenient place-holder until a better explanation arises for the development of life systems into greater levels of complexity and self-organization," then you are ceding territory to Creationists that they do not deserve.

Also, how are you defining "ascendancy"?


My guess is that extending life spans even to 150 years will be devastating. Where you gonna put all these super people, how you gonna feed them all. World economy will topple, Social security for sure. Will there be people-making factories, and if so, when do you begin implanting: 5 years old, 10 years, or only if something breaks. Who pays for it, or will it be for the rich only. Will it cost an arm and a leg for an arm and leg :P


My guess is that extending life spans even to 150 years will be devastating.  Where you gonna put all these super people, how you gonna feed them all.

Thomas Malthus said the same thing when world population was under a billion and the average life span was under 30 years of age.

Malthus was wrong.

If you don't see why, read The Ultimate Resource by Julian Simon.


My guess is that extending life spans even to 150 years will be devastating.  Where you gonna put all these super people, how you gonna feed them all.  World economy will topple, Social security for sure.

It has been shown that as societies advance in technology/education, population growth decreases. I'm sorry that I don't have any actual statistics available to me at the moment, but I'm sure they shouldn't be too hard to find.

Also, it is my contention that a man who expects to live 150 years will not be in as much of a hurry to procreate (among other things, like marriage, retirement, etc.) as a man who only expects to live 75 years. If the average person starts waiting until around 40 or 50 to have children, population growth will definitely decrease (this assumes that we will also have technology available to extend child-bearing years as well).
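
As a back-of-the-envelope illustration of that point, here is a small Python sketch using the standard demographic approximation r = ln(R0)/T; the fertility and generation-length figures are hypothetical, chosen only to show the direction of the effect.

```python
# A back-of-the-envelope sketch of the point above, using the standard demographic
# approximation r = ln(R0) / T, where R0 is net offspring per person and T is the
# generation length in years. The fertility figure is hypothetical.
import math

def annual_growth_rate(offspring_per_person: float, generation_years: float) -> float:
    """Approximate long-run annual population growth rate."""
    return math.log(offspring_per_person) / generation_years

for generation in (25, 45):
    r = annual_growth_rate(1.25, generation)  # 2.5 children per couple = 1.25 per person
    print(f"generation length {generation} years: ~{100 * r:.2f}% growth per year")
```

With the same number of children per person, stretching the generation length from 25 to 45 years cuts the annual growth rate roughly in half.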

P.S. Social security will go bankrupt whether population growth escalates to uncontrollable levels or not. Good riddance. (But I still want my money back :P )


Thomas Malthus said the same thing when world population was under a billion and the average life span was under 30 years of age.

This may seem like a nit-pick, but there is a significant difference between "average life span" and "average life expectancy" (though they are widely conflated).

"Average life span" is the description of the upper limit of years on earth that a healthy member of the human species can expect to get. In that sense, the average life span has not changed from the start of civilization until today. There were 105 year-olds in ancient Sumeria just as there are today. What has changed, and rapidly with the onset of industrialization, is "average life expectancy"--that is, how many human beings can expect to live out a time on earth that approaches this upper limit.

What is new, and what bio-tech will only reinforce, is an actual increase in the upper limit of years that define the human life span. If the secret of reversing cell death is acquired, then humans will face the prospect of previously inconceivably long lives.

As you point out, Malthus has been disproven again and again, and I don't think this will undermine our ability to use resources ever more efficiently and sustain life on earth. I do think, however, that increasing the human life span, as opposed to average life expectancy, may have some impact on the existential perspectives that human beings adopt towards their lives, their attitudes toward death, and ultimately the kinds of choices that can dictate the overall happiness or unhappiness of a life (for example, people might be more likely to say, "I can always make amends later" or "another girl like that might come around").


"Average life span" is the description of the upper limit of years on earth that a healthy member of the human species can expect to get.
Not really. You're talking about maximum life span. Biogerontologists make a distinction between maximum life span and average life span, which is roughly equivalent to average life expectancy.

Now, you can contrast average life expectancy at birth with average adult life expectancy. When people say that life expectancy has been under 30 for the vast majority of human history, they mean life expectancy at birth. The lucky ones that made it to adulthood usually lived to 40 or 50, although a rare few did make it to very old age, as you said.
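
To make that distinction concrete, here is a toy Python example with an entirely made-up cohort: heavy childhood mortality drags life expectancy at birth below 30 even though the adults in the same cohort typically died in their 50s.

```python
# A toy cohort illustrating the distinction above. Every number is invented purely
# for illustration: heavy childhood mortality pulls life expectancy *at birth*
# below 30 even though adults in the same cohort typically died in their 50s.

# (age at death, number of people) for a hypothetical cohort of 100 births
cohort = [(2, 40), (10, 8), (45, 28), (60, 19), (75, 5)]

births = sum(n for _, n in cohort)
expectancy_at_birth = sum(age * n for age, n in cohort) / births

adults = [(age, n) for age, n in cohort if age >= 15]
survivors = sum(n for _, n in adults)
adult_average = sum(age * n for age, n in adults) / survivors

print(f"life expectancy at birth:       {expectancy_at_birth:.1f} years")   # ~29 years
print(f"average age at death of adults: {adult_average:.1f} years")         # ~53 years
```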

What is new, and what bio-tech will only reinforce, is an actual increase in the upper limit of years that define the human life span.
Maximum life span has not increased. It has remained around 110 years throughout history (with a few outliers: the highest made it to 122).

Average life span (life expectancy) has increased spectacularly in the last century: in America it went from about 45 in 1900 to almost 80 today. However, this has been accomplished with no retardation of the basic rate of aging. It reflects improvements in medicine, sanitation, and diet. But even if every disease on the planet were eliminated, average life span would increase only to about 85 or 90--while the very longest any person could live would remain exactly the same.

What you are certainly right about is that biotechnology will significantly increase the maximum human life span, perhaps even within the next half-century. As far as I'm concerned this is the single most exciting field in science today.

Incidentally, to my knowledge the only scientifically proven instances where maximum life span has increased for any species are:

--genetic manipulation in simple organisms like yeast and roundworms

--breeding programs (in flies for instance)

--and caloric restriction diets

Rodents are the most advanced species in which caloric restriction has been shown to work. There is fairly good evidence, however, that it works in monkeys, and perhaps even humans. Some scientists have predicted that a caloric restriction diet adopted in early to mid adulthood could add several decades to your life.
