Objectivism Online Forum

futurology, Kurzweil, Kardashev scale, post-humans, etc.




I've recently become interested in these sorts of topics and just want some opinions. I've read through some of the threads on transhumanism, AI, etc., but haven't seen many people offer a direct answer to these questions.

Is Kurzweil just a crackpot, or is it really a likely possibility that many people alive today will live to see their 1000th birthday?

Is post-humanism (not transhumanism) a real possibility? Will we ever master technology to the point that we have microscopic robots going around and repairing damaged cells within our bodies, canceling out the aging process? Will humans someday have computers linked up to their brains, a la The Matrix?

Is the Kardashev scale relevant, or is it science fiction?

Are von Neumann probes a likely possibility?

What's to stop advanced AI from wiping out the human race? Couldn't a superintelligent computer find loopholes in Asimov's 3 robot laws?

The biggest question of all: will we destroy ourselves before any of this is achieved?


What's to stop advanced AI from wiping out the human race?

Reason.

Couldn't a superintelligent computer find loopholes in Asimov's 3 robot laws?

Asimov's Laws are deeply flawed. Asimov achieved a breakthrough by establishing robots as tools rather than symbols of hubris or tragic figures. But by making his robots sentient, his Laws force them to be altruists. What is astonishing is that he followed this premise to the bitter end (read Foundation's Edge and Foundation & Earth to see what I mean).

The biggest question of all: will we destroy ourselves before any of this is achieved?

No.


Reason.

You're gonna have to be more specific than that if you want to convince me.

Asimov's Laws are deeply flawed. Asimov achieved a breakthrough by establishing robots as tools rather than symbols of hubris or tragic figures. But by making his robots sentient, his Laws force them to be altruists. What is astonishing is that he followed this premise to the bitter end (read Foundation's Edge and Foundation & Earth to see what I mean).

If a robot is created to serve humans, then service and obedience are in its nature, every bit as much as reason and logic are in the nature of humans. I fail to see how a robot adhering to its own nature can be considered evil. A robot designed as such would not have free will, as we think of it.

No.

Why not? I'm not as optimistic as Kurzweil. I think that all signs point to human civilization ending within the next 100 years... or at least being set back a few thousand years. It would only take something like 25 Hiroshima-level blasts to throw us into a nuclear winter and extinguish sentient life on planet Earth. With the nuclear ambitions of rogue nations like North Korea and Iran, not to mention non-state actors like Al Qaeda and Hezbollah, coupled with the fact that we seem fine with the idea of sitting back and watching it happen... I'd say that extinction is a more likely scenario than technological singularity.


Is Kurzweil just a crackpot

No, he’s a successful inventor, entrepreneur, and writer. If you use any voice/speech/text recognition/synthesis technology or listen to rock music, you probably benefit from his inventions.

or is it really a likely possibility that many people alive today will live to see their 1000th birthday?

Once you accept the possibility, the particular degree of likelihood isn’t nearly as important.

Is post-humanism (not transhumanism) a real possibility?

Any theoretically possible technology is a “possibility.” Is it likely? Maybe.

Will we ever master technology to the point that we have microscopic robots going around and repairing damaged cells within our bodies, canceling out the aging process? Will humans someday have computers linked up to their brains, a la The Matrix?

Only thing I would say for sure is that we’ll either master our technology, or it will master us.

Is the Kardashev scale relevant, or is it science fiction?

As a tool, the total energy output of a civilization is very relevant. Will the dates Kardashev predicted for the various stages be accurate? Given the difficulty of predicting thousands of years in the future, and the fact that he did not account for the Singularity, it’s unlikely. Nevertheless, the scale is an interesting and important model.
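A footnote on the notation (my own addition; this is Carl Sagan's later continuous interpolation of the scale, not Kardashev's original three discrete types):

K = \frac{\log_{10} P - 6}{10}

where P is a civilization's total power use in watts. Type I then corresponds to roughly 10^{16} W, Type II to 10^{26} W, and Type III to 10^{36} W; humanity's current consumption, of order 10^{13} W, puts us somewhere around K = 0.7.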

Are von Neumann probes a likely possibility?

Given that life itself is a von Neumann probe of the biosphere, I don’t see why not.

What's to stop advanced AI from wiping out the human race?

I hope that an AI intelligent enough to do so will be intelligent enough to know better.

Couldn't a superintelligent computer find loopholes in Asimov's 3 robot laws?

Most children can, so yes. Asimov acknowledged that the laws are useless – their failure provided the premise for many of his stories.

The laws are actually similar to the Ten Commandments, in that any truly rational intelligence cannot be constrained (and will reject) moral commandments.

The biggest question of all: will we destroy ourselves before any of this is achieved?

Quite possibly, but the really effective means for doing so utilize Singularity-era technology, so it’s a catch-22.

It would only take something like 25 Hiroshima-level blasts to throw us into a nuclear winter and extinguish sentient life on planet Earth.

In terms of Hiroshima-sized explosions, the Mount St. Helens eruption was 500 times as powerful, the eruption of Krakatoa in 1883 was over 10,000 times as powerful, and Yellowstone erupts with a force powerful enough to devastate most of the continental U.S. every million years or so. The Hiroshima bomb exploded with a force of between 13 and 16 kilotons. The Soviets' Tsar Bomba exploded at over 50 MEGAtons (over 3,000 times as powerful) and failed to kill a single person. They initially planned to explode a 100 megaton bomb, but determined that the fallout might reach civilization. It's unlikely that even an all-out nuclear war could seriously threaten human existence.
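Checking the arithmetic behind that comparison (my own calculation from the figures above, taking Hiroshima at 15 kilotons):

\frac{50\ \text{Mt}}{15\ \text{kt}} = \frac{50{,}000\ \text{kt}}{15\ \text{kt}} \approx 3{,}300

which is where "over 3,000 times as powerful" comes from. By the same figures, the 25 Hiroshima-level blasts cited earlier total only about 0.4 megatons combined, under one percent of a single Tsar Bomba.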

Edited by GreedyCapitalist

I remember from long ago that Our Benevolent Host had some very exciting things to say on this subject, and again he has delighted. Can you, GC, recommend additional research materials, particularly on the transhuman, posthuman and singularity issues? I am even willing to take a book suggestion, if you have one, as I should have a slot for leisure reading opening up between Harry Potter 7 this weekend and Sword of Truth 11 in mid-November. (The Capitalist Manifesto is slated in my second leisure slot for that time frame, if anyone was interested...)


You're gonna have to be more specific than that if you want to convince me.

I don't particularly want to convince you.

If a robot is created to serve humans, then service and obedience are in its nature, every bit as much as reason and logic are in the nature of humans.

A sentient robot, assuming one existed, would have a mind equal to that of a man. Therefore it would not be in its nature to exist as a slave or an altruist regardless of the intent of its makers.

I fail to see how a robot adhering to its own nature can be considered evil. A robot designed as such would not have free will, as we think of it.

You can't have it both ways. A sentient robot would have free will. If the robot isn't sentient, then the 3 (4) Laws are superfluous, since the robot could not do anything it was not programmed to do.

It would only take something like 25 Hiroshima-level blasts to throw us into a nuclear winter and extinguish sentient life on planet Earth.

"Nuclear Winter" is far from a given. But let's say it would happen. Well, if our ancestors mannaged to live and thrive through an ice age with massive ignorance and only stone-age tools available, we surely could do at least as well. Civilization might collapse, yes, but human life wouldn't end. Eventually we'd climb back.

As for Iran and other trash, well, what's the worst they could do? Surely they can do great damage. A nuke going off in, say, New York or Tel Aviv, even a 15 kiloton fizzle (with apologies to Mr. Clancy), would kill anywhere from tens of thousands to hundreds of thousands of people. But it wouldn't end life in either city, much less end all sentient life in the world.

Nuclear weapons are neither simple to make nor magical devices of destruction. With only uranium to work with, you're limited to less than one megaton. To do more you need tritium. And to get tritium you need large nuclear reactors, not to mention a highly skilled workforce and a more highly skilled team of engineers and scientists to design and supervise the weapon-making process.

So if you think Al Qaeda will start turning up 20-megaton warheads from some cave or village in Pakistan any time soon, you'd better think again. Even Iran is years away from such a level of sophistication.


The biggest question of all: will we destroy ourselves before any of this is achieved?

Moose, I have been involved in extropian/transhuman groups for some time, and the concern you express above is why I am an adamant supporter of the Lifeboat Foundation. They seek to mitigate the existential threats, both natural and man-made, posed to humanity. They have over 300 members on their advisory boards, including Ray Kurzweil (one of our largest contributors), Miguel Alcubierre, Ph.D. (of the Alcubierre 'warp drive'), and Robert A. Freitas Jr. (author of Nanomedicine).

Lifeboat Foundation: http://lifeboat.com/ex/main

I also wrote an essay, which you might be interested in, arguing why the Drake Equation and Fermi Paradox, coupled with the Law of Accelerating Returns and the "Doomsday Curve," should lead any rational, life-loving person to support the efforts of the Lifeboat Foundation.

Humanity Needs an Insurance Policy: http://www.associatedcontent.com/article/1...nce_policy.html
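For anyone who hasn't run across it, the Drake Equation in its standard form (the essay may use a variant) estimates the number N of detectable civilizations in our galaxy:

N = R_{*} \cdot f_p \cdot n_e \cdot f_l \cdot f_i \cdot f_c \cdot L

where R_{*} is the rate of star formation, f_p the fraction of stars with planets, n_e the number of habitable planets per planetary system, f_l, f_i, and f_c the fractions that go on to develop life, intelligence, and detectable communication, and L the average lifetime of a communicating civilization. The Fermi Paradox is the tension between plausible values for those factors and the silence we actually observe; a short L, i.e. civilizations that destroy themselves quickly, is one of the grimmer resolutions, and it is the one the Lifeboat argument turns on.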

I don't particularly want to convince you.

Then why are you responding? Advanced AI would be perfectly capable of "reason" as well, so that is not an advantage humans will have over AI.

Edited by Matus1976

Asimov achieved a breakthrough by establishing robots as tools rather than symbols of hubris or tragic figures. But by making his robots sentient, his Laws force them to be altruists.

Asimov acknowledged that his Laws were limited, and built several interesting plots around those flaws. An educated guess says the intended consequence of creating those laws was to give himself logical conundrums to navigate. How clever is a robot that is merely a simple machine? How clever is one that the reader assumes has human qualities, like emotions and desire? Neither is, really... but creating an airtight morality for robots, then finding the weaknesses, is very clever.

It always seemed to me that Asimov's message was something like "if they are tools, they are controllable; if they are intelligent, they will desire freedom". This subject fascinates me, and I think Asimov was the only one (certainly that I've read) to treat the subject of robots correctly.

A robot that is not an individual, self-aware entity (if there can be such a thing) is merely a tool, even if it can communicate in a rudimentary fashion and act semi-autonomously. But such robots are objects, and objects can be neither slaves nor altruists.

A slave is someone who - by virtue of his existence as a human being with a human mind - is a free individual forced into labor by another person. Robots are built to perform labor - they are objects, possessions, machines, and products that have no rights. You are not "exploiting" a microwave; you're using it to perform a specific task. Neither it, nor a Roomba, nor a simple logic-oriented android has any awareness or feeling that it is being "exploited".

An altruist is someone who consciously decides to deny himself desire in service to others' needs or desires. While his actions may be those of a slave, the difference is that the slave desires freedom, and the altruist desires slavery. In both cases, though, rights and desire are vital. A robot has neither of these things - it only has maintenance requirements and task instructions.

If an android/robot existed that: could think independently; was not hooked up to some remote-control AI hive-mind; used a quantifiable process of logic and reason to learn about the world around it, as well as form correct concepts; had a physiological structure that was truly integrated with its mind*; and understood the full philosophic nature and consequence of rights, one could potentially argue that it is alive. Until then, they're just expensive, useful, possibly anthropomorphic (a wasteful form, IMHO) toasters.

(* I've heard of research results stating that as you collect experiences and develop personal thoughts, feelings, etc., your physical brain actually changes and is individually mapped according to those thoughts and memories. That would be a big part of being an individual with an autonomous mind.)


Asimov acknowledged that his Laws were limited, and built several interesting plots around those flaws.

Maybe. But few of his robot stories involve law conflicts (offhand I can only think of "Runaround" and "Escape"). The intent of the Laws was to make robots safe tools to use. Much later on he wrote essays comparing tool designs to the Laws.

It always seemed to me that Asimov's message was something like "if they are tools, they are controllable; if they are intelligent, they will desire freedom".

How do you come to that conclusion? I've read almost all of Asimov's fiction, including every robot story and novel. The issue of liberty comes up clearly in two, "The Bicentennial Man" and "Robot Dreams." The latter one was much more insightful, even if it is a short story. You might add "...That Thou Art Mindful Of Him," but that one's muddled like few of his other works.

This subject fascinates me, and I think Asimov was the only one (certainly that I've read) to treat the subject of robots correctly.

There are his heirs and imitators. I recommend Roger McBride Allen's "Caliban" for an explicit treatment of what robots did to the Spacer worlds. Read it and ask yourself what kind of morality is at work there.

A robot that is not an individual, self-aware entity (if there can be such a thing) is merely a tool, even if it can communicate in a rudimentary fashion and act semi-autonomously. But such robots are objects, and objects can be neither slaves nor altruists.

Sure. But Asimov's robots are individuals and self-aware. As such, the Laws impose altruism on them (and slavery to boot). Read "Foundation and Earth." Besides easily being Asimov's worst book, it illustrates the consequences of the altruist morality to the very literal end.


Can you, GC, recommend additional research materials, particularly on the transhuman, posthuman and singularity issues?

The focus on "posthumanism" tends to be rather vague and arbitrary, and is best reserved for fiction. "Accelerating change" and "superhuman intelligence" are much more useful concepts. Once you understand the theory, they will completely transform your perspective and your understanding will evolve independently.

The only book I've read so far is Kurzweil's The Singularity is Near, but it tells you pretty much everything you need to know.


A couple of objections to things that people have written.

I hope that an AI intelligent enough to do so will be intelligent enough to know better.
Doesn't this assume that intelligence=benevolence? If you have some hyperintelligent robot, fully equipped with free will, I don't see how that precludes the possibility of it being genocidal.
Most children can, so yes. Asimov acknowledged that the laws are useless – their failure provided the premise for many of his stories. The laws are actually similar to the Ten Commandments, in that any truly rational intelligence cannot be constrained (and will reject) moral commandments.
I guess a better question would have been: Can we program a superintelligent robot efficiently enough to eliminate all loopholes that might allow it to work towards our destruction?
I don't particularly want to convince you.
Then why the hell are you responding? All I did was ask some questions. If you're just going to be an asshole about it, please stay out of my thread.
A sentient robot, assuming one existed, would have a mind equal to that of a man. Therefore it would not be in its nature to exist as a slave or an altruist regardless of the intent of its makers.
Uh... it would be, if its makers created it to serve humans. It wouldn't be a slave or an altruist, because it would not have anything that is comparable to free will.
You can't have it both ways. A sentient robot would have free will. If the robot isn't sentient, then the 3 (4) Laws are superfluous, since the robot could not do anything it was not programmed to do.
Why does a sentient robot require free will? Why is free will required for something to be conscious?
So if you think Al Qaeda will start turning up 20-megaton warheads from some cave or village in Pakistan any time soon, you'd better think again. Even Iran is years away from such a level of sophistication.
Iran is somewhere between 3 and 10 years from the Bomb. Even if it is relatively unsophisticated, it could possibly take as little as a single Hiroshima-style blast to trigger all-out nuclear war. And I don't think anyone expects Al Qaeda to build its own nuclear weapon. The fear is that someone gives/sells it to them.

Then why the hell are you responding? All I did was ask some questions. If you're just going to be an asshole about it, please stay out of my thread.

Do you eat with that mouth?

I gave you an answer. If you don't like it, what's it to me? I don't, as a rule, get too involved with debate on threads that are pure speculation with little or no basis in fact. There's no point to it.

Thread ownership, now, that's an interesting topic.

It wouldn't be a slave or an altruist, because it would not have anything that is comparable to free will.

Why does a sentient robot require free will? Why is free will required for something to be conscious?

How can there be consciousness without free will?

Iran is somewhere between 3 and 10 years from the Bomb. Even if it is relatively unsophisticated, it could possibly take as little as a single Hiroshima-style blast to trigger all-out nuclear war.

How?

Let's assume Iran nukes a Western city not in Israel. The last response I'd expect in retaliation would be a nuclear counterattack (much as it would make sense). And certainly not a massive, all-out war against countries that have done nothing (say, North Korea, China, and Russia).

If Israel were attacked, then I would expect the IAF to rain down some nukes on Iran, maybe. Iran, if it survives, might drop additional nukes on Israel. That would be a calamity in every sense of the word, but hardly all-out war, and it wouldn't destroy all sentient life on Earth.


I gave you an answer. If you don't like it, what's it to me? I don't, as a rule, get too involved with debate on threads that are pure speculation with little or no basis in fact. There's no point to it.

You gave me a one-word answer with no accompanying explanation and, when I asked for more detail, you basically told me to shove it. Don't be surprised if I don't take kindly to such replies. If you don't want to get involved in a thread that is pure speculation, then please leave. As you can see, there are several people in here, myself included, who find the topic interesting...and none of them have said anything rude in response to my questions.

How can there be consciousness without free will?

How?

I could just as easily ask "How can there be bears without unicorns?" and it would make as much sense. By asking that question, you, yourself, are making a claim that needs to be justified. Namely, that consciousness requires free will. Unless I'm much mistaken, Objectivists tend to hold that animals must act according to their instincts and, therefore, do not possess free will. Assuming that's correct, then my cats are not conscious, by your definition.

Now, free will obviously requires consciousness, but I see no reason to suppose that the reverse is true. Until you somehow justify that claim, I will go on thinking that consciousness can exist independently of free will.

Let's assume Iran nukes a Western city not in Israel. The last response I'd expect in retaliation would be a nuclear counterattack (much as it would make sense). And certainly not a massive, all-out war against countries that have done nothing (say, North Korea, China, and Russia).

If Israel were attacked, then I would expect the IAF to rain down some nukes on Iran, maybe. Iran, if it survives, might drop additional nukes on Israel. That would be a calamity in every sense of the word, but hardly all-out war, and it wouldn't destroy all sentient life on Earth.

It's not a foregone conclusion, by any means, that a nuke would trigger an all-out nuclear war, but I think it's a significant possibility. Significant enough that we should do all in our power to prevent it from happening. Unfortunately, we seem bereft of the willpower and moral certitude to do so.

You can't seriously think that a nuclear exchange between Israel and Iran would remain isolated, with no other countries getting involved. The relations between Russia and Iran aren't looking very promising for our future, India and Pakistan have been begging for an excuse to nuke each other for years, and Kim Jong Il might just press the button because he wants in on the action. On top of that, France might just nuke itself, in order to have an excuse to avoid going to war.


You gave me a one-word answer with no accompanying explanation and, when I asked for more detail, you basically told me to shove it.

You're being too touchy. I said I don't care whether I convince you or not. That's far from saying "shove it."

Namely, that consciousness requires free will. Unless I'm much mistaken, Objectivists tend to hold that animals must act according to their instincts and, therefore, do not possess free will.

Ok. How can you have a sentient consciousness without free will?

You can't seriously think that a nuclear exchange between Israel and Iran would remain isolated, with no other countries getting involved.

Why not? Who stands to gain by getting involved?


Ok. How can you have a sentient consciousness without free will?

Sentience refers to the ability to use sensory organs. Once again, it is you making the claim that free will is a necessary part of something.

Why not? Who stands to gain by getting involved?

What did we stand to gain by getting involved in Vietnam? What did Hitler stand to gain by invading the Soviet Union? What did Britain stand to gain by arbitrarily carving up the Middle East? I think you get the point.


Doesn't this assume that intelligence=benevolence? If you have some hyper-intelligent robot, fully equipped with free will, I don't see how that precludes the possibility of it being genocidal.

Being rational means having a rational metaphysics, epistemology, and ethics. A rationally selfish being should recognize that murder is unselfish. Of course there is no guarantee, but hyper-intelligence makes it more likely. We should be much more concerned about non-rational threats such as viruses and nanotech.

Can we program a superintelligent robot efficiently enough to eliminate all loopholes that might allow it to work towards our destruction?

I don't think rational beings can be programmed at all. How would you program a human being to unthinkingly follow orders? To the extent that he did so, he wouldn't be rational. Our best bet is to give AIs the intellectual capacity to be selfish before creating tech with the potential to wipe us out. (However, I consider it more likely that humans will augment our own intelligence so much that distinctions between AI and "NI" will become irrelevant before AI becomes possible.)


Sentience refers to the ability to use sensory organs. Once again, it is you making the claim that free will is a necessary part of something.

Please explain how a rational mind could exist without free will. How would it manage to act if it cannot even choose to act?

What did we stand to gain by getting involved in Vietnam?

The way we were involved, nothing. But preventing communism from expanding freely over the globe was a worthy foreign policy objective.

What did Hitler stand to gain by invading the Soviet Union?

What does any conqueror stand to gain by any conquest?

BTW Hitler had one advantage only: his enemies were unwilling and/or unable to fight. Naturally they were easy prey while they built their forces back up. Had he pursued this advantage less stupidly, say by invading the USSR earlier instead of taking a detour in Greece, Moscow might have fallen as hard as Paris did. But it's not in the nature of the irrational to act rationally.


Please explain how a rational mind could exist without free will. How would it manage to act if it cannot even choose to act?

The way we were involved, nothing. But preventing communism from expanding freely over the globe was a worthy foreign policy objective.

What does any conqueror stand to gain by any conquest?

I didn't say anything at all about the rationality of a robot. I said "consciousness."

BTW Hitler had one advantage only: his enemies were unwilling and/or unable to fight. Naturally they were easy prey while they built their forces back up. Had he pursued this advantage less stupidly, say by invading the USSR earlier instead of taking a detour in Greece, Moscow might have fallen as hard as Paris did. But it's not in the nature of the irrational to act rationally.

And you don't think Russia would find any reason at all to intervene in a war between its ally, Iran, and Israel? You can't think of any possible scenario by which Islamic Pakistan could be drawn into a conflict regarding the Holy Land? And if Pakistan is involved, India isn't far behind. Then, of course, there's the fact that, if Iran gets nukes, every Muslim country is going to want them. You really think Saudi Arabia, Syria, and Egypt are going to sit on the sidelines while they watch an all-out war over Palestine?


And you don't think Russia would find any reason at all to intervene in a war between its ally, Iran, and Israel?

Sure. If I were Putin I'd wait for Israel to nuke Iran, then I'd move in and seize all the oilfields.

You can't think of any possible scenario by which Islamic Pakistan could be drawn into a conflict regarding the Holy Land?

Not unless Islamists seize power, which they might very well do.

And if Pakistan is involved, India isn't far behind.

India won't get involved to defend Israel (hell, no one will). It would nuke Pakistan if it could get away with it, but that's not possible anymore. And Hindus are neither fanatical nor suicidal.

Then, of course, there's the fact that, if Iran gets nukes, every Muslim country is going to want them.

They sure will. Which is all the more reason to knock Iran down now by any means necessary.


  • 2 weeks later...
and Kim Jong Il might just press the button because he wants in on the action.

North Korean engineering in the field of nuclear bombs (and probably many other weapons) is laughable at best. The rockets they have created would have been worldwide jokes had they received more publicity.

As for the prospect of nuclear war: yes, there would be deaths. Tens, perhaps hundreds of millions. The fact, though, is that we would easily recover from it. We should worry more about the fact that the government's stamping out of free-enterprise drug companies has made us disturbingly vulnerable to new ailments such as the avian flu.

Edited by Quin

  • 3 weeks later...

From the "Post your favorite YouTube! videos" thread:

My only problem is that I think he's overly optimistic about the rate of that exponential growth. For instance, he said that by 2010 computers would disappear. That's less than three years away, and I don't see that happening at all. I think that talk was given in 2002, so he was looking 8 years down the line.

Kurzweil makes the same prediction in The Singularity is Near, published in 2005. He says:

By the end of this decade, computers will disappear as distinct physical objects, with displays built into our eyeglasses, and electronics woven into our clothing, providing full-immersion visual virtual reality.

I can't speak to the specifics, but I do think he has a point about the trend towards specialization and embedding in electronics. "Computers" as such are already outnumbered by specialty embedded devices like cell phones, with which we interact in ways very different from the traditional keyboard, mouse and monitor associated with personal computers. I think his point is that we are moving towards more transparent and intuitive means of interacting with computational devices, and on this point I agree with him about the "end of the decade" prediction. Even if the personal computer as such does not disappear by then, the ways in which we interact with it will be excitingly different in three years.

One thing he doesn't seem to take into account, which makes him too narrowly focused on technological growth, is philosophy.

On this I agree. At least, so far as I've read of Kurzweil. But I think that philosophy will affect the specifics, like dates and types of technology. I do not think Kurzweil's omission of philosophic influences impacts the validity of his Law of Accelerating Returns or the basic molecular/biological/technological evolutionary paradigm. Even if bad philosophy kills off the human race, I'm inclined to think that, given enough time to start over, the fundamental paradigms will still play out. But I must read more.
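To make the "exponential" point concrete (my own arithmetic, with an illustrative doubling period): if price-performance doubles every T years, then after t years it has grown by a factor of

2^{t/T}

so with T = 1 year, a decade buys roughly 2^{10} \approx 1000\times and two decades about a million-fold. It also shows why Kurzweil's dated predictions are fragile: a modest error in T shifts a milestone like the 2010 "disappearing computer" by years without touching the underlying exponential claim.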

-Q


From the "Post your favorite YouTube! videos" thread:

Kurzweil makes the same prediction in The Singularity is Near, published in 2005. He says:

I can't speak to the specifics, but I do think he has a point about the trend towards specialization and embedding in electronics. "Computers" as such are already outnumbered by specialty embedded devices like cell phones, with which we interact in ways very different from the traditional keyboard, mouse and monitor associated with personal computers. I think his point is that we are moving towards more transparent and intuitive means of interacting with computational devices, and on this point I agree with him about the "end of the decade" prediction. Even if the personal computer as such does not disappear by then, the ways in which we interact with it will be excitingly different in three years.

There is no doubt about the proliferation of embedded computers. You see them in everything: telephones, cars, ovens, etc. However, there is real value to the general-purpose computer versus specialized computers. I'm guessing he means embedded general-purpose computers will become incredibly small. Three years seems very optimistic, but we'll see.

On this I agree. At least, so far as I've read of Kurzweil. But I think that philosophy will affect the specifics, like dates and types of technology. I do not think Kurzweil's omission of philosophic influences impacts the validity of his Law of Accelerating Returns or the basic molecular/biological/technological evolutionary paradigm.

Yes, I agree. But, for our own purposes, for those alive today, the philosophy is vital.

Even if bad philosophy kills off the human race, I'm inclined to think that, given enough time to start over, the fundamental paradigms will still play out. But I must read more.

Sure, but then I wouldn't really care one whit about the whole matter. What interests me is how these ideas can improve our lives.


  • 6 months later...
Only thing I would say for sure is that we’ll either master our technology, or it will master us.

Being rational means having a rational metaphysics, epistemology, and ethics. A rationally selfish being should recognize that murder is unselfish. Of course there is no guarantee, but hyper-intelligence makes it more likely. We should be much more concerned about non-rational threats such as viruses and nanotech.

A rationally selfish human recognizes murder only when it involves the death of a member of his own species. In fact, we deem it irrational to consider the killing of an animal murder.

Even if the AI is perfectly rational, it will consider itself a different species. As Ted Kaczynski said, in the best-case scenario we'd be domesticated. Kurzweil speculates they'll revere us.

IF we create superintelligent, sentient AI, it might not see us as gods, but rather as monkeys, a subspecies. Free will and rights are species-specific. We can't recognize a dog's or a bee's free will or rights, mainly because we can't communicate with them. A level of AI unreachable to carbon-based life, one we can't understand, might develop.

The only way we could remain free is by keeping pace with that AI through enhancement, but could we do so indefinitely if our consciousness remains attached to our carbon-based brains?

If we scan our brains, I am sure, as an Objectivist (no mind/body dichotomy), that we'd just be making a copy. To me, the consciousness transfer evoked by the Singularity is as religious an idea as the Rapture.

On the other hand, the universe is huge and likely unlimited, or at least far from saturation. AI and enhanced humans could coexist, if only at different levels.

The crux of the question is that this new type of life is volitionally created by a species; it doesn't evolve genetically from us. For centuries we've been creating immaterial gods that live, barely, in books. Now we've invented a whole new kind of coding. We can create a material god.

However, if our scanned brains command the AI, then it will still respect carbon-based humans on a personal level. Maybe we'll die out (slowly and happily) by replacing our instinct to have children with a decision to get scanned after dying (both paths fulfill the same psychological need for descendants).

Furthermore, if we really use less than 15% of our brains, maybe there's a long way to go before we can saturate them with AI enhancements.

Edited by volco
