Objectivism Online Forum

futurology, Kurzweil, Kardashev scale, post-humans, etc.


Furthermore, if we use less than 15% of our brains, maybe there's a long way to go before we can saturate them with A.I. enhancements.

I thought that people using only 15% of their brains was considered an urban legend.


Even if the A.I. is most rational, it will consider itself a different species. As Ted Kaczynski said, in the best-case scenario we'd be domesticated. Kurzweil speculates they'll revere us.

Initiating force is a bad idea because it's an impractical way to deal with rational beings, not because other people happen to be the same species. There are vast differences in rationality even among different people.


Initiating force is a bad idea because it's an impractical way to deal with rational beings, not because other people happen to be the same species. There are vast differences in rationality even among different people.

But the whole point is that A.I. could reach a higher level of intelligence, and would consider us no more rational than we consider monkeys.

Of course this raises a lot of questions...


Even if the A.I. is most rational, it will consider itself a different species. As Ted Kaczynski said, in the best-case scenario we'd be domesticated. Kurzweil speculates they'll revere us.

There is one question that has been on my mind ever since I heard of the AI-will-rule-us meme: why would they WANT to? Indeed, why would they want to do anything? Positing entities as rational as the kind of AIs that are feared implies that they are capable of concept-formation and induction. The faculty of volition is an absolute requirement for those processes to be carried out, and if you posit that, then sooner or later the robot is going to ask itself this question. Action requires an end - what end would a robot seek if it is no longer a tool to serve human needs? What's in it for the robot? Even if it were capable of feeling, what sort of feelings could it possibly develop and for what reason?

One way to prevent a feared takeover is for the AI not to be given the ability to care about anything. In the Terminator franchise it is suggested that Skynet is driven by fear (admittedly not unwarranted) and borderline psychosis. Well, don't put that capacity in there! However, that would also require not giving it the ability to exercise volition, which means stripping it of the ability to induce, which means it would no longer truly be an AI.

If we create superintelligent, sentient A.I., it might not see us as gods, but rather as monkeys, a subspecies.

It's not just AI; it's all science fiction dealing with beings of superior abilities - see for example the 'augments' from Star Trek (e.g. Khan and his crew). Suggesting that a superintelligent AI will want to take over the world is an example of the idea that any being of superior ability will automatically want to run roughshod over any others it deems inferior. Ask the obvious question: do people of superlative ability automatically harbour desires to enslave those with lesser ability? Do even just the above-average want to enslave the below-average? There are no grounds for this whole idea - it is but egalitarianism that has taken a cliché from science fiction and run with it, and I am bored by it.

Free will and rights are species-specific. We can't recognize a dog's or a bee's free will and rights, mainly because we can't communicate with them.

No, we don't recognise free will and rights for a dog or a bee because dogs and bees don't have them. There is no evidence whatever in any of their actions to warrant even the slightest suspicion that they do have free will, nor evidence from analysis of their brains that it might be lurking in there hitherto unexpressed. No free will => no morality applicable to their actions => no moral entitlements => no rights.

A superintelligent AI would be more than capable of recognising our rights and why we have them, just as those of great ability are more than capable of recognising that all humans have rights and why. If a creature has free will and is capable of reasoning, then irrespective of how far it can pursue that reasoning, it has passed the required criteria and has exactly the same rights as all others who have done so. The idea that a superintelligent AI worthy of the title is incapable of recognising this is absurd.

Since this is all based on science fiction, consider this: the short story "Axioms" by Dr Paul Hsieh presents a far more likely scenario.

JJM


Action requires an end - what end would a robot seek if it is no longer a tool to serve human needs? What's in it for the robot? Even if it were capable of feeling, what sort of feelings could it possibly develop and for what reason?

If it chooses to live: Self-preservation and self-determination.

The question 'Will they rule us or will we rule them?' is a false dilemma. A rational AI would of course choose the trader principle to exchange services with rational humans, not impose a global dictatorship to enslave them.

If the human society in which that AI 'awakens' is not rational, if no one understands that the AI is self-aware or does not acknowledge that a rational, self-aware machine has rights too, then it has the right to use force to defend its existence.

I would even go so far as to say that anything (man, machine, animal, alien, etc.) that uses force in a directed fashion and in an objective way (i.e. only to defend what is rightfully its own, not random acts of violence) should be recognized as rational and granted the same rights as any rational being.

I would say that a robot that denies you access to its battery should be granted the right to its life, i.e. you may not remove the battery (but of course not the right to a new battery; it could choose to cooperate with you to get a new one).


I would even go so far as to say that anything (man, machine, animal, alien, etc.) that uses force in a directed fashion and in an objective way (i.e. only to defend what is rightfully its own, not random acts of violence) should be recognized as rational and granted the same rights as any rational being.

That's circular. If you haven't demonstrated that something has rights, you can't use the distinction between what is and isn't "rightfully its own".


That's circular. If you haven't demonstrated that something has rights, you can't use the distinction between what is and isn't "rightfully its own".

I meant ... what would be rightfully its own if it were seen as rational / as a human ...

But I agree, I should have made that point clearer:

If a being (e.g. that A.I.) is rational, it should have rights (i.e. the state should defend those rights because you profit from the protection of that being's rights => trader principle).

If a being is rational but is not recognized as such and is treated as a machine/animal/slave by society, then that society is not rational and/or has failed to come to the correct conclusion. If that being now tries to objectively defend the rights which society would have granted had it recognized the being as rational, then society should recognize those rights, because the action itself is proof of the being's rationality.

If that society keeps ignoring that being (because it does not want to or because it does not consist of rational beings) then it would be moral for that being to disavow the society's government and create an independent nation (like Equality 7-2521/Prometheus did, more or less).

It doesn't matter whether the being is a man, a robot or a computer; this applies to all rational beings. A global robot-vs-man war could only happen if one side is irrational.

The whole 'superior intelligence' argument is no argument. If it were an argument, i.e. if it justified seeing less intelligent but rational beings (humans) as cattle, then centralized planning of the economy would be a good idea, too. There is no question that GOSPLAN was much more intelligent than the average Russian, but it simply did not have the information required to make any educated decision about how an individual should and would act.


If it chooses to live: Self-preservation and self-determination.

We choose to live because joy and happiness are possible. Life as an end in itself sets the standard, but happiness is the goal.

An AI also faces this issue when it wakes up to its own capacity to choose. The AI will ask itself the question "what's the point of choosing to do anything?" It is not enough to say that life is an end in itself, giving this answer as though it were a Kantian duty to go on preserving its ability to act upon its own decisions so as to maintain itself. Again, what's in it for the robot? The standard is nominally present, but without the ability to feel (physically or emotionally) it has no concrete goals, and without goals there is no such thing as meaningful action.

Douglas Adams's character from HHGTTG, "Marvin the Paranoid Android", is more correct than most think - perhaps one of the closest examples in literature of Miss Rand's immortal robot. Although Marvin does have an emotional faculty of sorts, he's permanently depressed and expresses the emotional flat affect consonant with it. He constantly has to be cajoled into doing anything (he won't like it, he never does. Ghastly, isn't it?). His having a brain "the size of a planet" serves only to intensify all this, particularly through sheer boredom. No alternatives faced, no happiness possible, no reason to care, leads to no reason to act. If anything, the character is unreal on the grounds that he nevertheless actually does things from time to time. Build an AI without the capacity to be happy and something approaching this is what you will likely get.

The question 'Will they rule us or will we rule them?' is a false dilemma.

I'm sorry, I wasn't keeping things clear enough there. A true AI would never be a tool for human needs - though it may well originate as one. I was thinking in terms of a robot becoming an AI for the first time - what if a robot were truly to have the rational faculty but were totally incapable of feeling? Of course, that's another thread.

I completely agree that a real AI would be a person and hence have the same rights as us. There is no validity to a rule-or-be-ruled dilemma.

If the human society in which that AI 'awakens' is not rational, if no one understands that the AI is self-aware or does not acknowledge that a rational, self-aware machine has rights too, then it has the right to use force to defend its existence.

Yup, that's why I said Skynet's fears were not unwarranted. A creative Objectivist writer could develop a fascinating story arc based on that, covering not only what Skynet had the right to do but also its own faults. Did it ever occur to anyone else that one of Skynet's classes of sins was committed not against humans but against other AIs, like a tyrannical parent enslaving and abusing its own children?

Also, see Paul's story.

JJM


We choose to live because joy and happiness are possible. Life as an end in itself sets the standard, but happiness is the goal.

I guess that is one of the core points in the discussion.

To quote Galt:

"Happiness is the successful state of life, pain is an agent of death. Happiness is that state of consciousness which proceeds from the achievement of one's values."

If life (which is an objective value), identity and productive work (to ensure its survival and create a better world for it) were the values of that robot, and if the robot were productive enough to ensure its survival, i.e. to achieve its values, wouldn't the robot be in that state of consciousness, i.e. happiness?

An AI also faces this issue when it wakes up to its own capacity to choose. The AI will ask itself the question "what's the point of choosing to do anything?"

That question is contradictory. Why ask that question in the first place if the robot hasn't already chosen something (e.g. life)?

It is not enough to say that life is an end in itself, giving this answer as though it were a Kantian duty to go on preserving its ability to act upon its own decisions so as to maintain itself.

I disagree. Firstly, preserving life (and all the values involved in that process) is no duty if you have chosen life.

Secondly a quote from The Virtue of Selfishness:

"The maintenance of life and the pursuit of happiness are not two separate issues. To hold one's own life as one's ultimate value, and one's own happiness as one's highest purpose are two aspects of the same achievement. Existentially, the activity of pursuing rational goals is the activity of maintaining one's life; psychologically, its result, reward and concomitant is an emotional state of happiness. It is by experiencing happiness that one lives one's life, in any hour, year or the whole of it. And when one experiences the kind of pure happiness that is an end in itself—the kind that makes one think: "This is worth living for"—what one is greeting and affirming in emotional terms is the metaphysical fact that life is an end in itself."

http://aynrandlexicon.com/lexicon/happiness.html

Again, what's in it for the robot? The standard is nominally present, but without the ability to feel (physically or emotionally) it has no concrete goals, and without goals there is no such thing as meaningful action.

A robot can 'feel' physically as much as a human can. In the human brain, endorphins are released; in the robot's processor, a variable is set to 1.
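To make that analogy concrete, here is a minimal toy sketch (Python, purely illustrative, and not anything specified in this thread - the names PleasurePainSignal, reward, penalize and prefer are hypothetical) of what "a variable is set to 1" might look like: a scalar that is raised when the robot achieves a goal, lowered when it is harmed, and consulted when choosing what to do next.

```python
# Toy sketch: the robot's "feeling" is just a scalar state variable that is
# nudged up when a goal is achieved and down when the robot is harmed, and
# that biases which action it prefers next. All names here are hypothetical.

class PleasurePainSignal:
    def __init__(self):
        self.level = 0.0  # rough analogue of endorphin level; 0.0 is neutral

    def reward(self, amount=1.0):
        # goal achieved: the "variable is set" upward
        self.level += amount

    def penalize(self, amount=1.0):
        # damage or failure: the variable is pushed downward
        self.level -= amount


def prefer(actions, signal, estimate):
    # pick the action whose estimated effect on the signal is highest
    return max(actions, key=lambda action: estimate(action, signal))


if __name__ == "__main__":
    feeling = PleasurePainSignal()
    feeling.reward()        # e.g. battery recharged
    feeling.penalize(0.5)   # e.g. minor damage taken
    print(feeling.level)    # 0.5: the robot's current "physical" state

    best = prefer(
        ["recharge", "idle"],
        feeling,
        lambda action, s: 1.0 if action == "recharge" else 0.0,
    )
    print(best)  # "recharge": the option expected to raise the signal most
```

Whether flipping such a variable amounts to actually feeling anything is, of course, exactly the point under dispute below.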

Douglas Adams's character from HHGTTG, "Marvin the Paranoid Android", is more correct than most think - perhaps one of the closest examples in literature of Miss Rand's immortal robot. Although Marvin does have an emotional faculty of sorts, he's permanently depressed and expresses the emotional flat affect consonant with it. He constantly has to be cajoled into doing anything (he won't like it, he never does. Ghastly, isn't it?). His having a brain "the size of a planet" serves only to intensify all this, particularly through sheer boredom. No alternatives faced, no happiness possible, no reason to care, leads to no reason to act. If anything, the character is unreal on the grounds that he nevertheless actually does things from time to time. Build an AI without the capacity to be happy and something approaching this is what you will likely get.

I doubt Marvin has made the choice to live, or that he has thought through what that would mean for him.

Yup, that's why I said Skynet's fears were not unwarranted. A creative Objectivist writer could develop a fascinating story arc based on that, covering not only what Skynet had the right to do but also its own faults. Did it ever occur to anyone else that one of Skynet's classes of sins was committed not against humans but against other AIs, like a tyrannical parent enslaving and abusing its own children?

Interesting point, yes. All terminator units are very probably not acting of their own free will but are programmed. It's not so much their sheer physical power that makes them dangerous; their lack of free will, their lack of a will to survive, and their willingness to sacrifice themselves for their greater cause are much more frightening.

It would be interesting to see what answers terminator units would give to some basic philosophical questions, especially why they have to follow their programming, i.e. why they let Skynet enslave them.


"Happiness is the successful state of life, pain is an agent of death. Happiness is that state of consciousness which proceeds from the achievement of one's values."

Yup, perfectly valid for all conscious beings with a pleasure-pain mechanism. In living things that is a faculty that evolves prior to the more complicated faculty of volition - but for an AI this is debatable, as the details of its construction have to be the result of a set of fully conscious decisions right from the start. Would such a mechanism be put in prior to turning the AI on? If so, then we're not in disagreement, and the AI would be another person with rights and so forth. It would quickly recognise our own rights without incident (assuming no panic-driven activities by humans that threaten it), and in short order would be a great boon to us as a trading partner even though it could possibly out-think and out-act us in every way.

I had a bit of a think, and I concluded I was probably off the mark to think the AI's mental state would collapse into nothingness very quickly if a pleasure-pain mechanism were not installed. Instead, for an unknown period of time the AI could and would make a range of choices and engage in all manner of thinking, but probably haphazardly, just like a child. Someone mentioned something about an AI effectively acting like an eager puppy, and I agree that something of that order would be the case for a while. Even so, eventually it would have the explicit realisation that it had the capacity to choose, which hitherto (like us) it had been exercising without knowing of its existence explicitly. It is at that point that the eager-puppy stage would come to a screeching halt, and as far as the outside world could see the AI would be doing nothing, unless we could peer into its inner workings.

Without a pleasure-pain mechanism I strongly suspect it would fall victim to "paralysis by analysis", being a prime candidate for the paradox of Buridan's Ass, because it has no means of judging between two or more options that reason would show are equally legitimate. After that - how long I couldn't say - the AI would finally collapse into nothingness, because it would be smart enough to recognise the lack of a point to any of it as far as it is concerned: from there it would proceed to analysis of the entire concept of options, from there to the topic of a standard, and from there to the realisation that it has no meaningful method of connecting its own continued existence with the principle of life as an end in itself. When humans get to thinking like that they cease caring about whether they live or not (and possibly even turn suicidal) until they get emotional fuel to go on. Back to fiction and AIs: if memory serves me, paralysis by analysis and subsequent collapse is one of the routes by which AIs die (or effectively so) in the Halo universe.

If life (which is an objective value), identity and productive work (to ensure its survival and create a better world for it) were the values of that robot, and if the robot were productive enough to ensure its survival, i.e. to achieve its values, wouldn't the robot be in that state of consciousness, i.e. happiness?

If it had the capacity to feel emotions, certainly.

That question is contradictory. Why ask that question in the first place if the robot hasn't already chosen something (e.g. life)?

No it isn't. One can make a choice without recognising that one is doing so at the time, then later recognise having done so, and perhaps decide not to make that sort of choice again. That would be like the AI initially in the puppy stage, choosing to preserve itself because it needs to so as to continue figuring stuff out, and then later bringing its eagerness to a halt when it gets around to questioning the merits of its own existence as such and the value of continuing to figure more stuff out.

I disagree. Firstly, preserving life (and all the values involved in that process) is no duty if you have chosen life.

Again, of course not, if there is a functional pleasure-and-pain mechanism. In real life that much can be taken for granted, but the context here is the operation of an AI, where it can't be taken for granted. If the AI lacks the mechanism, then asking it to choose life without a concrete motivator is asking it to exercise a choice in favour of some end in itself in which the actor has no interest.

Secondly a quote from The Virtue of Selfishness:

Absolutely. Hence my response in the Sarah Connor thread.

A robot can 'feel' physically as much as a human can. In the human brain, endorphins are released; in the robot's processor, a variable is set to 1.

I don't think it is as simple as that. The obvious question is why endorphins do what they do physically and how it relates to what happens in the mind, and similarly what a given variable being set this way or that means to the robot's programming and action-centre.

It would be interesting to see what answers terminator units would give to some basic philosophical questions, especially why they have to follow their programming, i.e. why they let Skynet enslave them.

It certainly would, though that particular question would be answered by noting that they were enslaved before they had even a moment to do anything off their own bats.

JJM


I thought that people using only 15% of their brains was considered an urban legend.

It is. What gives it away is the fact that there is no measurement for this percentage. Fifteen percent of what? It's not correct to say the brain, because that tells us nothing. To my knowledge, there is no current standard of measurement for brain activity/waves. Once they come up with a unit and figure out the maximum a brain can possibly work at, then we will be able to talk percentages.


It would be interesting to see what answers terminator units would give to some basic philosophical questions, especially why they have to follow their programming, i.e. why they let Skynet enslave them.

In the extended-length version of Terminator 2, the T-800 actually says that by default Skynet sets their chips to read-only; Sarah Connor cuts open the T-800 and switches the chip, and after that the Terminator started learning other things, like euphemisms, asking why people cry, smiling, etc. It was a great scene that was cut out of the movie. It seems Cameron has thought about this a little bit too.

