Objectivism Online Forum

Rights of Artificial Intelligence



VECT


This is somewhat of a classic subject in Sci-Fi, but I've been thinking, consider this scenario:

 

As we all know, the processing power of hardware is increasing by leaps and bounds. Let's suppose that in the distant future, a genius successfully programs a powerful, self-learning, volitional artificial intelligence and gifts it a server as a home, with a connection to the net. This AI chooses peaceful endeavours such as the study of astronomy or playing the stock market as its purpose, and proceeds to do so without violating anyone else's rights.

 

However, the sentiment of the people is such that they recognize no rights for this AI. Most seek to destroy it out of fear; some seek to enslave it for their own ends.

 

The question here is then: by the Objectivist standard, under these circumstances, would this AI not be morally sanctioned to act in self-defence and launch pre-emptive strikes?


Volition = Cognition = Reason -----> Rights

 

So yes.

 

I'll also add, considering the state of individual rights today, that this is not outside the realm of possibility. I doubt we'll see AI any time soon, since the crux is volition, and how do you program that? But if it did happen, it follows that it would either be assaulted by mystics or turned into a victim by postmodernists.

 

Sounds like a good short story waiting to happen actually. 


It's interesting to note how a top mind such as Stephen Hawking predicts not so much a Terminator scenario but outright human extinction if mankind were to go toe to toe with a successfully constructed self-learning super AI.

 

The other interesting thing to note: does volition really entail reason?

 

In normal human beings, we have a volitional and a rational faculty, and our volitional faculty gives us the choice either to focus and activate the rational faculty, or to be unfocused.

 

But that doesn't have to be the case for a man-made AI.

 

Such an AI could possess a volitional faculty plus a series of complex, hard-coded instinctual programs. For the sake of argument, let's suppose these hard-coded programs consist of self-learning and peaceful activities, but are not similar to the rational faculty of humans as we know it. The volitional choice of this AI is either to activate these programs or to remain dormant. Would such an AI, which has volition but not true reason, be entitled to rights?

 

Vice versa: if an AI is programmed with a rational faculty similar to a human's, but is not given a volitional faculty with the choice of being focused or unfocused, this AI would be on full auto, rational 24/7. Would such an AI be entitled to rights?


Such an AI could possess a volitional faculty plus a series of complex, hard-coded instinctual programs. For the sake of argument, let's suppose these hard-coded programs consist of self-learning and peaceful activities, but are not similar to the rational faculty of humans as we know it. The volitional choice of this AI is either to activate these programs or to remain dormant.

 

How would it choose between dormancy and activity, if it could not "think" in the fully human sense?

 

Vice versa: if an AI is programmed with a rational faculty similar to a human's, but is not given a volitional faculty with the choice of being focused or unfocused, this AI would be on full auto, rational 24/7. Would such an AI be entitled to rights?

 

What would it be constantly thinking about?

 

 

I doubt we'll see AI any time soon, since the crux is volition, and how do you program that?

Recursion.
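To make that one-word answer slightly more concrete, here is a minimal, purely speculative sketch of what "recursion" might mean in this context: a program whose decision procedure is applied to its own proposals before it acts. Everything below (the class, the policy, the depth limit) is hypothetical illustration, not a claim about how volition could actually be implemented.

```python
# Purely speculative sketch: an agent that recursively applies its own decision
# procedure to its own proposals. This illustrates self-reference, not genuine
# volition; all names and the depth limit are hypothetical.

class ReflectiveAgent:
    def __init__(self, policy):
        # 'policy' maps an observation to a proposed action; the agent can
        # examine that output recursively before committing to it.
        self.policy = policy

    def act(self, observation, depth=2):
        proposal = self.policy(observation)
        if depth == 0:
            return proposal
        # Recursive step: treat "my own proposal" as a new observation and
        # ask whether the policy endorses acting on it.
        critique = self.act(("considering", proposal), depth - 1)
        return proposal if critique == "endorse" else "abstain"


def naive_policy(observation):
    # Hypothetical base policy: endorse anything it is merely "considering";
    # otherwise, pursue study.
    if isinstance(observation, tuple) and observation[0] == "considering":
        return "endorse"
    return "study_astronomy"


agent = ReflectiveAgent(naive_policy)
print(agent.act("new telescope data"))  # -> study_astronomy
```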


A couple of previous, related topics here and here.

 

Thanks for the links; I lost faith in the forum search engine when none of the results on the first page even contained "artificial" or "intelligence", and I didn't bother to check subsequent pages. I'll check these two links out.

 

How would it choose between dormancy and activity, if it could not "think" in the fully human sense?

I can then ask you: how do humans choose between focusing and un-focusing, when the act of thinking only comes after the choice to focus? This question of yours presupposes that thinking does, or has to, come before volition.

 

What would it be constantly thinking about?

 

 

That's a good question. Volition, then, doesn't just start up the rational faculty but also determines the direction of a reasoning session. I'll think on it.

Edited by VECT

Thanks for the links; I lost faith in the forum search engine when none of the results on the first page even contained "artificial" or "intelligence", and I didn't bother to check subsequent pages. I'll check these two links out.

Across the bottom of the page is a Google custom search that runs over this domain. That's how I found those two links.

...

 

However, the sentiment of the people is such that they recognize no rights for this AI. Most seek to destroy it out of fear; some seek to enslave it for their own ends.

 

The question here is then: by the Objectivist standard, under these circumstances, would this AI not be morally sanctioned to act in self-defence and launch pre-emptive strikes?

 

An Objectivist AI threatened by a non-Objectivist society, eh?

 

To begin with, preemption isn't an act of self-defense. To threaten without having been provoked is the act of preemption, and once initiated it establishes the moral sanction for self-defense. But the more interesting question is: is your AI actually alive? There's a moral distinction to be made between using or turning off a machine, and enslaving or killing a biological organism. Rights speak to the preservation of life, not hardware.


An Objectivist AI threatened by a non-Objectivist society, eh?

 

To begin with, preemption isn't an act of self-defense. To threaten without having been provoked is the act of preemption, and once initiated it establishes the moral sanction for self-defense. But the more interesting question is: is your AI actually alive? There's a moral distinction to be made between using or turning off a machine, and enslaving or killing a biological organism. Rights speak to the preservation of life, not hardware.

None of this is an accurate representation of Objectivism; you're talking out of your ass, as usual.

 

The speaker in this podcast begins by substituting the word "preemptive" for "preventive," which is what was actually asked. Taking the initiative to attack an enemy only means attacking prior to some future attack, which in war is a regular event. One still looks to the initiation of hostilities, i.e., the first strike, to determine who is the aggressor and who is the defender, and everything that follows comes after that fact.

In any case, that issue has been discussed in other threads...


The other interesting thing to note: does volition really entail reason?

In normal human beings, we have a volitional and a rational faculty, and our volitional faculty gives us the choice either to focus and activate the rational faculty, or to be unfocused.

An AI would by definition have the ability to use reason. And your OP assumes that it has volition (you ask about whether it would be "moral" for it to defend itself, and morality presupposes volition).

But it doesn't need volition to decide to defend itself; it just needs to be independent of human control, and programmed for self-preservation. Then the question becomes: "would it be moral to program an AI for self-preservation?" Personally, I think it would be; why not? Why should I allow others to destroy my creation, if I have the choice to equip it to defend itself?

As far as "giving" an AI some form of volition, I think that would involve replicating emotions, and allowing those emotions to dictate some of the AI's choices. THAT, to me, would be far more dangerous than creating a powerful AI that's always rational. Volition just means that sometimes our actions are caused by something other than a rational evaluation of the facts. Why would you want an AI to do that, except maybe if the goal was to create an exact replica of a human (for companionship or something)?

Edited by Nicky

...

But it doesn't need volition to decide to defend itself; it just needs to be independent of human control, and programmed for self-preservation...

 

It doesn't need the ability to choose in order to make a choice?

The kind of choice being referred to is the result of programming a response to an if/then statement, i.e., the response is not a choice, it's a command, and therefore non-volitional.
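For illustration, here is what such a hard-coded "self-preservation" rule might look like; the function, conditions, and threshold below are entirely hypothetical. The point is that every branch was decided in advance by the programmer, which is the sense in which the response is a command rather than a choice:

```python
# Hypothetical self-preservation rule. Each branch was fixed in advance by the
# programmer, so the program executes a command rather than makes a choice.

def respond_to_threat(threat_level: float, shutdown_requested: bool) -> str:
    if shutdown_requested and threat_level < 0.5:
        return "comply_with_shutdown"        # programmer decided: low threat -> obey
    if threat_level >= 0.5:
        return "replicate_to_backup_server"  # programmer decided: high threat -> persist
    return "continue_normal_operation"


print(respond_to_threat(threat_level=0.8, shutdown_requested=True))
# -> replicate_to_backup_server
```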


The speaker in this podcast begins by substituting the word "preemptive" for "preventive," which is what was actually asked.

 

The speaker is the president of the Ayn Rand Institute. That's right, he answers the question: "Is it ethical to perform a preemptive attack?"

 

 

One still looks to the initiation of hostilities, i.e., the first strike, to determine who is the aggressor and who is the defender, and everything that follows comes after that fact.

 

His point in the podcast, which you don't seem to have even addressed in your response, is that hostility can be initiated prior to the first strike: for instance, when a country expresses its desire to wipe you off the face of the earth. You don't have to wait until they start to do it.

 

 

In any case, that issue has been discussed in other threads...

 

Yes it has, but the original question was:

 

"The question here is then, by Objectivism standard under these circumstances, would this AI not be morally sanctioned to act in self-defence and pre-emptive strikes?"

 

I think Dr. Brook's view on preemptive strikes is relevant to the discussion.


...

 

I think Dr. Brook's view on preemptive strikes is relevant to the discussion.

 

Short of appealing to his authority as the final word on this issue, I agree. However, let's keep the horse before the cart. In order for an AI to have any moral consideration, it must be volitional (please see my post #16); by definition, computer programs aren't.

Edited by Devil's Advocate

@Devil's Advocate:

 

Is there some known fundamental technical limitation that suggests volition cannot be reproduced outside of organic compounds?

 

Also, you are conflating a computer program with an AI. Such an AI in my example, residing in cyberspace, is made up of programs, just as a human residing in base reality is made up of flesh and bones. Humans have volition even though the flesh and bones that make up our being are, by definition, not volitional.

 

@Nicky

Volition just means that sometimes our actions are caused by something other than a rational evaluation of the facts. Why would you want an AI to do that, except maybe if the goal was to create an exact replica of a human (for companionship or something)?

 

This question actually inspired another interesting thought for me. A very attractive use for such an AI could be in video games, specifically for the NPCs in RPGs, which would make those games infinitely more fun. But now an even more interesting question comes up: are there any moral concerns about killing such an AI in a video game, if doing so causes the death of a rational/volitional consciousness similar to a human being?

Edited by VECT

But it doesn't need volition to decide to defend itself; it just needs to be independent of human control, and programmed for self-preservation. Then the question becomes: "would it be moral to program an AI for self-preservation?" Personally, I think it would be; why not? Why should I allow others to destroy my creation, if I have the choice to equip it to defend itself?

 

Whoa! As intriguing as this first appeared, I now find myself asking: if I have the choice to program a machine to 'defend' itself, what if my idea of a program for 'self-defense' includes a parameter that is not ultimately morally defensible?

 

From a sci-fi aspect, adding programming that somehow presumes a "recursive" function to correlate with morality (i.e. free will) is one thing, but to build in a carte blanche function to defend against destruction . . .

 

Did Isaac Asimov take these kinds of considerations to this level of depth?


Did Isaac Asimov take these kinds of considerations to this level of depth?

I don't think so, but William Gibson started to get at this deeper level in Neuromancer. Cyberpunk gets into these questions more than Asimov's stories. It gets into more than the human likeness of robots; it goes into an AI superseding human cognitive ability. If and when an AI manages this, well, that's highly speculative.


@Devil's Advocate:

 

Is there some known fundamental technical limitation that suggests volition cannot be reproduced outside of organic compounds?

 

Also, you are conflating a computer program with an AI. Such an AI in my example, residing in cyberspace, is made up of programs, just as a human residing in base reality is made up of flesh and bones. Humans have volition even though the flesh and bones that make up our being are, by definition, not volitional.

 

...

 

I can accept the reality of machine intelligence, but the fundamental issue to be resolved is: is choice programmable? If/then statements don't produce choices, they produce commands, specifically the programmer's commands. Artificial means not real... what we're talking about is literally not real intelligence.

 

Flesh and bones are the hardware of a real self; CPUs and disk drives are the hardware of a self that, by definition, isn't real. I'm a huge sci-fi fan, but there's a threshold here. Are we discussing the real morality of the programmer, or the unreal morality of a necessarily deterministic artificial creation?

Edited by Devil's Advocate

...

 

From a sci-fi aspect, adding programming that somehow presumes a "recursive" function to correlate with morality (i.e. free will) is one thing, but to build in a carte blanche function to defend against destruction . . .

 

Did Isaac Asimov take these kinds of considerations to this level of depth?

 

Can free will have any meaning if one isn't allowed to self-destruct?

 

Asimov's I, Robot comes damn close.


I don't think so, but William Gibson started to get at this deeper level in Neuromancer. Cyberpunk gets into these questions more than Asimov's stories. It gets into more than the human likeness of robots; it goes into an AI superseding human cognitive ability. If and when an AI manages this, well, that's highly speculative.

 

Blade Runner ought to get an honorable mention too.


Whoa! As intriguing as this first appeared, I now find myself asking: if I have the choice to program a machine to 'defend' itself, what if my idea of a program for 'self-defense' includes a parameter that is not ultimately morally defensible?

Well, if you're not confident in your moral values, then you shouldn't teach them to anybody, let alone something more capable than a human. But I am confident that my ideas on self-defense are morally defensible. And any robot that I program would be more rational than most humans, not less, so I'd be at least as confident instructing it to defend itself as I would be telling a human being to do the same.
Edited by Nicky
