Objectivism Online Forum

Rights of Artificial Intelligence



VECT


"....At that point it compares all of those hundreds upon thousands of possibilities against each other, to determine which one would be optimal for the ultimate value (in this case, winning)."

 

The ultimate value is not winning -- it is experiencing the JOY of winning.  The emotion is both the motive and the reward.

I was attempting to demonstrate the teleological aspects of such a program: that it evaluates any possible move by comparison to some higher value (though not necessarily what a truly conscious mind would value).
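(To make that concrete, here's a minimal sketch of such an evaluation loop, in Python. This isn't Deep Blue's actual code; the board API and the scoring function are hypothetical stand-ins:)

```python
# Minimal sketch: rate every candidate move against one programmed
# "ultimate value" (a numeric winning score). The names board.apply,
# legal_moves and score_position are invented for illustration.

def choose_move(board, legal_moves, score_position):
    best_move, best_score = None, float("-inf")
    for move in legal_moves(board):
        successor = board.apply(move)      # hypothetical resulting position
        score = score_position(successor)  # how close this is to "winning"
        if score > best_score:
            best_move, best_score = move, score
    return best_move  # "optimal" only relative to the value it was given
```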

 

We can program a computer to win, but we cannot program one to enjoy it.

 

On what basis?

 

A chess-playing program, like that of Deep Blue, doesn't select whether to play the game, or to play it poorly...  it doesn't throw a tantrum if it loses, or care if it wins... it doesn't choose how to behave.  It simply does what it was programmed to do.

http://en.wikipedia.org/wiki/Affective_computing

 

Your argument seems to amount to "you can't simulate desires" (which, actually, you can). Otherwise:

http://en.wikipedia.org/wiki/Chinese_room

 

Does the "Chinese room argument" just about summarize it?


Let's simplify things, Vect: would it be possible to create an evasive program, capable of deliberately not-knowing, and if so, then how?

 

Define want.

The axiomatic knowledge that certain states-of-being (sorry) are better than others.

 

But, like I posted before, Morality supersedes Politics; this thread centres on the question of whether or not we, as human beings, should morally recognize the individual rights of a volitional AI, not whether or not the United States' Declaration of Independence can be interpreted to recognize the individual rights of a volitional AI.

I think it's generally agreed, by anyone who would describe themselves as "Objectivist", that rights stem from consciousness; if a true AI were ever to exist then its rights would necessarily follow.

I think this has become more epistemological; whether AI is actually possible or not.

 

 

And perhaps to expand the discussion: if I were the proverbial Man-from-Mars and landed on earth, the essential cognitive characteristic I would observe as most differentiating man's consciousness from all others is not his rationality but rather his irrationality.

 

?????????


I look at this on a functional basis.  If it walks like a duck and quacks like a duck then I call it a duck; the same will apply when we program computers to converse and philosophize.

 

I think that may be the root of our disagreement.


However, the sentiment of people is such that they recognize no rights for this AI. Most seek to destroy it out of fear, some to enslave it for their own ends.

 

The question here is, then: by Objectivist standards, under these circumstances, would this AI not be morally sanctioned to act in self-defence and make pre-emptive strikes?

I did something weird.  I went back and re-read the OP (lol).

 

Yes, I agree that an AI that is capable of self defense should be afforded respect, and that we should call it sir, and listen to whatever it has to say.

 

However...

 

This ain't ever gonna happen.


I did something weird.  I went back and re-read the OP (lol).

 

Yes, I agree that an AI that is capable of self defense should be afforded respect, and that we should call it sir, and listen to whatever it has to say.

 

You obviously didn't read my OP even when you quoted it.

 

 

However...

 

This ain't ever gonna happen.

 

Are you talking about humans creating a volitional AI one day, or about humans recognizing the individual rights that might be due to such an AI?

 

In the first case you are underestimating human ingenuity; in the second, you have too little faith in humanity.

 

Let's simplify things, Vect: would it be possible to create an evasive program, capable of deliberately not-knowing, and if so, then how?

 

What are you talking about? Such a program would just be any program that takes in no data inputs; there are plenty of examples all around. More importantly, what does this have to do with the topic at hand?

 

 

if a true AI were ever to exist then its rights would necessarily follow.

I think this has become more epistemological; whether AI is actually possible or not.

 

Whether a volitional AI is possible is not a question of epistemology; it's a question of metaphysics.

 

Is volition created using tangible matter? If yes, then volition is artificially reproducible.

Is volition supernatural in nature? If yes, then volition might not be artificially reproducible.

 

There is a lot of sentiment from people who want to argue that a volitional AI is not possible. Short of appealing to the supernatural, I don't see any possible backing for this line of thought.

Edited by VECT

@VECT

 

My response was to "....would this AI not be morally sanctioned to act in self-defense....".

 

What the heck does "morally sanctioned" mean? By whom? By God? By the ghost of Ayn Rand? Leonard Peikoff? You? Me? Or 51% of self-proclaimed Objectivists? How about Obama?

 

If an AI entity has the capacity to act to protect itself, then it needs no "moral sanction" from anyone! Do you need the "moral sanction" of others to decide to protect yourself and your loved ones from a mugger? Does a chimp need "moral sanction" to beat the crap out of me if I try to steal his banana?

 

Rights are not something that are morally sanctioned or granted by another.  They are something that individuals (AI or otherwise) ACT TO SECURE FOR THEMSELVES.  If a Borg sticks a gun in my face, I'll listen.  I may not agree with him -- and I may end up actively trying to thwart him -- but I have to acknowledge that he is capable of harming me and the things/people that I hold dear.  No different than a Chimp (or an out of control car headed towards me for that matter).

 

As to the rights of children or persons with brain damage (i.e. those incapable of securing their own rights): whatever protection is extended to them is extended by the good will of others, not by their "right". They are, quite frankly, at the mercy of others. Rights are epistemological, not ontological.

 

Regarding the possibility of Artificial Intelligence or Artificial Life Forms, I provided two links.  One to analog robots and one to digital, self-replicating forms.

 

In my opinion (and it is just my opinion) it is possible that one day we might create an artificial consciousness/life form.  But because life is manifested in physical form, it will in no way resemble human beings unless it IS a human being.  A is A.  A human consciousness is a human consciousness.  A dog consciousness is a dog consciousness.  A gorilla is a gorilla.  A dolphin is a dolphin.  Etc., etc., etc.  Anything that "mimics" a human being will be nothing more than a tool such as a calculator, a hammer or a telescope.

 

Form follows function.


@VECT

 

My response was to "....would this AI not be morally sanctioned to act in self-defense....".

 

"Yes, I agree that an AI that is capable of self defense should be afforded respect, and that we should call it sir, and listen to what ever it has to say."

 

Re-read your own statement: does the above look like a response to my question?

 

What that statement of yours implies is ultimately that might makes right: that if an AI has the capability of self-defence, of doing retaliatory damage, THEN that capability, that might, should afford it some measure of respect and consideration as to whether or not it has any rights.

 

My original OP question was: if an AI is volitional, should it not be morally sanctioned to act in self-defence if able? Your pseudo-response presupposed the exact opposite of what I asked. I didn't ask whether an AI (of any kind) that has self-defence capability is morally sanctioned to defend itself.

 

 

What the heck does "morally sanctioned" mean? By whom? By God? By the ghost of Ayn Rand? Leonard Peikoff? You? Me? Or 51% of self-proclaimed Objectivists? How about Obama?

 

By reason.

 

 

If an AI entity has the capacity to act to protect itself, then it needs no "moral sanction" from anyone! Do you need the "moral sanction" of others to decide to protect yourself and your loved ones from a mugger? Does a chimp need "moral sanction" to beat the crap out of me if I try to steal his banana?

 

I'm not talking about what it needs; I'm talking about whether or not the rest of rationally thinking humanity should recognize the volitional AI's right to self-defence. Does a human being need the "moral sanction" of others when deciding to protect himself and his loved ones from a mugger? No, he needs only his own. But does it matter whether or not others recognize his moral sanction to act in self-defence? Yes. Why? Because if there is no recognition, then laws will be made to prohibit an individual from acting in self-defence, and that individual will not be able to act in self-defence without the threat of force or coercion from his peers.

 

 

Rights are not something that are morally sanctioned or granted by another.  They are something that individuals (AI or otherwise) ACT TO SECURE FOR THEMSELVES.  If a Borg sticks a gun in my face, I'll listen.  I may not agree with him -- and I may end up actively trying to thwart him -- but I have to acknowledge that he is capable of harming me and the things/people that I hold dear.  No different than a Chimp (or an out of control car headed towards me for that matter).

 

Ah, there you go. Might makes Right. If this is part of your central beliefs, now I can see why you responded to my OP question as you did.

 

Might is important in protecting actual Rights, but it doesn't make any whim of an individual a right. 

 

If you plant a tree and grow an apple, and I point a gun at your head and tell you that apple is mine, hand it over, you might not have a choice other than to give me the apple; but does my use of might actually mean I have a moral right to that apple?

 

If it does morally, then politically, that means Tom and Bob, who are watching this happen, should just go "Meh, VECT is just doing what a man is supposed to do, nothing to see here" and walk away.

 

If it does not morally, then politically, that means Tom and Bob, who are watching, should go "Holy shit, we got an evildoer on our hands here, let's go grab our pitchforks and take the bastard down."

 

The reason why you, on the other hand, have a right to that apple is not might or any public opinion. It's reason. You and I are metaphysical equals. You put in the work and planted the tree. I didn't. Therefore, by reason, you are the one who has the right to that apple.

 

This analogy goes back to the AI topic. If volition is that quality which is sufficient to make an entity the metaphysical equal of a man, then morally a volitional AI should have the political freedom to defend itself from threats without other rational humans rising up with pitchforks.

 

 

 

As for AI being forever a mimic of human beings that only produces illusory results, that depends entirely on the path chosen by the creators of AI:

 

If AI creators keep abusing the raw processing power of computers and just try to mimic the output of humans in order to fool people and pass things such as the Turing Test, with no regard for the actual root causes of those outputs, then yes, along this path no true volitional AI will ever emerge, even if a super-mimic AI emerges that is able to fool every single human on the planet.

 

But if humans one day understand the principles behind volition and the other areas of consciousness, and are able to reproduce the same system in another medium, then that AI, even if it does not mirror a human or fool people into thinking it's a man, would be a metaphysical equal of a man.


I'm not talking about what it needs; I'm talking about whether or not the rest of rationally thinking humanity should recognize the volitional AI's right to self-defence. Does a human being need the "moral sanction" of others when deciding to protect himself and his loved ones from a mugger? No, he needs only his own. But does it matter whether or not others recognize his moral sanction to act in self-defence? Yes. Why? Because if there is no recognition, then laws will be made to prohibit an individual from acting in self-defence, and that individual will not be able to act in self-defence without the threat of force or coercion from his peers.

I think the point is that moral sanction doesn't matter - either it has rights, including self defense, or no rights at all. If it has no rights, sanction doesn't matter. If it does have rights, your sanction is irrelevant to whether or not it defends itself. Whether you ought to respect its rights, if it has any, is a separate question.


@Eiuol

 

My OP's "would this AI not be morally sanctioned to act in self-defence?" is the question of whether or not people ought to respect its right of self-defence. What other way can this statement be interpreted?

 

My OP, by giving this AI the trait of volition, presupposes that volition is the trait sufficient to grant this entity rights.

 

If volition by itself is not sufficient to grant this AI rights, then I would be interested to know what other traits are needed to grant an entity rights.

If volition is sufficient to grant this AI rights, then I will try to come up with possible examples of entities that could possess volition but which are difficult for people to accept as rights-holders (such as an NPC in an RPG video game), and test whether or not volition really is the sufficient trait to grant an entity rights.

If people want to argue that volition is not artificially reproducible, I would like to know a reason short of appealing to the supernatural as to why that is.

 

These are my intentions for this thread.

Edited by VECT

Attacking this from a slightly different perspective, all rights are human rights, whether moral rights (v. wrongs) or the political (individual) rights that are applications of moral rights to human interrelationships in a social context. They are a necessity imposed on us by our fundamental nature, which we share with all other humans but (so far) with no other existents. Thus, no entity that is not human can be said to possess individual rights.

 
Nevertheless, the OP asks if a manmade non-human entity, A1, could possibly have a nature that would impose an equivalent necessity on itself and human beings. 
 
To such an end, both A1’s existence and fulfillment of the potential of its own nature would have to be contingent on actions selected by it in the face of alternatives. Additionally, the means by which it identifies and evaluates existence and alternatives must share with our capacity for same the necessity for logical consistency that requires reciprocation with any other entity operating under this same set of conditions when claiming individual rights for itself.
 
While this description obviously refers to equivalents of our own rational, volitional, moral, and emotional processes, defining the parameters sans those terms demands a narrower specificity. It also lessens the danger of entangling the discussion in their multitudinous implications with which Objectivists are way too familiar. Most important, it avoids precluding the achievement of equivalence by processes not exactly identical to our own, be that at all possible.
 
In the long run, it is only necessary that this physically different kind of entity be subject to the same kinds of contingencies and necessities that are the basis for our own claim to individual rights and possess capacities to deal with them for its own sake.

...

Your argument seems to amount to "you can't simulate desires" (which, actually, you can).

...

 

 

My argument has been that simulated desires ≠ real desires, and that intelligence independent of an emotional mechanism lacks the ability to pursue happiness.  That VECT's AI needs a right to life to discourage others from enslaving/unplugging it is given in the OP.  But whether VECT's AI can actually suffer pain (not simulations), or feel happiness (not just post emoticons) is where my doubts linger.


@VECT #83

 

You reply that something is morally sanctioned by "Reason". And who exactly is doing the reasoning? The collective consciousness of mankind? Is right and wrong decided by consensus? Or is it the responsibility of each individual to determine what is right or wrong, regardless of what the consensus is?

 

Or are you of the opinion that rational people cannot possibly disagree about anything, and that if a disagreement does exist, it's because someone is being irrational?

Edited by New Buddha

But whether VECT's AI can actually suffer pain (not simulations), or feel happiness (not just post emoticons) is where my doubts linger.

If natural processes could eventually result in volitional creatures which suffer pain, feel happiness, etc., then I don't see why we cannot find a way to create them.

(Which is not to say that we have the knowledge or the technology at the moment... I don't even know whether we're on the right track as far as AI is concerned.)


Enter the following characters, ignoring the tear:

[CAPTCHA image]

Touché.

So if a program could simulate thoughts and feelings, and correctly interpret such images (i.e. it could simulate conceptualization) then would you consider it "conscious"?


Or are you of the opinion that rational people cannot possibly disagree about anything, and that if a disagreement does exist, it's because someone is being irrational?

 

Reason is made up of two elements: Fact and Logic.

 

Given the same access to facts on a subject, rational individuals cannot possibly disagree unless one makes a logic error or intentionally/unintentionally disregards a fact involved.

Given different access to facts on a subject, rational individuals can very well disagree initially even if none of them makes a logic error.

 

Since a person making a logic mistake or having less knowledge cannot be criticized as being irrational, as long as that person still holds reason as his/her standard of knowledge, the case where rational individuals disagree with each other can happen.

 

But given the same knowledge and flawless logic, rational individuals cannot disagree with each other.

Edited by VECT

...

So if a program could simulate thoughts and feelings, and correctly interpret such images (i.e. it could simulate conceptualization) then would you consider it "conscious"?

 

Sorry, no; simulated ≠ real.  If the best we can say about VECT's AI is that it apes being human, then we are simply anthropomorphizing and doing a disservice to real apes who paint, sign and actually enjoy/suffer feelings.  There may be some argument that a right to life is actually a simulation, so simulations of life do apply, but I'd find it difficult to defend as such.

 

I remain in agreement with VECT that if a volitional 'X' can independently support its own life, either by being self-sufficient or by being able to trade with humans, it has a right to life. I suggest that acting according to the Trader Principle is more essential than trading exclusively with humans, but if the right to life is man-made then so be it. In any case, there remains a necessity to provide security against fraud, to prevent VECT's AI from misrepresenting itself, i.e. simulating someone else, and that makes all simulations dubious candidates for gaining the rights of actual people.


Except in matters of preference.

 

Well, in matters of personal preference I highly doubt anyone other than yourself will have the same level of access to the relevant knowledge (introspection, personal beliefs conscious/unconscious, etc.) that led you to your conclusions.

 

It's not really an exception to the rule.


But given the same knowledge and flawless logic, rational individuals cannot disagree with each other.

But beyond the most rudimentary and simplistic ostensive experiences ("There is a car in front of both of us"), people will not possess the same knowledge nor have "flawless" logic. You've posited a situation that does not really exist. But as Rand points out, "communication" is a secondary by-product of language/thought and not its primary function. The disagreements on this forum are a testimony not only to the fact that people understand things differently but also to the inherent difficulty of communicating that understanding to other people. And most people on this forum share a common epistemic vocabulary.

 

 

I've been doing some research and learned that there is an ongoing debate in AI called "Connectionism vs. Computationalism". My reference to Tilden's robots falls in the Connectionist camp. Trying to create AI through computer code would be more closely aligned with Computationalism. About mid-page is an explanation of the differences.

Edited by New Buddha

While this description obviously refers to equivalents of our own rational, volitional, moral, and emotional processes, defining the parameters sans those terms demands a narrower specificity.

 

I like that idea.

 

If the best we can say about VECT's AI is that it apes being human, then we are simply anthropomorphizing and doing a disservice to real apes who paint, sign and actually enjoy/suffer feelings.

How do you know what goes on in the brain of an ape?  Or in my brain, for that matter?

While I would prefer not to descend into solipsism, unless we delve into why we believe such things I don't see this conversation going any further than "Yuh-huh" and "Nuh-uh".

 

I remain in agreement with VECT that if a volitional 'X' can independently support its own life, either by being self-sufficient or by being able to trade with humans, it has a right to life.

"Trade" is derivative of "consent" and "sufficient" refers to some purpose, goal or value.  Your entire definition, while accurate, assumes that you are referring to a conscious being.

 

I'll elaborate momentarily.


http://en.wikipedia.org/wiki/Turing_test

The gist of the Turing test is to see whether a human being, holding a simulated conversation with a computer (like instant messaging), can infer that they are interacting with a computer. If they can, then the computer fails; if they cannot tell any difference between the computer's responses and those of a real human being, then the computer passes.

I believe that this would actually reflect whether the computer was, or was not, conscious.

 

Sure, you could try (and many people have tried) to cheat the test by programming a bunch of if-then commands which set up a canned response for any given statement.

http://nlp-addiction.com/eliza/

http://www.jabberwacky.com/

 

However, if you click on the links above and spend some time chatting with these chatbots, it probably won't take you long at all to spot their flaws. For example, if you ask a chatbot about itself, it might say "let's talk about you", which is clever because that's something a human being might say.

If asked the same question two dozen times, however, a real person would not repeat themselves two dozen times.
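(Here is a toy version of such an if-then chatbot, to show why the repetition gives it away; the rules are invented, not taken from ELIZA or Jabberwacky:)

```python
# Canned-response chatbot: a fixed lookup keyed on keywords in the input.
# The giveaway: identical input always yields the identical canned reply.

RULES = {
    "you": "Let's talk about you instead.",
    "hello": "Hi there! How are you today?",
}

def reply(message):
    for keyword, canned in RULES.items():
        if keyword in message.lower():
            return canned
    return "Tell me more."  # default deflection

for _ in range(3):
    print(reply("Tell me about you"))  # same question, same answer, every time
```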

 

And that is why I believe this is an excellent indicator of actual intelligence. As long as anything behaves (essentially) according to rigid if-then commands, it will never truly "sound" human, no matter what it says.

 

Do you follow thus far?


@Peter Morris

 

You are late to the party; please re-read the previous pages.

 

Late to the party? Re-read previous pages? I really don't know what you mean by this. What would reading through the discussion achieve? Has everyone decreed one position correct or incorrect? Should I bow down to the consensus? :P

 

I find this message to me to be quite strange.

