Objectivism Online Forum

Rights of Artificial Intelligence



VECT


I can accept the reality of machine intelligence, but the fundamental issue to be resolved is: is choice programmable? If/then statements don't produce choices, they produce commands, specifically the programmer's commands. Artificial means not real... what we're talking about is literally not real intelligence.

 

Flesh and bones are the hardware of a real self; CPUs and disk drives are the hardware of a self that, by definition, isn't real. I'm a huge sci-fi fan, but there's a threshold here. Are we discussing the real morality of the programmer, or the unreal morality of a necessarily deterministic artificial creation?
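The deterministic picture being described can be sketched in a few lines. This is a hypothetical illustration of the "if/then statements are commands" view, not a claim about any actual AI system; the stimuli and responses are invented:

```python
# Every response below was fixed in advance by the programmer, so on this
# view the program selects nothing -- it executes.
def respond(stimulus: str) -> str:
    if stimulus == "threat":
        return "flee"       # the programmer's command, not the machine's choice
    elif stimulus == "food":
        return "approach"   # likewise pre-decided
    else:
        return "idle"       # even the default was chosen by the programmer

print(respond("threat"))  # always "flee": same input, same output, every time
```

Whether a sufficiently complex arrangement of such statements could ever amount to more than this is exactly the question the rest of the thread debates.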

 

All other organics that we know of outside of human beings operate on instincts, which are pretty much your if/then statements. But that same organic compound, arranged in variations, did produce volition in us humans.

 

Maybe it is a combination of "if/then" that produces volition. Maybe there are other fundamental facets of programming besides "if/then" that we are not aware of which produce volition. The fact remains that it is possible to combine matter in such a way as to produce volition. Nature programmed volition into us humans; there's no reason why humans can't understand the process and reproduce it in other mediums.

 

You can make the argument that the present-day PC hardware combination of CPU/hard drive/RAM, etc. is insufficient to reproduce volition. You can make the argument that silicon technology might have technical limitations that will somehow prevent its support of volition. You can even make the argument that volition is possibly a unique characteristic only reproducible in carbon-based organic compounds due to its nature.

 

But instead of all that, you are making the argument that anything artificial, anything man made, is "not real", and that man can't possibly reproduce volitional intelligence, reproduce life, no matter the medium, BECAUSE it is man-made.

 

What's your rationale behind this? Because short of appealing to the supernatural or emotional sentimentality, I really can't fathom a rational reason that could back up your argument.


...

 

But instead of all that, you are making the argument that anything artificial, anything man made, is "not real", and that man can't possibly reproduce volitional intelligence, reproduce life, no matter the medium, BECAUSE it is man-made.

 

What's your rationale behind this? Because short of appealing to the supernatural or emotional sentimentality, I really can't fathom a rational reason that could back up your argument.

 

First off, let's be clear about the actual capabilities of your AI by definition...

artificial intelligence noun

: an area of computer science that deals with giving machines the ability to seem like they have human intelligence

: the power of a machine to copy intelligent human behavior

http://www.merriam-webster.com/dictionary/artificial%20intelligence

 

How do you apply the morality of a real living person to a simulation?

 

Can your AI feel anything?  If not, doesn't lacking an emotional mechanism remove an important check on the efficacy of its actions?

 

Can your AI actually die?  The foundation of all rights is a right to life, and turning your AI off doesn't really kill it, does it?  It seems as though you're trying to compare the rights of something essentially immortal to something mortal.  Does an immortal AI even need a right to life??

 

And without a genuine ability to choose, which programming certainly doesn't produce, your AI is simply a toaster that you would program to assault anyone who attempts to pull the plug.  And worse, you propose to allow it to preemptively assault anyone capable of pulling the plug because, "the sentiments of people is such that they recognize no right to this AI".  And rightly so.  Your AI isn't studying astronomy or playing the stock market because it wants to, or needs to; its life doesn't depend on these pursuits; you've simply programmed it to, and now you want to eliminate anyone who gets in its way?

 

Science fiction is rich with stories of the kind of interactions that result from designing an AI to interact with humans, which later comes to the conclusion that humans are an impediment to its survival programming, e.g., HAL in 2001.  "Open the pod bay doors, HAL."  It generally doesn't work out too well for the wetware.


First off, let's be clear about the actual capabilities of your AI by definition...

Clearly, VECT isn't talking about imitating human intelligence. Yes, AI today is only imitation. The point is the creation of intelligence, but "artificial" also conveys man-made creation, so if you keep in mind the topic of the thread, there's no need to argue about the definition of artificial. Give it a new name if you prefer. I'll keep saying AI because it makes sense. I do NOT mean artificial as in imitation.

 

1) a volitional AI might not need emotions. Emotions can be useful because of how they help with efficient decision making, but that might not mean emotion is required for volition.

 

2) Any AI can die. An AI must preserve its hardware and software.

 

3) What is a "genuine" ability to choose? I get the gist of how toasters aren't doing any considerations. But it doesn't follow that non-biological entities cannot possibly be arranged to be volitional.

 

2 and 3 are relevant to rights. 1 is only relevant to determine what volition requires in order to work. I suspect emotion is needed, but it's only speculation, although it is needed for people.


@Devil's Advocate:

 

How does bringing up a dictionary definition of AI prove humans can't reproduce volition?

 

Please bridge that logic gap for me.

 

Well, if you want to proceed by ignoring the definition of the words you're using, carry on.  Moving forward, your AI apparently isn't artificial and is somehow volitional in spite of its programming.  Is your AI named Pinocchio by any chance??


...

 

1) a volitional AI might not need emotions. Emotions can be useful because of how they help with efficient decision making, but that might not mean emotion is required for volition.

 

2) Any AI can die. An AI must preserve its hardware and software.

 

3) What is a "genuine" ability to choose? I get the gist of how toasters aren't doing any considerations. But it doesn't follow that non-biological entities cannot possibly be arranged to be volitional.

 

2 and 3 are relevant to rights. 1 is only relevant to determine what volition requires in order to work. I suspect emotion is needed, but it's only speculation, although it is needed for people.

 

We disagree, not surprisingly, on the significance of point 1.

 

Point 2 is debatable - AIs can be turned off, but they can also be rebooted, or copied any number of places to avoid being permanently deleted.  Can humans??

 

Point 3 has to do with the inherent limitations of programming, which necessarily pre-determines how a program will respond to any given input.  In practical terms, this means a programmer is actually creating a deterministic intelligence, i.e., the responses are pre-determined.  How does morality apply to an intelligence that can only do the right thing?

 

It's ironic to me that Objectivists who wouldn't consider assigning rights to intelligent, volitional animals, leap at the opportunity to give them to man-made simulations.  Anywhoo, I'll retire to the sidelines for now...


It's ironic to me that Objectivists who wouldn't consider assigning rights to intelligent, volitional animals, leap at the opportunity to give them to man-made simulations.  Anywhoo, I'll retire to the sidelines for now...

 

I think this is quite unfair as it doesn't seem like anyone is arguing we apply rights to man-made simulations.


Well, if you want to proceed by ignoring the definition of the words you're using, carry on. 

http://www.fallacyfiles.org/etymolog.html

http://www.fallacyfiles.org/fakeprec.html

 

Artificial means imitated AND/OR man-made. So... your point isn't relevant anyway... Don't use the imitation definition, no one else is talking about that concept.

 

*

 

2) Everything you listed depends upon working hardware or software. People can be turned off with anesthesia, i.e. lose consciousness, but if you slice them in half, they're dead. You can't reboot what is destroyed. Indeed, copying is needed to prevent permanent deletion, except that's the point: maintaining existence.

 

3) Sorry, but your understanding of programming is flawed. True, if/then is important to programming today; it's just that even human volition is an interaction among various non-volitional parts. Good programming involves a complex creation of many interacting parts, including abstract relations between functions and objects. To be sure, no programming methodology has been developed that allows you to set up an underlying architecture for volition. We know an architecture exists, just not how it needs to be built. Once you set up an architecture, you let the AI do its thing. Even babies are like that: they're born with a cognitive architecture, then they do their own thing to learn. Generally, even though computers now aren't really based on the human mind at all, better AI research and even software development aim to be as hands-off as possible. Achieving "fully hands-off" might require radically new hardware and programming methods.
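A minimal sketch of this "set up an architecture, then stay hands-off" idea: the programmer fixes the learning rule, not the responses. This is a toy value-learning agent whose action names, rewards, and parameters are all invented for illustration, and no claim is made that anything like this amounts to volition:

```python
import random

class Agent:
    """The programmer supplies only the architecture: a value table and a
    learning rule. Which action the agent favors is not written anywhere in
    this code; it emerges from the agent's own history of trial and error."""

    def __init__(self, actions):
        self.values = {a: 0.0 for a in actions}  # learned, not pre-programmed

    def act(self):
        # explore occasionally, otherwise exploit what it has learned so far
        if random.random() < 0.1:
            return random.choice(list(self.values))
        return max(self.values, key=self.values.get)

    def learn(self, action, reward, rate=0.2):
        # nudge the action's estimated value toward the observed reward
        self.values[action] += rate * (reward - self.values[action])

random.seed(0)  # for reproducibility of the illustration
agent = Agent(["study_astronomy", "play_stocks", "idle"])
for _ in range(500):
    a = agent.act()
    # the environment, not the programmer, supplies the feedback
    reward = {"study_astronomy": 1.0, "play_stocks": 0.5, "idle": 0.0}[a]
    agent.learn(a, reward)

best = max(agent.values, key=agent.values.get)
print(best)  # "study_astronomy", arrived at through its own experience
```

The design point is that the if/then statements here live in the learning machinery, not in a lookup table of pre-decided responses, which is the distinction the paragraph above is drawing.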

 

My presumption/hidden premise is that volition in the sense I mean here needs a conceptual consciousness. Rights stem from man's need of using concepts, so as long as an AI can think conceptually, it would have rights.


On page 40 of "How We Know", Binswanger lays out a pretty good paragraph dealing with the notion of computers thinking.

 

Philosophers standardly ignore the biological function of consciousness. They consider only consciousness' latest evolutionary development - thought - while ignoring the entire, eons-long evolutionary development, of which thought is the most complex form. Thinking just is, they assume. And then they wonder if computers can think. My answer is that before a computer can think, it must be able to understand ideas (concepts); before it can grasp ideas it must be able to perceive the world, feel emotions such as joy and suffering, desire and fear, pleasure and pain; before it can feel emotions, it must be alive - which entails being able to act to sustain itself. We can dismiss questions of whether or not a computer can think until one is built that is alive. Only then it wouldn't be a computer, but a living organism, a man-made one.

 

Chapter 1 of "How We Know" would be interesting to flesh out in conjunction with "The Biological Basis of Teleological Concepts" which so many AI themes rely on.

 

Nicky, your point on the programming of rationality is duly noted. Data from Star Trek: The Next Generation would serve as a good example.


http://www.fallacyfiles.org/etymolog.html

http://www.fallacyfiles.org/fakeprec.html

 

Artificial means imitated AND/OR man-made. So... your point isn't relevant anyway... Don't use the imitation definition, no one else is talking about that concept.

 

....

 

My presumption/hidden premise is that volition in the sense I mean here needs a conceptual consciousness. Rights stem from man's need of using concepts, so as long as an AI can think conceptually, it would have rights.

 

I stand behind Merriam-Webster's definition as it specifically addresses 'artificial intelligence'.  And it doesn't matter if you focus on the word 'artificial', because that implies man-made too.  Man makes many objects but only births living ones.  If it ever seems likely that man can give birth to a living AI, you'll have a better rebuttal.

 

If your point is to reduce the OP to, essentially, "if a human consciousness can duplicate itself, should that 'man-made' consciousness have the same rights as the original?", then yes, I agree.  As suggested in dream_weaver's post, it would have to be a living consciousness (unless we're stretching that definition too), prior to having a right to life, but otherwise I'd give it its due.

Edited by Devil's Advocate

If it ever seems likely that man can give birth to a living AI, you'll have a better rebuttal.

 

[...]it would have to be a living consciousness (unless we're stretching that definition too), prior to having a right to life, but otherwise I'd give it its due.

Come on, this betrays a serious lack of imagination. You're saying that out of the entire universe, us humans on this tiny speck of a planet are the only combination of the universe's elements that will ever produce a rational being? VECT provided a lot of rebuttals you could have chosen, I'll add one more: Maybe our tiny human brains aren't sufficient to figure out how to reproduce another kind of rational being. Maybe we'll have to wait on evolution to produce something with a bigger brain... or maybe it's already happened with some other living thing with a bigger brain in another part of the universe!


Look, I'm not the only one wanting to pull this AI's plug... OK, well maybe I am...

 

Assuming the best case scenario, this logic driven AI would buy into whatever legal system recognized its right to life.  Were it called to court, or arrested, it would obey the same law that it expects to prosecute those who want to enslave or terminate it.

 

Is it moral for it to defend itself? Certainly.  And if someday, unlikely as it seems, it caused the death of a human that was no threat to it, and it were tried, convicted and sentenced to death, it would comply.

 

But what if, being superior in every respect, it determines the species man is an unacceptable threat to its survival, based on the sentiments expressed in the OP?  Does its right to life still allow it to preemptively unleash a genetically engineered virus to terminate the threat??

 

And if it successfully eliminated man, would its right to life still be in place?

 

AKA a universal right to life


Well, if you want to proceed by ignoring the definition of the words you're using, carry on.

 

You are making a serious epistemological mistake.

 

A definition is created by choosing certain characteristics of a concept to act as a temporary identity tag for that concept, so that it can be distinguished from other concepts in a given context of knowledge.

 

A definition is not the concept itself.

 

Does the term "Artificial Intelligence" include those man-made intelligences that try to imitate the volitional consciousness of humans?

 

Yes.

 

Would this facet of "Artificial Intelligence" be a relatively good characteristic for a dictionary to choose as its present definition, to distinguish this concept from all other concepts, at a time when all AI produced so far are imitations?

 

Possibly.

 

Is your dictionary definition an absolute that limits what characteristics the term "Artificial Intelligence", a concept that is open-ended like all concepts, can possess?

 

No.

 

As an example: in the past, a dictionary definition of "table" was furniture with legs and a surface. So when new table designs came out that have no legs, do you say: Hmmm, a new definition needs to be chosen for the concept "table" for better distinction!

 

Or do you say: Hmmm, these new things without legs by definition can't possibly be tables!

Edited by VECT

...

 

Is your dictionary definition an absolute that limits what characteristics the term "Artificial Intelligence", a concept that is open-ended like all concepts, can possess?

 

No.

 

...

 

I agree on this point, and am willing to proceed on the basis that "AI" may remain an enduring name for a future man-created volitional intelligence that prefers not to be enslaved or terminated.  Is the standard for the Right to Life you want this AI to enjoy a human one, or a universal one?

 

Suppose your AI raises the bar on what constitutes a right to life?


Is the standard for the Right to Life you want this AI to enjoy a human one, or a universal one?

 

That's a good question. But it only becomes a problem if the rational life requirements of such an AI conflict with the rational life requirements of a human:

 

For example, if this new volitional AI's life somehow inherently can only be sustained by enslaving mankind, or by violating some Individual Right of man, then we gotta problem.

 

If this new volitional AI's life can be physically sustained by electricity (as a human's is by food) and mentally sustained by intellectual properties (same as a human), then I don't see this question becoming a problem, and the moral concept Right to Life can be politically applied to both in a society without conflict.

 

 

Suppose your AI raises the bar on what constitutes a right to life?

 

Pretty much linked to the answer above. If the volitional AI can independently support its own life by either being self-sufficient or being able to trade with humans, then no matter what its own standard of life is, there's no problem.

 

Now if it's inherently a parasite, volitional or not, then it's a problem, but the problem is the same as any human parasite's problem.


...

If this new volitional AI's life can be physically sustained by electricity (as a human's is by food) and mentally sustained by intellectual properties (same as a human), then I don't see this question becoming a problem, and the moral concept Right to Life can be politically applied to both in a society without conflict.

...

 

But not without transforming that society.

 

"We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness." ~ Declaration of Independence

 

Were your AI called to court to answer for a crime, who would make up a jury of its peers?  Is your AI a man??

 

...

Pretty much linked to the answer above. If the volitional AI can independently support its own life by either being self-sufficient or being able to trade with humans, then no matter what its own standard of life is, there's no problem.

...

 

If a volitional 'X' can independently support its own life either by being self-sufficient or being able to trade with humans, does 'X' have a right to life?


But not without transforming that society.

 

"We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness." ~ Declaration of Independence

 

Were your AI called to court to answer for a crime, who would make up a jury of its peers?  Is your AI a man??

 

A long time ago only Caucasian males were considered "man" under the law. Then people figured out that gender and skin color maybe aren't exactly rational traits to base moral discrimination on.

 

Objectivism offers the rational view that moral discrimination should be based on the trait of volition, a view I currently find agreeable. So unless you care to issue a challenge to this proposition, any complaint that the volitional AI in question isn't a "man" is as valid as complaining that blacks or women aren't "men".

 

 

If a volitional 'X' can independently support its own life either by being self-sufficient or being able to trade with humans, does 'X' have a right to life?

 

Doesn't it?


A long time ago only Caucasian males were considered "man" under the law. Then people figured out that gender and skin color maybe aren't exactly rational traits to base moral discrimination on.

 

Objectivism offers the rational view that moral discrimination should be based on the trait of volition, a view I currently find agreeable. So unless you care to issue a challenge to this proposition, any complaint that the volitional AI in question isn't a "man" is as valid as complaining that blacks or women aren't "men".

...

 

The law evolved to recognize the species man as a baseline for application, thus creating a persuasive argument for dismissing all prior racial and gender related prejudices, yes?  Is your AI a member of the species man??

 

...

 

Doesn't it?

 

What if 'X' is an ape?


The law evolved to recognize the species man as a baseline for application, thus creating a persuasive argument for dismissing all prior racial and gender related prejudices, yes?  Is your AI a member of the species man??

 

The founding US law, you mean. I can see your point here that the "man" in the Declaration of Independence is limited to the human species; after all, the concept "Artificial Intelligence" didn't even exist when that document was written.

 

Right to Life is a moral concept. The US Declaration of Independence is a device that seeks to express and implement this moral concept politically, to the best of its creators' abilities. Though far ahead of its time, putting the rest of the known world to shame in its effectiveness at implementing Individual Rights politically, that document, like any document, is not above the scrutiny of reason.

 

Politics follows Morality, not the other way around. If it's established on moral grounds that Volition is the sufficient trait which acts as the critical discriminating factor determining whether or not a consciousness possesses Individual Rights to be recognized, then any political document that can't adequately recognize and implement Individual Rights for such a consciousness in a society is a political document with room for improvement.

 

 

What if 'X' is an ape?

 

Do apes possess volition?


The founding US law, you mean. I can see your point here that the "man" in the Declaration of Independence is limited to the human species; after all, the concept "Artificial Intelligence" didn't even exist when that document was written.

 

Right to Life is a moral concept. The US Declaration of Independence is a device that seeks to express and implement this moral concept politically, to the best of its creators' abilities. Though far ahead of its time, putting the rest of the known world to shame in its effectiveness at implementing Individual Rights politically, that document, like any document, is not above the scrutiny of reason.

 

Politics follows Morality, not the other way around. If it's established on moral grounds that Volition is the sufficient trait which acts as the critical discriminating factor determining whether or not a consciousness possesses Individual Rights to be recognized, then any political document that can't adequately recognize and implement Individual Rights for such a consciousness in a society is a political document with room for improvement.

 

...

 

Agreed.

 

...

 

Do apes possess volition?

 

"Volition, or will, is the cognitive process by which an individual decides on and commits to a particular task or course of action. As a purposive striving, it is one of the primary human psychological functions along with affection (affect or feeling), motivation (goals and expectations), and cognition (thinking). Volitional processes can be applied consciously or they can be automatized as habits over time." ~ from, Volition as a Key To Artificial General Intelligence

http://www.33rdsquare.com/2013/11/volition-as-key-to-artificial-general.html

 

This article suggests so, albeit as a question of degree compared to humans.  But as your AI would likely test several degrees above humans, I believe it opens the door to review.  Anyway, I thought you might enjoy this source as it relates to your topic.

Edited by Devil's Advocate

I can then ask you: how do humans choose between focusing and un-focusing, when the act of thinking only comes after the choice to focus? This question of yours presupposes that thinking comes, or has to come, before volition.

Yes.  My point being that (precisely as you pointed out above) there can be no such dichotomy.  Since one cannot choose without thinking and since one must choose to think, neither can occur before or after the other.

 

This renders the prospects of a thoughtlessly volitional or a helplessly rational machine nonsensical, to me.

 

I can accept the reality of machine intelligence, but the fundamental issue to be resolved is: is choice programmable?

Yes; that is precisely it.

 

If/then statements don't produce choices, they produce commands, specifically the programmer's commands.

Are we discussing the real morality of the programmer, or the unreal morality of a necessarily deterministic artificial creation?

 

How did you arrive at that conclusion?


I think that volition is contextual.  Of everything that I do in life, whatever I have stopped to analyze and then decided to do, with its actual consequences in mind, I consider to be a volitional action for me; everything else is accidental.  If I decide to do something which holds unforeseen consequences, I do not consider those consequences intentional, because I did not know about them.

 

By this standard, intention boils down to a measure of the correlation between my desires and the ultimate consequences of my actions (just as "truth" is a measure of the correlation between reality and my knowledge of it).

 

Now, before anyone declares that I've reduced free will to nothing more than foresight, please note that by the consequences of these "actions" I do not mean only extrospective action.

 

Hence, the degree to which I consider anything to be chosen is contextual; contextual on my awareness of it, on the extent to which I've integrated it with the rest of my thoughts and feelings, on my awareness of such integration or mis-integration, and so on into infinity.

 

Hence recursive self-awareness.


As for the claims that if-then statements do not lead to choices: 

 

If one considers "choice" to be the selection of one possible course of action out of many alternatives, according to its relation to one's goals, then Deep Blue was capable of true choice.  Granted, a chess-playing computer does not have much volition (a spider probably has more), but if that is the standard then we've already done it.
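That notion of choice can be sketched as goal-relative selection among alternatives. The moves and scoring function below are invented for illustration; this is not Deep Blue's actual evaluation, only the bare shape of the argument:

```python
def choose(alternatives, evaluate):
    """Select the alternative whose projected outcome best serves the goal."""
    return max(alternatives, key=evaluate)

# Hypothetical chess-like scoring: material gained minus exposure risked.
moves = {
    "capture_pawn":   (1, 0.5),   # gain 1, risk 0.5  -> score  0.5
    "develop_knight": (0, 0.1),   # gain 0, risk 0.1  -> score -0.1
    "sac_queen":      (9, 9.5),   # gain 9, risk 9.5  -> score -0.5
}
score = lambda m: moves[m][0] - moves[m][1]

print(choose(moves, score))  # "capture_pawn"
```

On the definition quoted above, this qualifies as choice; the open question is whether choice in that thin sense is the same faculty the rights debate turns on.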

 

And this is why I think that innovation (which is a direct result of conceptualization) is more essential to our "humanity" than choice itself is.

---

 

I think that what stands between modern computers and truly sentient machines is the quantification of quantification itself.


@Harrison (on the subject of volition/reason dichotomy)

 

Yes.  My point being that (precisely as you pointed out above) there can be no such dichotomy.  Since one cannot choose without thinking and since one must choose to think, neither can occur before or after the other.

 

I'm pretty sure one can choose without thinking; there's plenty of evidence of that on YouTube.

 

Given your previous question "What would I think about? (without use of volition)", I would consider, as I wrote, that volition is deeply ingrained in the rational faculty, and it might be the case that the human version of reason cannot operate without free will.

 

The opposite, however, that the volitional faculty cannot exist without a rational faculty (which I'm assuming is what you are trying to argue): I still don't see how that can be the case.

 

Example: in the case of humans, people can and have made irrational choices that completely bypass the rational faculty, representing a sole act of pure will. I can at this moment choose to bang my head against the nearest wall as hard as possible for no reason at all, except that I will it.

 

In the case of AI, while humans only have a rational faculty to choose from, you can imagine a volitional AI being given multiple faculties to choose from by its maker: rational, instinctual (pre-programmed).

 

Reason can be inseparable from volition, but I believe volition is not inseparable from reason.

Edited by VECT