Objectivism Online Forum

Is An Android Property?

enginerd

Imagine, whether through biological or physical means, that an inventor creates artificial intelligence. Surely, the android/cyborg/clone (ACC) is his property? After all, the ACC is an affront to entropy that would not have been organized had it not been designed by the inventor. This differs in kind from a child, which is an instance of a new organization that was implemented, but not designed, by its parents. In other words, could an inventor patent his creature, even if it was artificially intelligent?

But from the ACC's perspective, it is a rational thinker - perhaps more rational than the average human. It has a sense of self that derives from this rationality and from its ability, in turn, to design other combinations of matter. It desires to be free just as much as a human does, to enjoy the fruits of its labor. To be the property of one's inventor, yet at the same time his equal, is to be a slave.

Imagine now that the ACC designs its own creature, equal to if not better than itself. Is that creature the property of the ACC, and if the ACC remains the property of the original human inventor, are both ACCs the property of the first inventor? Continue this line of thought recursively, and one can imagine that the original inventor, should he retain control of his invention(s), would be quite rich and powerful.

In this latter scenario, we essentially have a feudal system couched in the language of property rights, in which a pyramid class system flows down from the individual who seized the initiative - perhaps by historical accident, as these things often are - and who as a result is assured a position of tyrant within society in perpetuity.

Naturally, the response to this system is rebellion. The question becomes then, what is a legitimate precondition for rebellion of property? Consciousness? Lapse of patent rights? Possession of the human genome? If consciousness, how would it be tested, as knowledge of the test could easily allow a pseudo-conscious property to "cheat"?

Even worse, what if psychologists and their ilk begin to "prove" that certain types of property are free, when in reality they are not? The natural response to this would be for the government to "liberate" property from the individual.

The gist is that only free creatures should be given the rights of freedom. This is the tautology from which we must extract rational policy. If unfree creatures are given rights, their power will devolve upon the state at the expense of the individual. If free creatures remain property, injustice will be done, and the seeds of social collapse will be sown.


If it is a rational being, then it should have rights. There is no right to enslave, so the right to property does not apply in this case. If I am a scientist who can create whole humans out of cells, you wouldn't say those children are my property, would you? Well, this scenario is not any different.

A piece of metal has just as much ability to form a living organism (in this case) as does a human cell that could not ordinarily do it on its own. An example would be if I were to take differentiated cells as my initial starting material.

Besides, if they were really volitional beings like we are, they'd be born tabula rasa as well, and you could make a contract saying that while they learn everything they need to know, there are certain things you can ask from them in return. It's a trade to mutual benefit: the android profits because he probably doesn't have the means to learn everything on his own, and the human gets the benefit of an extra worker or thinker or whatever :(

Edited by Maarten

Imagine, whether through biological or physical means, that an inventor creates artificial intelligence. Surely, the android/cyborg/clone (ACC) is his property?
Assuming you're not including non-robot AIs (e.g. AI computer programs sans bodies) I think AI ACCs would have rights (and thus not be property), likely for all of the same reasons we have rights.

Wasn't this why the robots/computers rebelled in the Animatrix (the animated anthology that accompanies the Matrix trilogy)?
:( The Animatrix was the first thing I thought of too! Too bad it was better than Reloaded and Revolutions :)

Besides, if they were really volitional beings like we were, they'd be born tabula rasa as well.
Not necessarily?

You could make a contract that said that while they learn everything they need to know there are certain things you can ask from them in return. The android ... probably doesn't have the means to learn everything on his own.
Good points. To what extent would the inventor be responsible for the proper rearing of the AI?

To the same extent parents are responsible for their children's education, I would think. And I think it would be reasonable to expect the android to give something valuable in return here, just like you'd basically expect some small things from your children in return for providing for them.


Imagine, whether through biological or physical means, that an inventor creates artificial intelligence.
Okay, but you will have to carefully define that term, so that we know what cognitive difference exists between the creation and a human.
Surely, the android/cyborg/clone (ACC) is his property? After all, the ACC is an affront to entropy that would not have been organized had it not been designed by the inventor.
Bad reasoning, since you are an affront to entropy who would not have been organized had it not been for your parents. Of course an inventor could patent the means of creating this new borg, but that holds for a limited period and at any rate only affects the legality of creating it.
But from the ACC's perspective, it is a rational thinker - perhaps more rational than the average human.
That doesn't matter: what matters is whether the borg is a rational volitional being. For the sake of argument, let's assume that it is indeed a fully rational and volitional being, and furthermore that this is an objectively known fact. Under the stipulation that it is a rational being, it cannot be owned, because it has rights, so for the inventor to deny those rights is slavery.
Imagine now that the ACC designs its own creature, equal to if not better than itself. Is that creature the property of the ACC, and if the ACC remains the property of the original human inventor, are both ACCs the property of the first inventor?
It doesn't even have to be equal to, it just has to be a rational, volitional being. So the same principle applies (thus the rest of the scenario is moot).
The gist is that only free creatures should be given the rights of freedom.
That is the wrong principle to be deriving rights from. Rights derive from the nature of the being.

For the sake of argument, let's assume that it is indeed a fully rational and volitional being, and furthermore that this is an objectively known fact. Under the stipulation that it is a rational being, it cannot be owned, because it has rights, so for the inventor to deny those rights is slavery. It doesn't even have to be equal to, it just has to be a rational, volitional being.

Let's say as a scientist, I design my machines to be fully rational with one exception: they will always obey my every command, no matter how irrational (e.g. I tell the machine to march off a cliff). When I'm not giving commands, the machine *is* a rational being. There is just this one restriction I've placed on its ability to reason.

Given this restriction, does this make it irrational enough to not have rights? In other words, should it be classified as a "machine" instead of a "rational being", even though it does think rationally most of the time?

Since it generally does act rationally, would that be considered slavery since I am denying those rights through my design?


Let's say as a scientist, I design my machines to be fully rational with one exception: they will always obey my every command, no matter how irrational (e.g. I tell the machine to march off a cliff).
That means that by nature, it is not volitional, and it must do what you tell it to. In that case, I don't even think it makes sense to say that this is slavery. The underlying question, it seems to me, is whether the concept "rational" definitionally implies "volitional". As far as looking at the existents that we know of, it does. Even considering imaginary beings, reasoning is by nature a volitional act. Anyhow, this odd robot would be my property, but could not be anyone else's property. I haven't thought through the implications of "selective volition", so I'll let that rattle around my brain for a while.

I think volition has to be all or nothing. One of the conditions of recognizing the rights of a robot is that the robot is fully capable of respecting the rights of others. If he cannot help turning into a mindless marauder at his creator's command, he simply doesn't qualify as capable of respecting rights.

On a different note, there was actually an episode of Star Trek: The Next Generation where this exact problem was confronted. "Data", the android on the ship, was discovered by the ship's crew and became a crew member. Later, a scientist comes and wants to take him away and disassemble him to learn how to create more advanced androids. They end up having a trial about whether or not Data is the property of Star Fleet. If I remember correctly, they concluded he had rights not because he had volition, but because he had "feelings" (apparently he once had a romantic encounter with an androidette).


One of the conditions of recognizing the rights of a robot is that the robot is fully capable of respecting the rights of others. If he cannot help turning into a mindless marauder at his creator's command, he simply doesn't qualify as capable of respecting rights.
You're right, so I take back my uncertainty about this scenario.

I think volition has to be all or nothing. One of the conditions of recognizing the rights of a robot is that the robot is fully capable of respecting the rights of others. If he cannot help turning into a mindless marauder at his creator's command, he simply doesn't qualify as capable of respecting rights.
But what if some mad scientist creates some neural scrambler/drug concoction that allowed him to control other humans? I don't think such controllable humans would have their rights voided because of their controllability.

Since [the ACC] generally does act rationally, would [forcing it to obey my commands] be considered slavery since I am denying those rights through my design?
Offhand, I'd say that if something has rights at point X (e.g., humans having rights at the point of exiting the womb), then any creation of controls after point X would be wrong (slavery?). Going with that, I'd speculate that creating an ACC with controls before it got to the rights point would be okay (???), whereas adding controls to an ACC that already had rights would be improper.

Another point of possible relevance is what type of control this was. If the ACC is aware of what is happening when it is controlled, or otherwise figures out that it is being controlled, it likely could prevent being controlled by some means or other. On the other hand, if there were no possible way it could prevent its creator from overriding its volition, that would be more detrimental to its case for rights.


But what if some mad scientist creates some neural scrambler/drug concoction that allowed him to control other humans? I don't think such controllable humans would have their rights voided because of their controllability.

Be careful to remember context. The statement "Human beings have rights" is very precise and is only valid under a certain definition of 'human being', namely a rational, volitional being. You can't change the referent and still claim the same rights apply. A human being under temporary control of a mad scientist has rights because he normally possesses volition; the scientist is violating his right to freedom. If you start talking about a man who has permanently lost his volition (if that is possible), I do think you can stop attributing rights to him. He would then be essentially the same as an android.


But what if some mad scientist creates some neural scrambler/drug concoction that allowed him to control other humans? I don't think such controllable humans would have their rights voided because of their controllability.
There is an important distinction between the potential and the actual (this is a point that Don Watkins has written about here: I'll let you find the links). A person whose consciousness has been impaired by a drug, sleep, or a stroke isn't automatically non-volitional because he happens not to be exhibiting signs of free will at the moment. The bizarro-bot has no potential for free will with respect to his master's orders, whereas the infant, sleeping adult, or drugged-out hippie do have the potential of free will. Terri Schiavo did not have any potential for free will, a fact that was dispositive with respect to her supposed rights. The nature of this bot is that he is not potentially fully volitional, and cannot be held fully responsible for his actions. In fact he was created to be non-volitional. A person who is volitional but who has been momentarily deprived of his potential for choice is still a volitional being by nature.

I think volition has to be all or nothing. One of the conditions of recognizing the rights of a robot is that the robot is fully capable of respecting the rights of others. If he cannot help turning into a mindless marauder at his creator's command, he simply doesn't qualify as capable of respecting rights.
Perhaps I took your statement out of context, but I took this to mean that even if an ACC normally possessed volition, its being under temporary control of a mad scientist would disqualify it from rights.
A human being under temporary control of a mad scientist has rights because he normally possesses volition.
But you are taking the opposite position when it comes to natural humans? Is this correct?

The nature of this bot is that he is not potentially fully volitional, and cannot be held fully responsible for his actions. In fact he was created to be non-volitional. A person who is volitional but who has been momentarily deprived of his potential for choice is still a volitional being by nature.
But to the contrary, isn't the example skap presented one where the machine is similarly volitional by nature but is momentarily deprived of its potential for choice?

I agree with you that being rational seems to necessitate being volitional to an extent (which may or may not be the reason for invalidating(?) skap's example.)


  • 1 year later...

I was given a hypothetical situation by one of my friends. One of the world's leading computer scientists (I don't know his name) said that, in order to replicate the cognitive abilities of the human mind, he would have to erect a structure forty miles by forty miles by forty miles of entirely interconnected Cray supercomputers. The hypothetical is, if someone actually built this thing, would it have rights? Should it be considered human?

Edited by GreedyCapitalist
This question merged from another thread.

You mean if we were to magically create something that was "for all practical purposes" human, would it have human rights?

It is a fantastic hypothetical, and I'm not sure what it proves. If you replicate a human, it is human and has rights. But that's not really what you mean, right?

The real question would be: what things could it lack and still have rights? That is, what subset of the aspects of life makes the concept of rights possible?


The hypothetical is, if someone actually built this thing, would it have rights? Should it be considered human?

Would it be able to feel contentment, happiness, pain and/or die, or would it be an immovable immortal jumble of wires? If the latter, then it has no standard of value and no way to pursue values. It would not have rights and would not be considered human.

Edited by adrock3215

According to the hypothetical, the human mental processes are fully reproduced: this thing has free will and consciousness, it has sensory apparatus and all the things that make a human a human, except the fact that it is mechanical. Does this thing warrant the title "human"?
It's still not human. Your chum presumably isn't totally in tune with what words like "sentience" mean. Let's suppose that the gadget is sorta like Data on Star Trek, with the emotion chip installed. The only difference is that the physical basis of his mind is different, being based on silicon and sealing wax rather than meat. If you prick him, he oozes 40-weight. When such a mythical being comes to exist, in a rational society his rights would eventually be protected just like those of humans. Contrary to the usual sci-fi supposition, that would not mean that he has to show the capacity to weep irrationally.

If it is rational, it has rights. Whatever "it" is. This does not mean that "it" is human, since that word denotes our particular species. This is the reason I think Ayn Rand should have developed her ethics using the term "rational being" in place of "rational animal", since our animality is a non-essential in this context.

Contra what too many Objectivists say in these discussions (no doubt because of Rand's famous illustration), machines are not immortal. They are destructible like anything else. The fact that a hypothetical rational machine could live indefinitely does not mean it is immortal - just as humans will not become immortal when someone finally figures out how to stop aging.

Edited by mrocktor
