Objectivism Online Forum

Rights of Artificial Intelligence



VECT


@New Buddha
 
By knowledge I mean knowledge relevant to the topic of discussion, not total knowledge about everything. For example, in this discussion of AI rights, if you know something relevant that can have an impact on my previous conclusions, and that you think I probably missed when I arrived at those conclusions, posting it will force me (assuming I stay rational) to re-evaluate my conclusions and decide whether:
 
-Your new info is relevant and something I've missed
-Your new info is relevant but is something I haven't missed, and has already been incorporated into my previous conclusions
-Your new info isn't actually relevant
 
The above process happens to people every day whenever they have a rational engagement with someone else.
 
As for the claim that people cannot have flawless logic, I completely disagree.
The simplest example would be a person correctly solving a math problem. There are just too many examples to list.
 
As for unsolved disagreements (such as on this forum), it's not a testament to people understanding things differently; it's a testament to poor communication and/or eventual apathy.
 
People understanding things differently ultimately means one of them is either working with less relevant knowledge or making a logic error. Failure in communication means the failure to either identify or communicate the gap in knowledge, or the failure to either identify or communicate the logic error.
 
Two people working with the same amount of relevant knowledge cannot come to different logically sound conclusions. The only way that can happen is if reality itself is not objective.
 
 
@Peter Morris
 
The topic of your previous post has been brought up by no fewer than three different people, all of whom I've replied to.
 
It's not about what others' opinions are, it's about the rationale behind those opinions. If you have a reason why you think volition cannot be artificially reproduced, post it. If it's enlightening, others, myself included, will appreciate it. If it's not, then at least you made an effort.
 
Personal opinions without rationale are as meaningless as they are annoying. There are plenty of those on YouTube, and that is already enough for this one internet.
Edited by VECT

...

How do you know what goes on in the brain of an ape?  Or in my brain, for that matter?

...

 

I observe how you and Koko behave, taking particular note of those actions that posit self-preservation and aggression towards others; those two elements being significant to recognizing and securing a right to life.

 

...

"Trade" is derivative of "consent" and "sufficient" refers to some purpose, goal or value.  Your entire definition, while accurate, assumes that you are referring to a conscious being.

...

 

Yes, but primarily because volitional actions posit consciousness.

 

...

I believe that this would actually reflect whether the computer was, or was not, conscious.

...

 

OK, but there's a difference between a conscious response, e.g., instinctive/programmed, and a volitional response, e.g., innovative.


...

Also, the statement I quoted from your previous post does not assume the AI has volition. The "want" in the context of your statement pertains to volition, not emotion, because in the context of your statement, if the AI cannot do what it wants, then it can only do what is predetermined, or as you put it, remain "bound to follow its path either by design or by logic or physics".

...

 

Our agreement highly favors volitional actions and self-sufficiency for recognizing a right to life.  But there remains some fleshing out to determine what behavioral distinction, if any, programming has over instinct in terms of positing volitional action.

 

...

As for emotion, while not relevant to the topic of Rights (unless you wish to propose that emotion is a necessary factor for an entity to possess individual rights), I'll indulge it.

...

 

I do.

 

...

Human emotions are activated by accepted principles of personal values; after activation, however, the process that follows is physical and pre-programmed.

 

Likewise for a volitional AI, the maker can pre-program emotional responses. The AI volitionally chooses its own values, just as humans do, and those values are what activate these emotions. The pre-programmed emotional responses will then react to those values in the appropriate situations, again like humans.

 

For such an AI then, would it not feel?

 

I wonder...  Do pre-programmed emotional responses activated by the selection of pre-programmed values demonstrate the moral choices of your AI, or of its programmer?

Edited by Devil's Advocate

@VECT
 
"As for unsolved disagreements (such as on this forum), it's not a testament of people understanding things differently, its a testament of poor communication and/or eventual apathy."
 
A. There is not a single field of knowledge (medicine, physics, mathematics, philosophy, chemistry, politics, agriculture, economics, art, education, etc.) where there are not fundamental unsolved disagreements between objective, rational people. Quantum Mechanics is not complete. Relativity is not complete. We don't have a clue what Life is. We don't understand what consciousness is. The list is endless.
 
"People understanding things differently ultimately means one of them either is working with less relevant knowledge, or is making a logic error. Failure in communication means the failure to either identity or communicate the gap in knowledge, or failure to either identify or communicate the logic error."
 
A. People understanding things differently stems from the fact that we either lack a fundamental understanding of how things work and/or have personal preferences in solving problems.
 
"Two people working with the same amount of relevant knowledge cannot come to different logically sound conclusions. The only way that can happen is if reality itself is not objective."
 
A. Knowledge is about solving problems - not just symbolic manipulation - and there are different ways of solving the same problem based upon individual background and preferences. If two computer programmers are tasked with writing code for a program, they would do so differently. Or if two architects were to design the same building. Or two oncologists treating a cancer patient, etc. Or two cancer patients responding differently to the same treatment by one doctor. We can send a probe to Mars with Newtonian mechanics. Does this make it wrong (or impossible) to do so, since Einstein has a better (yet incomplete) theory of gravity?
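As a minimal, purely hypothetical illustration of the point: two programmers asked for the same function can both be completely correct while writing it differently.

```python
# Hypothetical illustration: two equally correct solutions to one problem,
# differing only by the programmer's background and preference.

def factorial_iterative(n: int) -> int:
    """One programmer's habit: an explicit loop."""
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

def factorial_recursive(n: int) -> int:
    """Another programmer's habit: recursion."""
    return 1 if n <= 1 else n * factorial_recursive(n - 1)

# Different paths, same answer.
assert factorial_iterative(10) == factorial_recursive(10) == 3628800
```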
 
How would your position on knowledge apply to an AI? Would an AI write the best code or design the best building? Or would its problem solving just be a reflection of the limitations of its database and algorithmically programmed preferences?
Edited by New Buddha

I observe how you and Koko behave. . .  <snipped irrelevant criteria>

Exactly.  Let's take it a few steps even further back.

 

I would use "awareness" as an even broader term than consciousness.  While I believe that only human beings are conscious and volitional, I would categorize all vertebrates (except for fish) and a few invertebrates (cephalopods) as being aware. 

A dog is aware, while a tree is not, because the dog can perceive its environment.  Trees sway when the wind blows, but such changes don't seem to indicate awareness.  When a dog wags its tail, however, that does indicate awareness because that action depends on the appearance of something else.

 

This involves some amount of anthropomorphic reasoning, because it requires that you imagine whether you would also bend if the wind pushed on you (whether you were aware of it or not) and whether your tail would wag for food if you were not aware of the food.

 

Now from my own memories of my various states-of-awareness, and my memories of my subsequent actions, I notice something universal:  Every action I have taken seemed like the best option at the time.  And it seems likely that this applies to everything else which is aware, as well.

 

So I would say that awareness (of something) can be inferred by drawing analogies between another entity's behavior and my own behavior, and that every creature that is aware will actively pursue its own values (regardless of what it values).

 

Does that terminology, as well as its epistemological basis, make sense to you?

Edited by Harrison Danneskjold

Two people working with the same amount of relevant knowledge cannot come to different logically sound conclusions. The only way that can happen is if reality itself is not objective.

"Two people working with the same amount of relevant knowledge cannot come to different logically sound conclusions. The only way that can happen is if reality itself is not objective."
 
A. Knowledge is about solving problems - not just symbolic manipulation - and there are different ways of solving the same problem based upon individual background and preferences. If two computer programmers are tasked with writing code for a program, they would do so differently. Or if two architects were to design the same building. Or two oncologists treating a cancer patient, etc. Or two cancer patients responding differently to the same treatment by one doctor. We can send a probe to Mars with Newtonian mechanics. Does this make it wrong (or impossible) to do so, since Einstein has a better (yet incomplete) theory of gravity?

If one person learns that A is B, and another learns that A is not B, then they cannot both be right (unless reality itself is subjective).

 

If one person thinks that A is the best METHOD of achieving B, and another disagrees, then they can both be correct quite easily; they simply have different definitions of "best" (and are hence actually discussing two different things).  Now, if one person considers A the fastest way to reach B (or the simplest, easiest, most efficient, etc.) then disagreement would once again imply ignorance or evasion.

 

Evaluative terms themselves, however, omit the measurements of "speed vs. efficiency" or "tensile strength vs. weight", and that's why contradictory evaluations can be, and frequently are, both correct; they aren't really contradictions.

 

That is why problem-solving is absolutely not interchangeable with knowledge, as such.

Edited by Harrison Danneskjold

 

@VECT
 
"As for unsolved disagreements (such as on this forum), it's not a testament of people understanding things differently, its a testament of poor communication and/or eventual apathy."
 
A. There is not a single field of knowledge (medicine, physics, mathematics, philosophy, chemistry, politics, agriculture, economics, art, education, etc.) where there are not fundamental unsolved disagreements between objective, rational people. Quantum Mechanics is not complete. Relativity is not complete. We don't have a clue what Life is. We don't understand what consciousness is. The list is endless.

 

There are fundamental disagreements between rational people regarding the unknown.

 

In certain disciplines (especially theoretical physics), certain key facts have not yet been observed, so experts make educated guesses about what those key facts might be and build prototype theories based on them. Their colleagues, of course, will have their own ideas as to what the unknown might be and will build different hypotheses.

 

On theories that can be traced entirely back to observable facts with no element of educated guessing (which covers the majority of the theories operational in everyday life today), what I said stands.

 

"People understanding things differently ultimately means one of them either is working with less relevant knowledge, or is making a logic error. Failure in communication means the failure to either identity or communicate the gap in knowledge, or failure to either identify or communicate the logic error."

 
A. People understanding things differently stems from the fact that we either lack a fundamental understanding of how things work and/or have personal preferences in solving problems.

 

Understanding things differently because of a lack of fundamental understanding of how things work pertains to what I said above: making assumptions to fill in what is unknown.

 

 

A. Knowledge is about solving problems - not just symbolic manipulation - and there are different ways of solving the same problem based upon individual background and preferences. If two computer programmers are tasked with writing code for a program, they would do so differently. Or if two architects were to design the same building. Or two oncologists treating a cancer patient, etc. Or two cancer patients responding differently to the same treatment by one doctor. We can send a probe to Mars with Newtonian mechanics. Does this make it wrong (or impossible) to do so, since Einstein has a better (yet incomplete) theory of gravity?

 

If those symbols link back to real-life observable facts in a non-contradictory fashion, then the logical manipulation of these symbols is an act of problem solving; people do this because these symbols are a tool that makes problem solving more efficient and effective (imagine doing calculus on your fingers).

 

Also, solution and theory are two different things:

 

-Theory is about understanding an existing process, about seeing the cause & effect relationship. This is objective (assuming no educated guesses are used)

(this is also what we are discussing concerning understanding)

-Solution is about man-made alteration to an existing process, and is subject to the different personal ideas/values/guesses held by the problem solver

(this does not pertain to what we were discussing)

 

All your examples here are about solutions, not theories; they do not pertain to understanding.

 

How would your position on knowledge apply to an AI? Would an AI write the best code or design the best building? Or would its problem solving just be a reflection of the limitations of its database and algorithmically programmed preferences?

 

Is humans' problem solving just a reflection of the limitations of our genetic instincts and of the algorithmic preferences programmed by nature and evolution?

 

This question again assumes that volition and reason are not artificially reproducible. Care to discuss why?

Edited by VECT

Exactly.  Let's take it a few steps even further back.

 

I would use "awareness" as an even broader term than consciousness.  While I believe that only human beings are conscious and volitional, I would categorize all vertebrates (except for fish) and a few invertebrates (cephalopods) as being aware. 

A dog is aware, while a tree is not, because the dog can perceive its environment.  Trees sway when the wind blows, but such changes don't seem to indicate awareness.  When a dog wags its tail, however, that does indicate awareness because that action depends on the appearance of something else.

The fish in the lake come to the surface of the water when grass clippings land on it. They scoot away when a hand is submerged in their vicinity.

 

Some motion detectors can activate a light when a leaf blows through the monitored zone. Photo-electric cells can activate lights at dusk. Timers can turn them off after a programmable period of time. A thermostat activates and deactivates HVAC units.

 

Add these devices to a mobile unit that goes to the HVAC unit to switch it on when a built-in thermostat hits a given temperature and turns it off at a different target temperature. Add photo-electric cells that activate a subroutine to flip on a light at dusk and turn it off after a given period of time. Add a motion detector that activates an appropriate subroutine. Add cameras and a compact radar for detecting objects, to help it navigate to the light switches. These would work together to give the appearance of awareness. Passing the Turing test demonstrates the ability to fool given individuals; it is still complex software programmed to do just that. Mimicking activities humans associate with awareness does not confer awareness on the device performing the mimicking.
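A minimal sketch of such a unit's control logic (all sensor names and thresholds hypothetical) makes the point visible: the apparent "awareness" is nothing but fixed rules firing on inputs.

```python
# Hypothetical sketch of the mobile unit's control loop: fixed rules firing
# on sensor inputs. Same inputs, same outputs; nothing is perceived.

def control_step(temp_f, light_level, motion_detected, hvac_on, lights_on):
    """One pass of the control loop; returns the new actuator states."""
    # Thermostat subroutine: on below one target, off above another.
    if temp_f < 65:
        hvac_on = True
    elif temp_f > 75:
        hvac_on = False

    # Photo-electric subroutine: flip the lights on at dusk.
    if light_level < 0.2:
        lights_on = True

    # Motion-detector subroutine: a leaf blowing through triggers it too.
    if motion_detected:
        lights_on = True

    return hvac_on, lights_on

print(control_step(temp_f=60, light_level=0.1, motion_detected=False,
                   hvac_on=False, lights_on=False))  # (True, True)
```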


Evaluative terms themselves, however, omit the measurements of "speed vs. efficiency" or "tensile strength vs. weight", and that's why contradictory evaluations can be, and frequently are, both correct; they aren't really contradictions.

That is why problem-solving is absolutely not interchangeable with knowledge, as such.

Good point. I might add that decisions are often made in a manner consistent with hypothesis tests. In that case, the decision one makes may depend on the confidence level one chooses, and two perfectly rational people may make different decisions. Instead of logic, we often resort, by necessity, to confidence.
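As a hedged sketch of that idea, with all numbers illustrative: two rational agents can run the same one-sided z-test on the same data and still decide differently, purely because they chose different significance levels.

```python
import math

def p_value_one_sided(sample_mean, null_mean, std_err):
    """P-value for a one-sided z-test of H0: true mean <= null_mean."""
    z = (sample_mean - null_mean) / std_err
    return 1.0 - 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Shared evidence: z = 2.0, so p is roughly 0.023.
p = p_value_one_sided(sample_mean=10.4, null_mean=10.0, std_err=0.2)

# Same evidence, different chosen confidence levels, different decisions.
for name, alpha in [("cautious agent", 0.01), ("bolder agent", 0.05)]:
    decision = "reject H0" if p < alpha else "retain H0"
    print(f"{name} (alpha={alpha}): p={p:.4f} -> {decision}")
```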


Mimicking activities humans associate with awareness does not confer awareness on the device performing the mimicking.

Then by what method do you infer my awareness?  If you think that I could not conceptualize (as evidenced by the words on your screen) without consciousness then, frankly, I agree, and that's why the Turing test appeals to me.

 

If it is valid to infer that these symbols are evidence of another conscious mind (which you are doing at this very moment), then for what reason could the Turing test be invalid?  What makes it different?

---

 

I suspect that I'm not the only one who makes such inferences.  If you have an alternative then I look forward to reading about it.


Then by what method do you infer my awareness?  ...

 

If it is valid to infer that these symbols are evidence of another conscious mind (which you are doing at this very moment), then for what reason could the Turing test be invalid?  What makes it different?

...

 

Trust but verify.  One may infer intelligent interaction over the internet; however, proof requires identifying the IP address for the location of correspondence and observing an actual person corresponding during times when the transmission is known to be open and active.


Then by what method do you infer my awareness?  If you think that I could not conceptualize (as evidenced by the words on your screen) without consciousness then, frankly, I agree, and that's why the Turing test appeals to me.

 

If it is valid to infer that these symbols are evidence of another conscious mind (which you are doing at this very moment), then for what reason could the Turing test be invalid?  What makes it different?

---

 

I suspect that I'm not the only one who makes such inferences.  If you have an alternative then I look forward to reading about it.

From your standpoint, Harrison, it is an inference built over time, integrating conversations observed here in conjunction with some knowledge of how effective the test is. Again, the Turing test demonstrates that the human intellect can be fooled once some of the more advanced protocols are in place. Keep in mind that primitive folk inferred a spirit behind the movement of the planets. Is a mistaken inference of consciousness a valid inference?

 

I don't have a bullet-proof alternative right now, but an appeal to ignorance is not considered a valid approach to expanding one's knowledge.

 

A few side thoughts, though: there are other similarities we integrate with our knowledge of animals. They too bleed, breathe, die, heal, eat, etc., which is used along with observations of animation to make such an inference.


From your standpoint, Harrison, it is an inference built over time, integrating conversations observed here in conjunction with some knowledge of how effective the test is. Again, the Turing test demonstrates that the human intellect can be fooled once some of the more advanced protocols are in place.

So. . .  A wide range of observations, built up over a long period of time and taken with a grain of salt?  I agree that would be the best way to figure it out, using the sort of reasoning that I keep referring to.

 

Keep in mind that primitive folk inferred a spirit behind the movement of the planets. Is a mistaken inference of consciousness a valid inference?

Valid inferences can still be mistaken; it depends on the evidence available (and astronomical evidence hasn't been particularly accessible throughout most of human history, as you very well know).

 

They too bleed, breathe, die, heal, eat, etc., which is used along with observations of animation to make such an inference.

Eating is purposeful action and, as such, I agree; that's an important indicator of awareness.

 

I don't have a bullet-proof alternative right now, but an appeal to ignorance is not considered a valid approach to expanding one's knowledge.

It's not an appeal to ignorance.

 

This "anthropomorphic" reasoning I've advocated is a direct extension of the introspective analogies I've mentioned before, frequently; the sort of inference which I believe to be the very basis of all social reasoning, which also makes it the basis of communication.

If I'm right then you must be using exactly that reasoning in order to convey that it's invalid (which makes it a contradictory expression).

 

I realize that this reasoning could very well lead to the attribution of "awareness" to a motion detector, in some minuscule sense, but I think that to attribute "consciousness" to any nonhuman thing today in that way would require a little bit of context-dropping.  So I suppose my question amounts to what else you see wrong with it.

 

Is that the extent of your objections?

 

. . .  observing an actual person corresponding during times when the transmission is known to be open and active.

Okay, DA, at this point you've defined "a conscious being" as "a homo sapiens being" and so you're absolutely right.

 

 

 

By the way you have now defined it, nothing artificial could ever become conscious.  Mystery solved.  :thumbsup:  :thumbsup: :thumbsup:  

Edited by Harrison Danneskjold

....

 

By the way you have now defined it, nothing artificial could ever become conscious.  Mystery solved.  :thumbsup:  :thumbsup: :thumbsup:  

 

Well, if the IP address leads you to a black box you'd know it wasn't human.

 

In the movie Blade Runner, Harrison Ford's character uses a kind of Turing test (called the Voight-Kampff Test) to identify AIs in human form, so there may be some merit to a future test being used to identify a human consciousness.  But if the right to life is only relevant to human life forms, then AIs need not apply.  Isn't it more likely that VECT's AI would be granted some moral equivalent of the protection of an endangered non-human species?

 

After all, there is only one...

Edited by Devil's Advocate

This "anthropomorphic" reasoning I've advocated is a direct extension of the introspective analogies I've mentioned before, frequently; the sort of inference which I believe to be the very basis of all social reasoning, which also makes it the basis of communication.

If I'm right then you must be using exactly that reasoning in order to convey that it's invalid (which makes it a contradictory expression).

 

I realize that this reasoning could very well lead to the attribution of "awareness" to a motion detector, in some minuscule sense, but I think that to attribute "consciousness" to any nonhuman thing today in that way would require a little bit of context-dropping.  So I suppose my question amounts to what else you see wrong with it.

 

Is that the extent of your objections?

That's a pretty big "If I'm right".  I don't know how to effectively contrast logical inference with inference based on social reasoning (only individuals reason) other than to state that I don't concur. I think isolating eating from the other similarities observed in other non-human life forms, which I think cumulatively point to the conclusion that many non-humans have a form of perceptual awareness, is a little bit of context dropping.

 

As for now, that is pretty much the extent of my objection on this.


I don't know how to effectively contrast logical inference with inference based on social reasoning (only individuals reason) other than to state that I don't concur.

Okay; this is what I mean about "social reasoning" (and you can find more about it at http://forum.objectivismonline.com/index.php?showtopic=27145&page=4):

When you see another person drink some water, you immediately and automatically infer that they must have been thirsty.  HOW?  My personal hypothesis is that it runs something like:

 

P:  They are drinking water.

p:  I only drink water when I am thirsty.

C:  They must have been thirsty.

 

Granted, there's a lot more between those premises that I've left implicit; as it stands it's a non sequitur; but that's the general form that I think it must take (hence an introspective analogy).  That's all there is to it.  I'm not referring to any sort of collective consciousness, nor to anything distinct from logical inference.
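Spelled out with the implicit analogy premise made explicit, the inference might run something like this (one possible reconstruction, not a canonical form):

```latex
% One possible reconstruction of the implicit premises (an assumption,
% not a canonical formalization):
\begin{align*}
&P_1:\ \text{Entity } x \text{ is drinking water.}\\
&P_2:\ \text{When I drink water, it is because I am thirsty (introspection).}\\
&P_3:\ x \text{ behaves relevantly as I do (the analogy premise).}\\
&\therefore\ C:\ x \text{ is probably thirsty.}
\end{align*}
```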

I think isolating eating from the other similarities observed in other non-human life forms, which I think cumulatively point to the conclusion that many non-humans have a form of perceptual awareness, is a little bit of context dropping.

Plants also bleed and heal, in their own ways.  "Death" is only the negation of "life" which either means what plants do (which would be irrelevant) or what animals do (which would make it logically derivative of "awareness" in the first place).

Rather than spewing out a list of all of the things I thought you were wrong about, I responded to the one thing that I agreed with.  Sorry.

 

That's a pretty big "If I'm right".

Yes.  It also comes with a rather large "if I'm wrong" which I am painfully aware of.

 

I look forward to hearing anything else you notice about it.


Consider what's available on the Turing test. Essentially, there is one, albeit complex, area of measure - the ability of a program to replicate the human ability to exchange symbols.

 

Humans eat. Animals eat. While plants take in nutrients, this is not available to direct perception.

Humans drink water. Animals drink water. The house plants here perk up after water is added when the leaves are visibly drooping.

Humans alter course to go around an obstacle. Animals do likewise. Plants, especially trees, make a good obstacle.

Humans and animals navigate their environments in search of food. Plants, again to direct perception, do not.

Humans and animals will move out of the way when something comes at them.

Humans and animals look toward a sound to determine its source.

Humans, animals, and plants can be injured and over time heal.

 

Study of anatomical structure reveals similarities and differences among humans, animals, and plants.

 

These factors can be considered together, and more could be added. Avoiding obstacles, finding food and water, and moving about the environment are all linked to perceptual awareness. Plants, while alive, do not demonstrate these activities. Turning toward light, recent studies indicate, may have something to do with photosynthesis and with where cellular structure is built in relation to the light, which causes growth toward it.

 

Anatomical similarities along the lines of muscles, circulatory systems, hearts, brains, nervous systems, eyes, ears, noses, mouths, and sensitivity in the skin are parts of living beings that differ quite markedly from plants.

 

It is the weight of the combined factors that gives rise to the logical inference that many animals, other than humans, are perceptually aware, over and above their classification as living organisms.

It is the similarity across multiple metrics that lends weight to the conclusion.

It is the absence across multiple metrics in plants, most notably sense organs, internal organs, arms, legs, and a head, that fails to support a similar conclusion.
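A crude sketch of that weighing (the metrics and weights are illustrative only, not a real taxonomy) might look like this:

```python
# Hypothetical sketch of "weight of combined factors": score an entity
# across several observable metrics and compare the cumulative weight.
METRICS = {"eats": 2, "navigates": 2, "avoids_objects": 2,
           "orients_to_sound": 1, "sense_organs": 3}

def awareness_score(observations: dict) -> int:
    """Sum the weights of the metrics this entity actually exhibits."""
    return sum(w for m, w in METRICS.items() if observations.get(m))

dog   = {"eats": True, "navigates": True, "avoids_objects": True,
         "orients_to_sound": True, "sense_organs": True}
plant = {"eats": False, "navigates": False, "avoids_objects": False,
         "orients_to_sound": False, "sense_organs": False}

print(awareness_score(dog), awareness_score(plant))  # 10 vs 0
```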

 

How might this weigh into the Turing factor?

Based on the line of thinking in the quotation earlier from Dr. Binswanger: in the case of AI, we zero in on the latest addition to the evolutionary chain, the ability to think. Setting aside all of the factors that make thinking possible, man devises a computer, notes it can be manufactured and programmed to produce symbols that humans are capable of recognizing, and then wonders if this invention understands the symbols as representative of the human ideas that made the device possible. Much of the confusion seems to arise from how the question(s) are couched.

To derive such a conclusion, it would need to be widely integrated, without contradiction and in accordance with the laws of logic, with the rest of our knowledge - not along the one measure of whether a human can be fooled by evaluating only the output of a Turing program dedicated solely to that end.


Essentially, there is one, albeit complex, area of measure - the ability of a program to replicate the human ability to exchange symbols.

For someone who has the epistemological understanding which you do to refer to human speech as symbolic "exchange" and symbolic "production" - to me, this does not seem as thorough as your analyses usually are.

 

Yes, the only area of measure is a program's ability to exchange symbols, but isn't that a direct result of conceptualization for every conscious entity known to man?  And isn't our concept-formation more important, in the context of what makes us human, than our intestines? 

Isn't that any bit more relevant to "consciousness", at all?

 

It is the absence across multiple metrics in plants, most notably sense organs, internal organs, arms, legs, and a head, that fails to support a similar conclusion.

I absolutely do not believe that this represents your best effort.  And please bear in mind, as you read this, that I say it out of respect; if I thought otherwise then I simply would not bother.

 

I think that of your response because, for one thing, Barbie dolls have eyes, ears, arms, legs, and a head.  And as for internal organs: what child (because every child somehow learns to spot this distinction) has learned the difference between perceptive and senseless entities by first observing their internal organs?

I know you specified, perfectly clearly, that each piece of evidence was only a contributing factor.  I still cannot take them seriously. 

 

It makes as much sense to me as predicting the motion of Jupiter according to any list of any number of socioeconomic factors.

 

man devises a computer, notes it can be manufactured and programmed to produce symbols that humans are capable of recognizing, and then wonders if this invention understands the symbols as representative of the human ideas that made the device possible.

That is true.  If a computer screen displays "hello" it can be easy to forget that it is only displaying a perfectly mindless arrangement of switches and charges, which happen to produce a word, instead of an actual greeting.

 

It is also true that parrots can "produce symbols," sometimes in long and sophisticated patterns, without the slightest shred of a concept.

 

 

 

And I would like to know if that is truly what you think I've been talking about.

Edited by Harrison Danneskjold

My opinion on the Turing Test is that it's not a valid test for a true AI (volition, reason, etc.).

 

The reason is that this test only cares about the end result, not the means that produced it.

This test also measures the end result based on subjective human opinions.

 

As the processing power of computers increases, it's very plausible that in the foreseeable future an AI built strictly from an efficient set of algorithms commanding a large database could easily pass a test of fooling a few humans. But such an AI would still be just a machine.
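For instance, a toy ELIZA-style pattern matcher (a deliberately crude sketch; every pattern here is hypothetical) shows how passable conversational output can come from lookup rules with no understanding behind them:

```python
import random
import re

# Toy ELIZA-style responder: an "efficient set of algorithms commanding a
# large database" in miniature. All patterns here are hypothetical.
RULES = [
    (re.compile(r"\bI feel (.+)", re.I), ["Why do you feel {0}?",
                                          "How long have you felt {0}?"]),
    (re.compile(r"\bI think (.+)", re.I), ["What makes you think {0}?"]),
    (re.compile(r".*"), ["Tell me more.", "Go on.", "Interesting."]),
]

def respond(utterance: str) -> str:
    """Match the input against canned patterns; no understanding involved."""
    for pattern, templates in RULES:
        m = pattern.search(utterance)
        if m:
            return random.choice(templates).format(*m.groups())
    return "..."  # unreachable: the catch-all always matches

print(respond("I feel that machines could never be conscious"))
```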

 

The only way to produce a truly volitional AI is to first understand the principles that produce volition in humans. The pursuit of this knowledge and its replication is what will ultimately produce real artificial life.

 

Taking it one step further, understanding the exact mechanics of, and relation between, reason and emotion (among other things) with respect to volition, and reproducing that relation in a volitional AI, will be the step that makes a real artificial human consciousness.

Edited by VECT

For someone who has the epistemological understanding which you do to refer to human speech as symbolic "exchange" and symbolic "production" - to me, this does not seem as thorough as your analyses usually are.

 

Yes, the only area of measure is a program's ability to exchange symbols, but isn't that a direct result of conceptualization for every conscious entity known to man?  And isn't our concept-formation more important, in the context of what makes us human, than our intestines? 

Isn't that any bit more relevant to "consciousness", at all?

A computer is a bunch of switches, capacitors, and electrical transfer devices, brought into existence by conceptual consciousness. As such, a computer turns pixels on a screen different colors according to which switches are switched which way. It is the mind that observes the pixels on the screen that gives them any meaning. It is as observers that we see digital photography, written text, etc.

 

I absolutely do not believe that this represents your best effort.  And please bear in mind, as you read this, that I say it out of respect; if I thought otherwise then I simply would not bother.

 

I think that of your response because, for one thing, Barbie dolls have eyes, ears, arms, legs, and a head.  And as for internal organs: what child (because every child somehow learns to spot this distinction) has learned the difference between perceptive and senseless entities by first observing their internal organs?

I know you specified, perfectly clearly, that each piece of evidence was only a contributing factor.  I still cannot take them seriously. 

 

It makes as much sense to me as predicting the motion of Jupiter according to any list of any number of socioeconomic factors.

The first section, before the line break, I tried to keep to the perceptual level. After that come examples of where more intensive knowledge can be integrated, without contradiction, with the earlier observations.

 

Barbie doesn't quite cut it. True enough, the molded plastic resembles arms, legs, etc. Barbie is set on a shelf. You return a year later and, barring earthquakes, the shelf falling off the wall, etc., Barbie is still in the place last set. Hmm. No hunger? No thirst? No seeking food? No exploring the environment? (All closely associated with perceptual awareness.) Clap your hands together in front of Barbie's apparent eyes. Unless the doll was designed to blink, did it? Do this in front of Fido or Fifi. Did Fido or Fifi blink? Push a large Tonka truck toward Barbie. Did Barbie move to get out of the way, or just move when the momentum of the truck carried it through the location where Barbie was set? Push the large Tonka truck toward Fido or Fifi. Observe the difference.

 

This is where the multiple metrics come into play. First-level concepts are much easier to point to and identify: "Here is another example of a table." Perceptual awareness is an attribute of some living organisms. Your knowledge of determining which organisms are and are not perceptually aware has been more or less automatized over the years. Trying to apply it fresh to a new concrete involves carefully breaking down what constitutes evidence for it, and what does not.

 

That is true.  If a computer screen displays "hello" it can be easy to forget that it is only displaying a perfectly mindless arrangement of switches and charges, which happen to produce a word, instead of an actual greeting.

 

It is also true that parrots can "produce symbols," sometimes in long and sophisticated patterns, without the slightest shred of a concept.

 

 

 

And I would like to know if that is truly what you think I've been talking about.

It appeared you were talking about inferring a form of awareness to an AI entity that might be engaged on the other end of a Turing test.

 

* * * * *

Another thought on "rights" with regard to AI: property rights would apply. Setting aside the issue of "self"-defense, the AI, if it were a sophisticated robot, presumably could be turned on or off without damaging the unit. It belongs to whoever built or traded for it. Another human being does not have the right to destroy property which does not belong to them.

The question is moot within the sphere of rights as developed in the Objectivist literature, although within the scope of science fiction it might be instructive to explore the ramifications in literary form.


It appeared you were talking about inferring a form of awareness to an AI entity that might be engaged on the other end of a Turing test.

I have been, because I think that any computer's output, if identical to a human being's "output" within the full cognitive context available to me, must function identically (in this case, it must be conscious).

I should furthermore specify that while novels and manifestoes are only long series of symbols, I do not consider any "symbol production" whatsoever to indicate consciousness, any more than any arbitrary string of symbols would constitute a novel.

I had left that implicit, on the assumption that it did not need to be explicitly mentioned.  I can elaborate if you would like.

 

I can actually summarize all of my reasoning behind this, essentially, as:  if it quacks then I call it a duck.

 

Perceptual awareness is an attribute of some living organisms.

Yes. . .

If you mean that perception is essential to consciousness then you are right; a true AI would have to possess a perceptual capacity, which is why I suspect that it would have to be embodied.  If you believe that a perceptual capacity cannot be programmed, or anything along such lines, then I would be delighted to explore why.

 

If you mean that biological stuff is essential to consciousness (as Devil's Advocate does) then we are not discussing the same thing, or at least not in the same sense.

Edited by Harrison Danneskjold

I have been.  However, it occurs to me that I should elaborate slightly.  I don't give a damn whether or not any judge or pool of judges can distinguish between the output of a human being, and that of some algorithm.  When I talk about a computer which appears to behave in exactly the same ways that the human mind does (or passes the Turing test), part of that statement implies an observer, by which I mean myself.

 

And if I could not distinguish it from true consciousness, with the application of the full cognitive context available to me, then its literal consciousness seems logically necessitated.  To say "yes, it really seems to be thinking, and I can't find any evidence that it isn't really thinking, and it's doing all the things that I usually interpret as evidence of thinking" and then conclude that it must be a very sophisticated illusion, is not compatible with my current understanding of a proper epistemology.

That latter paragraph amounts to being fooled by a sophisticated program, in the event that is all it turns out to be.

 

Yes. . .

If you mean that perception is essential to consciousness then you are right; a true AI would have to possess a perceptual capacity, which is why I suspect that it would have to be embodied.  If you believe that a perceptual capacity cannot be programmed, or anything along such lines, then I would be delighted to explore why.

 

If you mean that biological stuff is essential to consciousness (as Devil's Advocate does) then we are not discussing the same thing, or at least not in the same sense.

If perceptual capacity (i.e., awareness) can be programmed, I think it would be incumbent upon those who assert it to demonstrate it. In the event that man unlocks the ability to create life, would it be limited to biological matter? That bridge has yet to be built.


If perceptual capacity (i.e., awareness) can be programmed, I think it would be incumbent upon those who assert it to demonstrate it.

You are absolutely right.  Thank you.  http://en.wikipedia.org/wiki/Computer_vision
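To give that link some substance, here is a minimal frame-differencing sketch using the OpenCV library (one common, very crude approach; the thresholds are illustrative): a programmed device detecting that something in its visual field has changed.

```python
import cv2  # OpenCV, a widely used open-source computer-vision library

# Minimal frame-differencing motion detector; thresholds are illustrative.
cap = cv2.VideoCapture(0)                 # default camera
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:                            # camera stopped supplying frames
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, prev_gray)   # pixel-wise change between frames
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    if cv2.countNonZero(mask) > 5000:     # arbitrary sensitivity
        print("motion detected")
    prev_gray = gray

cap.release()
```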

 

That latter paragraph amounts to being fooled by a sophisticated program, in the event that is all it turns out to be.

As the processing power of computers increases, it's very plausible that in the foreseeable future an AI built strictly from an efficient set of algorithms commanding a large database could easily pass a test of fooling a few humans. But such an AI would still be just a machine.

'If you infer X from Y, then you will be wrong if Y is ultimately false.'  Um. . .  Yes?  The only sense I can make of that statement is that it's meant to imply "you will simply be wrong," along the same lines as "computers will appear to be conscious soon, but won't really be", except that I can't understand that, either.

 

Whatever it is that connects those dots for you guys, it's not in my vocabulary.  And I honestly don't think it's even worth learning about.

 

I'm sorry if I've wasted your time.


That latter paragraph amounts to being fooled by a sophisticated program, in the event that is all it turns out to be.

The Turing test isn't a party trick, to fool someone with. If you can't distinguish a computer from a human, when interacting with them, that's not "being fooled". That's the computer passing the test of intelligent consciousness. The only test available.

 

If passing that test doesn't indicate an intelligent consciousness, then humans aren't conscious and intelligent either, since that's the only proof we have of a human being conscious and intelligent as well.

 

What you're saying here is that you're not willing to hold a computer to the same standard you hold a biological entity, when testing for a property they may or may not share.

Edited by Nicky

The mention of a party trick falls short of what may well be a philosophic trick. Turing allots five minutes, if the wiki is accurate on this aspect. It is a blind test: you don't actually see the other participant in the conversation. What the human being is assessing is the capability of the human programmers and systems developers to provide an experience commensurate with dealing with an intelligent being via a remote connection. The bait and switch is attributing the success to the computer rather than to the programmers and systems developers.

 

Proof is a step-by-step process of establishing the relationships among all the available evidence leading to a conclusion. If the Turing test is the only test available for establishing intelligent consciousness for humans and computers alike, then the conclusion follows accordingly. This does make me wonder how humans were determined to be either intelligent or conscious prior to the development of the Turing test in 1950.

 

As to the standard, is it possible to submit two principles for consideration as guidelines in the matter? The first I'd like to propose is: Existence is identity. For the second, I'm open to: Consciousness is identification.

