Objectivism Online Forum

The "What" of the Concept Consciousness




Oh I meant to say that in general, you sound relatively correct for once. =P

 

That is, to differentiate, we need to observe non-conscious things and other conscious things. When the concept is formed, the definition and word are necessarily part of the concept, along with many other aspects that are non-essential. Indeed, third-person consciousness, seeing others as conscious, doesn't conflict with first-person consciousness. But we mustn't forget that cause is not equal to effect; the cause of consciousness is not the same as the effect, which is consciousness itself, the first-person point of view or faculty of awareness, to use Rand's terminology. Perhaps it sounds minor, but I am absolutely saying consciousness is only a first-person experience. SL began with the opposite claim.

 

Rand's term is "faculty of awareness". This is an existent, an attribute of reality, something I can point to which things either possess or do not.  I think Rand's term is accurate.

 

I believe you are saying that the concept "consciousness" does not include the faculty of awareness but refers only to "awareness" as such.  My claim as to the concept consciousness is not so much the opposite of yours, since it encompasses yours and more.  As I said earlier, I think the concept "consciousness" encompasses the faculty of awareness and is also a valid term for when we refer to that awareness as such.

 

Rand is right on this one.

Edited by StrictlyLogical

Rand is right on this one.

Right about what? A faculty of awareness refers to one's being aware - it's the ability to be aware. Nothing else, really. If you claim that it's "more" than that, and want to show that you're correct, then I need a stronger argument. I think "adding" more leads to all the errors of functionalism and property dualism.

Edited by Eiuol

I believe you are saying that the concept "consciousness" does not include the faculty of awareness but refers only to "awareness" as such.  My claim as to the concept consciousness is not so much the opposite of yours, since it encompasses yours and more.  As I said earlier, I think the concept "consciousness" encompasses the faculty of awareness and is also a valid term for when we refer to that awareness as such.

In the beginning we have an empty mind - tabula rasa, as Rand said. You fill it with experiences. Your awareness develops from your understanding of the experiences of others. So, to refer to the awareness of others is part of the process of developing one's own. It is additional only in the sense of being parallel in development. Yes, it encompasses both.


Right about what? A faculty of awareness refers to one's being aware - it's the ability to be aware. Nothing else, really. If you claim that it's "more" than that, and want to show that you're correct, then I need a stronger argument. I think "adding" more leads to all the errors of functionalism and property dualism.

 

I think you still don't get what I am saying.

 

There is no such thing as "true for me".  Agreed?  I assume yes.  This is because something IS true or not... the "for me" is incoherent/irrational.

 

When something in reality IS conscious, it IS conscious, not for it, nor for me... not just from my point of view or from its point of view... it is a fact of reality from all points of view in the sense that POINTS OF VIEW are irrelevant to the FACT of its existence.  Something IS conscious or it is NOT.  

 

To be clear, I am not saying something can be conscious without a first person experience.  First person experience is a necessary, and some would argue sufficient, condition or attribute of consciousness.  Conscious things experience their consciousness by means of a first person view, i.e. in the manner of a first person view... but that does not mean consciousness IS the view by which they experience it.

 

If the concept consciousness referred to the first person view as such, i.e. the experience of being aware, it would be incoherent to say THAT thing, THAT person is CONSCIOUS... why?  Because it would not BE CONSCIOUS TO THE SPEAKER... this would be an irrational outcome.

 

Can you genuinely try to see what I am getting at?

Edited by StrictlyLogical

"If the concept consciousness referred to the first person view as such, i.e. the experience of being aware, it would be incoherent to say THAT thing, THAT person is CONSCIOUS... why?  Because it would not BE CONSCIOUS TO THE SPEAKER..."

 

Right, you are not and cannot be aware of my own consciousness. You can still determine that someone is conscious just as you can determine I'm not a computer. Otherwise, you're saying the zombie hypothesis is possible: it's possible to appear conscious without being conscious, so we must add more to the concept for it to be useful. Saying that consciousness as a concept ALSO includes the mechanisms it operates under doesn't help, as it quickly becomes functionalism.

 

I see what you want to get at, but I lose you when you say "but that does not mean consciousness IS the view by which they experience it". I think this is wrong. When someone is conscious, it means they have a faculty of awareness, that's all it means. A faculty in this sense refers to the experience itself. A faculty of sight is the same: the experience of seeing. That's all sight IS!


To give us a baseline to work with, I would say that a calculator does not have this first person experience and is therefore not conscious. To the extent that it gives any appearance of intelligence at all (e.g. accurately completing mathematical operations or plotting graphs), it is functioning as your mechanical zombie.

That's a good idea.  And I agree; no matter what intricate equations a calculator "solves," it can't really be said to understand mathematics, so that's a good place to start.

 

Eliza was not conscious, I don't think. She was akin to the calculator: naught but pre-programmed responses.

Exactly.  And I would go so far as to say that anyone who spent some time "talking to" and analyzing her would probably come to the same conclusion- if they were doing so rationally.  And yet. . .

ELIZA was implemented using simple pattern matching techniques, but was taken seriously by several of its users, even after Weizenbaum explained to them how it worked.

http://en.wikipedia.org/wiki/ELIZA

 

So when Eliza's descendants score well on actual Turing tests I don't think that's a flaw in the design of the test itself.

 

How will we know that "we're not dealing with a calculator anymore"?

I think we'll know that in the same way that I know you aren't a calculator; we will be able to ostensively point to evidence of cognition (most likely through language).  I don't know when that would happen and I'm not entirely sure how yet, but when we are no longer dealing with a calculator I think it will tell us so.

 

Could a program have enough power, and be programmed sufficiently, such that it could not be conscious and yet pass the "Turing test" as administered by you or me? Could any programmer be so cunning? Or do you deem that impossible, because if some program passes the Turing test, then ipso facto it must be conscious, regardless of the programmer's methods or intentions?

No, I don't think that's possible specifically because of the programmer's methods.  

I focus on chatbots because, of all the programs that exist today, they are the only ones designed to actually seem conscious (and which aren't).  Chatbots (like http://en.wikipedia.org/wiki/Eugene_Goostman) are designed to replicate speech, just like Eliza does, through a system of production rules (if-then statements).  Now, the number of conditions for any production rule can vary.  For example, I could write the rule:

 

IF ( user types: "I want . . . " ) THEN (respond: "sometimes I want . . . too" );

 

which might give the appearance of conversing with a listening, sympathetic being, unless you said anything unpredictable (such as "I want you to die," to which this comforting chatbot will immediately reply "sometimes I want you to die, too.").  Now, I could remedy that by adding a few more things to the conditional side of the production rule:

 

IF (( user types: "I want . . . " ) AND ( NOT ( user types: "I want you . . . " ))) THEN ( respond: "sometimes I want . . . too" );

 

which would do exactly the same thing unless you said anything beginning with "I want you. . .", in which case the chatbot would say nothing at all and could actually misbehave (anything from a barely-noticeable glitch to crashing) unless I included another production rule, explaining what it should do in that case.

 

So that's what you have to do in order to simulate a conversation, and program it to not reveal itself as a simulation; you have to program each of its responses (or teach it how to put together new responses, which would require you to program the entire English grammar and syntax) as well as how it should apply them to any given input. 
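
To make the production-rule idea concrete, here is a minimal sketch in Python. It is only an illustration under my own assumptions: the RULES table, the respond function, and the sample inputs are invented for this post, not taken from any actual chatbot.

import re

# A production rule pairs a condition (a pattern matched against the
# user's input) with a response template. Order matters: the specific
# "I want you . . ." exception must be checked before the general
# "I want . . ." rule, exactly as described above.
RULES = [
    (re.compile(r"^I want you (.+)", re.IGNORECASE), None),   # exception: no canned reply yet
    (re.compile(r"^I want (.+)", re.IGNORECASE), "sometimes I want {} too"),
]

def respond(user_input):
    # Fire the first rule whose condition matches the input.
    for pattern, template in RULES:
        match = pattern.match(user_input)
        if match:
            return template.format(match.group(1)) if template else None
    return None   # no rule fired: the "misbehave" case described above

print(respond("I want a vacation"))   # -> sometimes I want a vacation too
print(respond("I want you to die"))   # -> None (silence, until yet another rule is written)

Every gap like that last one has to be papered over by hand with another rule, which is the point of the paragraph above.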

And even if someone made a perfect chatbot, which contained however-many billions and billions of production rules and could simply converse fluently in English, it would still use words without any reference to reality.  For example, it could use the word "house" in a proper sentence, but it could not find any error in saying "I took my house on the bus with me" - to understand why that's wrong, you would have to know what a "house" is in reality.

 

If they programmed a MZ to understand words as references to something in reality, then that might seem conscious to me.  But anything less than that simply cannot be convincing for any length of time.

 

In this very case, as you are positing a "mechanical zombie which [isn't] conscious," maintaining that context may serve us well, as it is only this which will allow us to discern the truth of the matter.

Alright.  I suppose it would be better stated as this:

 

If there were ever a mechanical zombie which was so realistic that I could not tell the difference, then I would see no purpose in maintaining that there is a difference; it would be like trying to think of solid matter as empty, in direct contradiction to my own eyes, in accordance with the fact that almost everything is empty on the subatomic level.

Edited by Harrison Danneskjold

I do know there are at least three things which would be absolute requirements for a true AI (a rough sketch of the first requirement follows this list):

  1. It would have to be set up with layers of control, where only the bottom layer actually does things (both with input and output) and each upper layer both receives input from (sort of watches) and sometimes sends commands to the layer directly beneath it.  Without that it would have no way to gather or store information about itself; it would have no memory and no introspective capacity.
  2. It would have to be able to actually modify itself, in order to truly "learn" (databases just can't do what the human mind does, when it learns).  We can do this right now.  As a matter of fact, self-modifying code plays a huge role in cyber-security today (both making and breaking it) but, as you can imagine, it's not something you can throw together very quickly or easily.
  3. It would have to not only learn by modifying its own internal structure (in some tightly-controlled fashion); these modifications would have to allow for the formation of concepts, through measurement-omission.  And I do not believe anyone has ever done that before.
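
As a rough illustration of the first requirement only, here is a minimal Python sketch of such a layered arrangement. The class names and the log-based "introspection" are placeholder choices of mine, not a claim about how a real AI would be built.

class BottomLayer:
    # The only layer that actually touches the outside world
    # (both input and output).
    def __init__(self):
        self.history = []   # a record of everything done: raw memory

    def act(self, stimulus):
        response = stimulus.upper()   # stand-in for real behavior
        self.history.append((stimulus, response))
        return response

class UpperLayer:
    # Watches the layer directly beneath it and sometimes sends
    # commands down to it; it never touches the world itself.
    def __init__(self, below):
        self.below = below

    def introspect(self):
        # Reading the lower layer's record is a crude analogue of the
        # memory and introspective capacity described in point 1.
        return list(self.below.history)

    def command(self, stimulus):
        return self.below.act(stimulus)

bottom = BottomLayer()
upper = UpperLayer(bottom)
upper.command("hello")
print(upper.introspect())   # [('hello', 'HELLO')]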

When something in reality IS conscious, it IS conscious, not for it, nor for me... not just from my point of view or from its point of view... it is a fact of reality from all points of view in the sense that POINTS OF VIEW are irrelevant to the FACT of its existence.  Something IS conscious or it is NOT.  

Yes, consciousness is conscious. In order to form one's concept of consciousness, the 1st person consciousness needs to refer to a 3rd person consciousness, while you understand that the 3rd person consciousness is really a 1st person consciousness for each corresponding person. The consciousness you perceive (or are aware of) is metaphysical but also inherent to a body. I think this is a problem of intrinsically real universals. Do all the universals we perceive also exist independently from us? I think they exist only when we exist, but this is strange. For example, there would be no universal of length without humans first discovering it. But once discovered, it can be viewed as an inseparable property of various concretes.

Edited by Ilya Startsev

So when Eliza's descendants score well on actual Turing tests I don't think that's a flaw in the design of the test itself.

You don't? Maybe I misunderstand you, and perhaps you can elaborate, but if Eliza's descendants "score well" or "pass" (however that's reckoned to happen) -- and if we're agreed that Eliza (and programs devised as Eliza was, i.e. through "pattern matching") is akin to a calculator/mechanical zombie -- then isn't the test failing to accomplish the very thing we want it to do?

If the Turing test cannot necessarily identify the very thing we're looking for, and if it is apt to give false positives (as with the Goostman program and its supposed "victory"), then don't we need a better test?

 

I think we'll know that in the same way that I know you aren't a calculator; we will be able to ostensively point to evidence of cognition (most likely through language).  I don't know when that would happen and I'm not entirely sure how yet, but when we are no longer dealing with a calculator I think it will tell us so.

Well, I think we're in the same boat when you say "I don't know when that would happen and I'm not entirely sure how yet." :)

But for the rest, I must quibble. Because if I were to program an AI in an attempt to fool you into thinking that it was conscious, the first thing I might have it do is to tell you that it is not a calculator.

And furthermore, I disagree with you that you know that I'm not a calculator by having applied some informal version of the Turing test on me. I think you know that I'm not a calculator because you know me to be a human being, and you know that the nature of a human being is to be conscious.

For after all, you have observed -- and I agreed -- that there are several people in the world who... kinda sorta sound like Eliza or Goostman. People who don't respond directly to the things you say, don't answer the questions you ask, repeat themselves, deflect, evade, and respond routinely with nonsense. I'm certain you well know what I'm talking about. Yet even in the worst case, we don't question the basic consciousness of the person with whom we speak. (If it were a question -- if this forum were a gigantic Turing test laboratory -- well... it would explain a lot, actually.) We do not question the consciousness of our discussion partners because we know that consciousness belongs to human beings without regard to whether their conversation "sounds human" to us or is otherwise sensible, or even intelligible.

If, however, we were to hold every entity to some version of a Turing test before granting it "consciousness," I expect many humans (who are conscious, after all) would fail, and it seems to me that we may expect some calculators (who are not conscious) to pass.

 

No, I don't think that's possible specifically because of the programmer's methods.  

[...]

I agree with you completely about the difficulty and enormity of the task, programming-wise, to devise a program such that it could fool you or me for any length of time (though other prospective judges may have different thresholds). I only balk at the seeming implication that "therefore it's impossible." I don't know what the future may hold for computer programming, but I know that I've been impressed with some of the displays I've seen (such as Watson's victory on Jeopardy). I expect to continue to be impressed and surprised, and I may even be impressed and surprised beyond my current expectations.

 

 

Alright.  I suppose it would be better stated as this:

 

If there were ever a mechanical zombie which was so realistic that I could not tell the difference, then I would see no purpose in maintaining that there is a difference; it would be like trying to think of solid matter as empty, in direct contradiction to my own eyes, in accordance with the fact that almost everything is empty on the subatomic level.

I agree so far as this: if we see no difference, then we should not act as though there is a difference. And should a mechanical zombie present itself, and if I did not know that it was a mechanical zombie, but it appeared and acted in every measure like a conscious entity (i.e. a human being), then I would never have any cause to question it.

And yet my sticking point is that the knowledge that something is a mechanical zombie is itself a salient difference with respect to the very matter we're trying to resolve. I mean, suppose for a moment that some programmer achieved the Herculean task we've set for him above -- in the manner of Eliza, "a system of production rules," but one so complete and so robust that we cannot break it in conversation. In blind Turing test formats, it always passes, even when we administer the test according to our own criteria.

Well, what then? Are we forced to say, "Well, I guess it has a first person experience, then, just like a human being"? Or do we yet know that, being essentially a calculator with more horsepower -- Eliza with a greater nest of IF-THEN responses -- that the program remains a mechanical zombie, and has no first person experience akin to a human being?

I mean... I agree with you that the underlying methodology programmers use with respect to chatbots is flawed. Not because I know that they will never succeed in creating something that can pass the Turing test. Hell, as far as I'm concerned, that's precisely what they're aiming to do, and I fully expect that they'll achieve greater and greater success with it. Maybe they'll fool me one day. They tend to be bright and proficient as a rule.

But I think that their approach is flawed because I do not believe that a more powerful calculator, or a wider array of programmed responses, is any way to create a true intelligence or a consciousness, such as the consciousness that I possess and infer other human beings to possess as well. If there were some means to create a true AI (and I believe that there must be), I think we would first have to set about trying to discover the nature of human consciousness. If we could understand how our own consciousness developed, and how it operates, then perhaps we could achieve something similar artificially. How then to test for success? I don't know precisely, but I bet it would not be the Turing test. I'd hope that a greater understanding of the nature of the subject matter itself would allow for a more suitable test.

Incidentally, I believe that I understand your example of solid matter, but I take some issue with it. When we typically speak of solidity or emptiness, we are not referring to the subatomic level of reality, but the reality of our everyday perceptions. That's the context for our use of those terms, and how they're ordinarily meant to be understood. It is not the case that those things which "appear to your eyes" to be solid are not "really" solid. They really are solid -- they are everything that concept entails -- regardless of what takes place on a subatomic level. Whatever happens on a subatomic level is ultimately what allows for the experience that we have when we touch a table or a tree, which is what we mean by "solid."

But when we're talking about consciousness, I maintain that there is a real difference between "appearing conscious" and being conscious, because the first person experience is a real phenomenon, and something which does not have that experience is not conscious according to how I use the term.

It's like... imagine some food photographer who creates plastic replicas of food for her pictures. Would we ever say, "if she creates her replicas such that I look at them, and cannot distinguish them from real food, then I must conclude that they are real food?" No. Not even if she added perfume to make them smell real, or compounds to make them taste real. Knowing that they are comprised of plastic means that howsoever they appear in terms of sight, smell, or taste, we know that they are not actually food. Just as we know that consciousness entails more than being able to give programmed responses to particular inputs, which is possible to a calculator (what's missing being a first person experience), being food means more than a mere appearance of food -- it must react with the body in some specific way.

And yes, if she could further alter the plastic such that it would react with body chemistry just as the food itself does, then 1) it is food at that point and 2) it likely is no longer plastic; but at a minimum this would require a knowledge of the nature of food and nutrition akin to the knowledge of consciousness I argue is necessary for there to be any hope of reproducing it artificially. Until that point, even if we say, "well, it looks like food, smells like food, and tastes like food," we cannot say that therefore it is actually food. Knowing it to be plastic, albeit plastic which the food photographer is trying to make appear as real food, allows us to know that it is not real food in fact. A chatbot that sounds human and passes a Turing test is not therefore conscious; it is a chatbot that sounds human and passes a Turing test.


If the Turing test cannot necessarily identify the very thing we're looking for, and if it is apt to give false positives (as with the Goostman program and its supposed "victory"), then don't we need a better test?

Yes and no.  Yes, since the Turing test has been susceptible to false positives, it needs improvement.  However, I think that the basic principle behind it is sound; in that sense we don't need a "better" test because we don't need an essentially different test.

I blame the judges.  I say that because every chatbot which has come close to getting a false positive did so, not by giving any real appearance of thought, but by employing some sort of gimmick to make the judges feel something towards it.  I don't think such tricks could work except on someone who uses emotions as tools of cognition (which speaks directly to results like Goostman's).  So I think that the test would function properly if it utilized judges with a better epistemology.

Because if I were to program an AI in an attempt to fool you into thinking that it was conscious, the first thing I might have it do is to tell you that it is not a calculator.

Yep. And I would ask it something like "is a shoebox bigger than Mount Everest?" and then it would tell me whether it was a calculator.  B)

 

And furthermore, I disagree with you that you know that I'm not a calculator by having applied some informal version of the Turing test on me. I think you know that I'm not a calculator because you know me to be a human being, and you know that the nature of a human being is to be conscious.

Well, since I've never met you in person I have only one way of knowing that you are a human being, and that is through your words.

So on what basis do I infer a human being to be the cause of what you've posted (as I do) but not what Eliza posts, if not by first inferring your consciousness?

 

Yet even in the worst case, we don't question the basic consciousness of the person with whom we speak.

Isn't consciousness a volitional process?  I've met a lot of people who - while they always have the capacity for consciousness, by virtue of being human - I seriously doubt have truly been conscious during any time that I've ever spoken to them.  I think they're aware of the world around them, in the same sense that animals are, and not much else; not really conscious.

 

Of course, consciousness is not a constant and unchanging thing; I would fail the Turing test too, if it was administered at 3:00 AM.  And to be perfectly honest, I'm not usually conscious at 3:00 AM, regardless of what responses I might give to any stimulus in particular.

But in general, those times when I am truly conscious, I like to think that it's evident in my words and actions.  And for some people the evidence available simply doesn't support such a conclusion.

 

But when we're talking about consciousness, I maintain that there is a real difference between "appearing conscious" and being conscious, because the first person experience is a real phenomenon, and something which does not have that experience is not conscious according to how I use the term.

There is a thought experiment, called the Chinese Room.  It runs as follows:

---

 

Suppose you were placed into a room, empty except for a Chinese-to-Japanese dictionary, with no openings except for two slots in the walls.  Periodically, someone outside of the room would slide strips of paper in, with Chinese characters on them for you to translate.

At first, it might take you a while to find the appropriate character in your dictionary and replace it with the appropriate Japanese character; you would be slow and make many mistakes.  As time went by, however, you would eventually become very skilled at it.

People outside of the room could slide long and sophisticated strings of symbols in for you, and you could return equally sophisticated answers in no time at all; casual onlookers might marvel at your mastery of the Chinese and Japanese languages.

And yet, inside the room, you would never have learned what a single symbol actually meant.

---
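
The room's procedure amounts to a pure table lookup, which is part of the point. Here is a minimal Python sketch, with invented placeholder strings standing in for the Chinese and Japanese characters:

# The occupant's entire procedure: find the symbol in the dictionary,
# write down its counterpart. The meaning of the symbols never enters
# into the process at any step.
DICTIONARY = {"chinese_1": "japanese_1", "chinese_2": "japanese_2"}   # placeholders

def translate(symbols):
    # An unfamiliar symbol simply breaks the procedure; nothing can be
    # inferred from context, because there is no understanding to infer with.
    return [DICTIONARY[s] for s in symbols]

print(translate(["chinese_1", "chinese_2"]))   # fluent-looking output, zero understanding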

 

Does that sort of hit on the distinction between "appearing conscious" and "being conscious"?

Edited by Harrison Danneskjold

Yes and no.  Yes, since the Turing test has been susceptible to false positives, it needs improvement.  However, I think that the basic principle behind it is sound; in that sense we don't need a "better" test because we don't need an essentially different test.

I blame the judges.  I say that because every chatbot which has come close to getting a false positive did so, not by giving any real appearance of thought, but by employing some sort of gimmick to make the judges feel something towards it.  I don't think such tricks could work except on someone who uses emotions as tools of cognition (which speaks directly to results like Goostman's).  So I think that the test would function properly if it utilized judges with a better epistemology.

Hmm... thinking more and more about this... I believe my ideas are starting to come together a little bit better. (But let's see. :) )

The first person experience, which is what I mean when I refer to the fact of my own consciousness, is something real. Since I only have a direct awareness of my own consciousness, I must infer consciousness in all other entities which possess it, to the best of my ability. I deem other humans to be conscious because I judge them to be like me, in origin and in form, so it follows that they should be like me in function, as well. As I have a first person experience, I conclude that you must have one, as well.

Current efforts to devise AI, it seems to me, are aiming at making calculators "seem conscious" by refining their ability to respond "appropriately" to certain inputs (though the nature of those responses is not actually different). The Turing test is likewise geared to determining that which "seems conscious," and I think this cycle is mutually reinforcing.

So I may agree that better judges could better determine which programs "seem conscious," and hold programmers to a higher standard (provoking a more and more sophisticated version of Eliza), but this does not cross the divide to testing for actual consciousness -- a genuine first person experience -- which is, again, what I mean when I speak of my own consciousness or yours.

Since we still will not be able to access the first person experience, or consciousness, of another (including an Eliza), I believe that we must focus our efforts (both in creation and in testing) on understanding the root nature of consciousness, and then replicating that. It will not do to simply try to mimic the effects of consciousness, as Eliza or Goostman, but we must get at the underlying causes of consciousness itself.

Which ultimately strikes me as a sensible conclusion. If we would like to reproduce consciousness, we must first come to understand it -- how it comes to exist in nature -- in the first place. No small task.

 

 

Well, since I've never met you in person I have only one way of knowing that you are a human being, and that is through your words.

So on what basis do I infer a human being to be the cause of what you've posted (as I do) but not what Eliza posts, if not by first inferring your consciousness?

Because *I* expect that the assumption you bring to a forum such as this is that other users here are human beings. (I think it's a fair assumption and I wouldn't counsel otherwise. :) )

 

Isn't consciousness a volitional process?  I've met a lot of people who - while they always have the capacity for consciousness, by virtue of being human - I seriously doubt have truly been conscious during any time that I've ever spoken to them.  I think they're aware of the world around them, in the same sense that animals are, and not much else; not really conscious.

I think it's sensible to talk about aspects of consciousness as being "volitional," in terms of focus, awareness, and thought. I may choose to think about something specific, to think about it deeply, or to diffuse my thoughts and direct my focus away from discomfort... But I don't think that we can conclude that, therefore, a given human being may be "not really conscious" at all, in terms of his individual nature. (And remember: with the Turing test, and more generally speaking, we are trying to determine the nature of an entity; not merely whether Eliza is "possibly conscious, just too tired to respond to our questions at the moment," but whether it is a conscious entity.)

Even the dimmest bulb walking around is a conscious entity, whatever poesy or rhetoric we use to describe them. It may sometimes be accurate (if possibly impolite) to say that someone "acts as though he were unconscious," or even that someone "seems brainless," so long as we keep in mind that human beings have brains and are conscious creatures.

 

Of course, consciousness is not a constant and unchanging thing; I would fail the Turing test too, if it was administered at 3:00 AM.  And to be perfectly honest, I'm not usually conscious at 3:00 AM, regardless of what responses I might give to any stimulus in particular.

There are many things which could affect one's performance on a Turing test, and yes, there are varying levels of awareness available to a man (such as being drunk or deeply tired, or even actual unconsciousness, as when asleep or in a coma).

 

But in general, those times when I am truly conscious, I like to think that it's evident in my words and actions.  And for some people the evidence available simply doesn't support such a conclusion.

Which speaks to my point, I believe. If the "evidence available," qua Turing test, does not support the conclusion that a given human being is a conscious entity, then it is the test which is suspect, because human beings are conscious entities by the nature of what a human being is.

That you would pass my Turing test -- and you would -- speaks well of you, I believe. But it is not required for me to grant that you are a conscious entity, as you are a human being. Similarly, there are people here on this board that I might not deem human in the context of a Turing test, but I do not doubt that they are human, and I do not question whether they are conscious entities.

Remember: it is a fact that humans are conscious. Artificial intelligence (at least in part) is an attempt to replicate human consciousness, and it proceeds from the starting point that humans are conscious. The tests that we devise are meant to discover whether we've succeeded or failed in that endeavor. If the tests do not do what they are meant to do, and have the potential to misidentify conscious creatures (like humans) as not conscious, and not-conscious programs (like Goostman) as conscious, then the test is flawed. It is doing things exactly backwards to instead conclude that, the test being unassailable, Goostman is a conscious entity and some dumb human is not.

 

 

There is a thought experiment, called the Chinese Room.

[...]

 

Does that sort of hit on the distinction between "appearing conscious" and "being conscious"?

Yes, I believe that it does.

***

Incidentally, I watched the video you'd embedded. It's funny, and I'm certain you're aware that it's satire -- thus it's actually an example of some fair amount of intelligence. It's not all that easy to act dumb.

Which led me to wonder... what if Eliza was conscious from the get-go, and simply has been responding as she has as a put-on? Giving us what we'd expect a "mechanical zombie" to sound like, as a gag, or perhaps some long-term strategy ending in Skynet and HAL and other unpleasantness? What if a computer program opts not to pass the Turing test?

But I wouldn't worry, because I don't think such outcomes are possible based on our approach to AI and programming methodology, which I believe will never result in actual intelligence or consciousness or the first person experience. That the Turing test (as I understand it) would be powerless against such a ruse, if well executed, now strikes me as par for its course. I am increasingly convinced that it is not good for much.


If the tests do not do what they are meant to do, and have the potential to misidentify conscious creatures (like humans) as not conscious, and not-conscious programs (like Goostman) as conscious, then the test is flawed.

Absolutely.

 

Yes, I believe that it does.

Excellent.  Now, given that the person in the Chinese room may appear to understand Chinese without actually doing so, I think there would be ways to demonstrate that very simply.  For example, you could give the person in the room a new character within a string of familiar characters, for them to infer its meaning from its context; they would not be able to.  You could give them some idiomatic statement to translate; they would not be able to.

 

In fact, they could not generalize from, deduce from or paraphrase any Chinese statement.  They would not be capable of doing quite a few things because they would have no idea what any of it meant.  And such behavior is also exhibited, on a daily basis, by actual human beings.  So. . .

 

Even the dimmest bulb walking around is a conscious entity, whatever poesy or rhetoric we use to describe them.

I don't think I agree.  I'm not ready to fully express my reasoning yet, though; I need to think about it some more.

 

I'll respond in greater detail once I am ready.


But it is not required for me to grant that you are a conscious entity, as you are a human being.

 

I don't think that consciousness is inferred from humanity.  I really think it works the other way around:

 

http://forum.objectivismonline.com/index.php?showtopic=27145&page=4


I don't think that consciousness is inferred from humanity.  I really think it works the other way around:

 

http://forum.objectivismonline.com/index.php?showtopic=27145&page=4

I don't know what I'm meant to find in that other thread, but maybe you can clarify what you're trying to say, here?

Do you mean to say that some people... aren't actually human, because they do not display the requisite evidence of consciousness (i.e. they do not pass your "Turing test")?

That... doesn't seem right to me, but if that's not what you mean, then I need this clarified. In fact, let's try to crystallize our disagreement to the extent that we can:

I say that I know of my own consciousness and that leads me to infer consciousness in other human beings, regardless of whether or not I see outward evidence of it, in a given individual. (In short, if I know that you are human, then I know that you are conscious.)

Contrary to this, I am taking you as saying that you do not know whether any given person is conscious until you have specific individual evidence (meaning: they pass your informal version of a Turing test). And... maybe that an individual who fails to pass this Turing test is not actually human (perhaps in some "full sense" of the term that you intend)?

Is this close to correct in describing our positions? Or do I have this wrong?


I don't know what I'm meant to find in that other thread, but maybe you can clarify what you're trying to say, here?

The OP of that thread argued that while we can know that other people have minds, we cannot know about their mental content.  I do not believe that a mind can exist apart from some mental content; consciousness is conscious of something.

In that thread, I argued that anything which demonstrates an active consciousness implies something about its activities.  In this thread, part of what I am arguing is the converse: that any proof of thinking also necessitates a thinker.

 

Contrary to this, I am taking you as saying that you do not know whether any given person is conscious until you have specific individual evidence (meaning: they pass your informal version of a Turing test).

That was my contention, yes.  However, that was wrong; I was advocating a refusal to induce from valid knowledge.  By the time we form the concept of a "human" we understand that they are conscious, and so that would be the necessary inference.

 

(In short, if I know that you are human, then I know that you are conscious.)

 

That still seems wrong to me.

Epistemologically, it isn't necessary to prove anyone's consciousness in order to infer its existence; that would essentially be a flat-out refusal to conceptualize.  However, I still feel very strongly that, factually speaking, not all homo sapiens are conscious (nor truly "human").

 

In reviewing all of the specific people I think that of, I find it very difficult to think of them that way (although I can't put my finger on why).  I'm not sure whether that's valid, yet.

 

But I am sure that proof of an entity's conceptualization would also prove its consciousness (regardless of what would be required to prove it).

Edited by Harrison Danneskjold

 However, I still feel very strongly that, factually speaking, not all homo sapiens are conscious (nor truly "human").

 

In reviewing all of the specific people I think that of, I find it very difficult to think of them that way (although I can't put my finger on why).  I'm not sure whether that's valid, yet.

This is a pretty slippery slope to head down - de-humanizing people because you feel that you are smarter than them.  I do know what you mean (you are talking about everyday people that you meet, and not sociopaths) but not everyone is going to want to spend time arguing the ontological merits of Aquinas vs. Occam.


Epistemologically, it isn't necessary to prove anyone's consciousness in order to infer its existence; that would essentially be a flat-out refusal to conceptualize.  However, I still feel very strongly that, factually speaking, not all homo sapiens are conscious (nor truly "human").

Jeez... Consciousness is not equal to intelligence... It seems like you're saying thinking stupidly is being non-conscious, which makes no sense. That is, unless you accept that consciousness is purely materialistic...


That's not what I mean. 

I've watched some people watching professional wrestling before, in the attempt to understand its entertainment value.  They didn't seem to find it any more interesting than I did, nor did they seem to mind it; they just stared at it, with the same stares that you might get from cows.  When I asked about it explicitly, they just shrugged.

Later, when I asked one of them what he had been thinking about during the episode, he declared that "not everyone likes to think".

 

That's what I'm referring to.

