Welcome to Objectivism Online Forum

Objectivism Online is a forum for discussing the philosophy of Ayn Rand.

Harrison Danneskjold

Zombies and Artificial Minds


116 posts in this topic

The problem is that it has no idea what these things are.

How do you know?

Having ideas is one of the things a mind does. I'd grant that modern computers are, in fact, mindless, because if we really look at their actions they almost invariably behave very, for lack of a better word, stupidly. They do things that no thinking person would ever do.

However, I find it entirely plausible that this won't always be the case. Far from it; within my lifetime I intend to write a program that'll stump Don Athos and Eiuol.

The primary thing about it is that if something walks like a person, talks like a person and holds philosophical discourse like a person then to say it is not, in fact, a person (against all evidence) is to say that it's a zombie.

I don't have everything about that worked out yet but I do know that there is something very wrong with the idea of a zombie.

 

*** MOD NOTE: Split from here. ***

Edited by Eiuol


The primary thing about it is that if something walks like a person, talks like a person and holds philosophical discourse like a person then to say it is not, in fact, a person (against all evidence) is to say that it's a zombie.

Harrison,

Do you think it possible that 10 different groups of programmers could write 10 different AI programs that, when running, would not only be indistinguishable from one another, but from a human?  Is there an infinite number of programs that could be written that, when run, would be indistinguishable?  Would the programs ever need to be upgraded?  Or debugged?


Do you think it possible that 10 different groups of programmers could write 10 different AI programs that, when running, would not only be indistinguishable from one another, but from a human? Is there an infinite number of programs that could be written that, when run, would be indistinguishable? Would the programs ever need to be upgraded? Or debugged?

I'm sorry; I'm pushing 48 hours without sleep and I strongly suspect I've missed your point.

Could you be more explicit, please?


Do you think it possible that 10 different groups of programmers could write 10 different AI programs that, when running, would not only be indistinguishable from one another, but from a human?

I don't see any reason why it couldn't be.

Is there an infinite number of programs that could be written that, when run, would be indistinguishable?

Absolutely. That runs along the same lines as "how many different sentences could be written to convey a single meaning" and that's clearly infinite.

That's obviously infinite.

That clearly isn't finite.

It's demonstrably false that any upper bound can be put to that.

(et cetera)

Would the programs ever need to be upgraded?

Not if they're indistinguishable from a human mind; we upgrade ourselves.

Or debugged?

Maybe. There are some people who should probably be debugged.

I still don't get what you're driving at.


Far from it; within my lifetime I intend to write a program that'll stump Don Athos and Eiuol.

 

I don't mean to discourage any of your efforts -- and I wish you great success in them -- but just so that it's said, stumping me isn't (or oughtn't be) the criterion for establishing the reality of an "artificial" consciousness.  With respect to Turing, neither is stumping X number of people.

 

The primary thing about it is that if something walks like a person, talks like a person and holds philosophical discourse like a person then to say it is not, in fact, a person (against all evidence) is to say that it's a zombie.

 

We might find greater success (and perhaps could even do something with reasonable accuracy today) in trying to replicate something like a human infant.  I mean, for the purposes of "stumping" or "fooling" others, maybe we could construct a mechanical baby, such that it "lies around like a baby, babbles like a baby and drools like a baby" to the satisfaction of any or all.

 

Yet I would continue to say that it is not a baby, which I contend is not "against all evidence," because "all evidence" here includes our knowledge of the mechanical baby's construction and metaphysical nature -- does it not?



...

 

Yet I would continue to say that it is not a baby, which I contend is not "against all evidence," because "all evidence" here includes our knowledge of the mechanical baby's construction and metaphysical nature -- does it not?

 

This is an interesting question...  Is a metaphysical creation substantially different than a man-made copy?

 

Nature creates life

Man is a living creation

Man creates artificial life

 

Is artificial life not natural in origin, coming to be via a natural process called Man?

 

Edit:  Would it not also be called a child of Man??

Edited by Devil's Advocate


Can two different things be indistinguishable from one another?


Trick question; how can two different things be indistinguishable?  However, depending on how many shades of gray you're willing to consider...

 

Twins and clones are nearly indistinguishable as genetic pairs.  According to some quick research on my part, skin cells can be identical (or identical 70% of the time).  So I think it's at least philosophically possible with certain qualifications related to A=A.  But that may be setting the bar too high if the goal is only to establish if two objects are the same kind of thing.
--
"Cell division produces daughter cells that are genetically identical to each other, as well as to their parent cell, which no longer exists. Being genetically identical to their parent cell helps the new cells function properly. A skin cell, for example, divides and produces skin cells genetically identical to it."
http://www.apelslice.com/books/9780618843175NIMAS/HTMLOUT/HTML/c_id4613321.html
--
“We saw that 30 percent of skin cells harbor copy number variations (CNV), which are segments of DNA that are deleted or duplicated. Previously it was assumed that these variations only occurred in cases of disease, such as cancer. The mosaic that we’ve seen in the skin could also be found in the blood, in the brain, and in other parts of the human body.”
http://news.yale.edu/2012/11/18/skin-cells-reveal-dna-s-genetic-mosaic
 


Do we really want to use the debate technique of arbitrary assertions "cherry-picked" to fit the point of the person making them? It's a common technique on philosophy forums: the argument descends into the specifics of one or more of the assertions made to prove or disprove a point, and the real philosophical issue (usually a concept) falls away altogether.

 

Someone reiterate the OP premise under discussion, or the legitimate side issue that spurred this thread. What, again, are we talking about here?


The Turing Test is premised on an AI being indistinguishable from a human. If two different things (above the atomic/molecular level, perhaps?) cannot be indistinguishable from one another, other than by location, then is the Turing Test a valid test for AI? If two different things can be indistinguishable, then what does this say about A is A and the Law of Identity?

 

Is it valid to say that the only thing that can think like a human is a human, referencing nothing more than the Law of Identity? There are many different consciousnesses on this planet, e.g. dolphins, bats, dogs, etc., and each one is distinguishably different, not just by genus, but also by species and, among the more complex animals, by individuals.

Edited by New Buddha


 

We might find greater success (and perhaps could even do something with reasonable accuracy today) in trying to replicate something like a human infant.  I mean, for the purposes of "stumping" or "fooling" others, maybe we could construct a mechanical baby, such that it "lies around like a baby, babbles like a baby and drools like a baby" to the satisfaction of any or all.

 

Yet I would continue to say that it is not a baby, which I contend is not "against all evidence," because "all evidence" here includes our knowledge of the mechanical baby's construction and metaphysical nature -- does it not?

 

Without going so far as to try anything extreme on a suspected mechanical baby, like cutting it open, I think other, more mundane things would still give it away as not actually being a baby. We probably couldn't power it with food. Even if we could clear that hurdle, it would eventually be discovered as an imitation when it failed to grow and develop like humans do.


The Turing Test is premised on an AI being indistinguishable from a human.

The Turing test is premised on a test subject communicating with a computer, through a terminal, and not being able to tell that it's not a human.

 

It's not meant to demonstrate that a human and a computer have the same identity. It's meant to demonstrate that they have ONE common attribute: intelligence.

 

There's no reason to bring the Law of Identity into this.


Even if we could clear that hurdle, eventually it would be discovered as an imitation when it failed to grow and develop like humans do.

This made me realize that part of accepting the zombie thought experiment - that it is metaphysically possible for there to be an entity that is in all ways identical to a human except that it lacks mental states - misses everything about development. It would not be possible to even reach a truly conceptual mind unless it developed its concepts over time as humans in fact do. Maybe in a single instant, a mindless entity can look identical to a mindful entity, but if you engage it again tomorrow, all you need to look at is how or if it developed its concepts.


What again, are we talking about here?

Read the thread it was split from.

This made me realize that part of accepting the zombie thought experiment - that it is metaphysically possible for there to be an entity that is in all ways identical to a human except that it lacks mental states - misses everything about development.

Exactly! Can a mindless mechanism do exactly what a mind does, without itself having a mind?

Can two different things be indistinguishable from one another?

Depends on what counts as "things" and how we're distinguishing. It's about what we're focusing on; what's relevant and what isn't. To that end I think the video you posted in the last thread is right; what's relevant are actions (which are ultimately just motions).

I'll come back to that, though.

Yet I would continue to say that it is not a baby, which I contend is not "against all evidence," because "all evidence" here includes our knowledge of the mechanical baby's construction and metaphysical nature -- does it not?

No; it wouldn't be a baby, at all. Babies poop *all* the time!

I'll address the rest at length, momentarily.


... stumping me isn't (or oughtn't be) the criterion for establishing the reality of an "artificial" consciousness...

... I mean, for the purposes of "stumping" or "fooling" others ...

We seem to have failed to communicate a few things.

Firstly, the Turing test is not about fooling anybody, regardless of how it may have been abused. The only philosophical content in it is an identification (whether true or false) of relevance; that one can determine "consciousness" from nothing more than dialogue.

Now, clearly you disagree with that, but if the test consisted of anatomical dissection instead of conversation it still wouldn't be an attempt to fool anybody.

Secondly, although I intend to stump you, this doesn't imply deceit either; I think you're wrong. And it seems to be one of those inductive points that can't easily be put into a syllogism (although I will try), but may be simple to demonstrate ostensively, if we had any real examples to use.

That's what my intention to stump you means.

Yet I would continue to say that it is not a baby, which I contend is not "against all evidence," because "all evidence" here includes our knowledge of the mechanical baby's construction and metaphysical nature -- does it not?

I'll have to elaborate later, but what about the neuroscientist who denies any evidence that brains can be conscious?


"all evidence" here includes our knowledge of the mechanical baby's construction and metaphysical nature -- does it not?

Let's suppose it does. Let's suppose that if we could reduce each of the baby's actions to some clockwork mechanism then this would validly disqualify it from being considered as "conscious".

Now, we know that the mind is what the brain does and that the brain is made out of neurons, each of which is a tiny clockwork mechanism. We have mathematically quantified exactly how they work and why and they're nothing mysterious; only complicated. Therefore, the same reasoning leads us to conclude that brains can't be conscious, either; their composition doesn't allow for it.
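The "clockwork" picture of a neuron that this argument leans on can be sketched with the textbook leaky integrate-and-fire model: a deterministic update rule in which the same input always yields the same spikes. The parameters below are illustrative, not biologically calibrated.

```python
# A minimal leaky integrate-and-fire neuron: a deterministic "clockwork"
# update rule. Parameters are illustrative, not biologically calibrated.

def simulate_lif(input_current, dt=1.0, tau=10.0, v_rest=0.0,
                 v_threshold=1.0, v_reset=0.0):
    """Return the spike times produced by a sequence of input currents."""
    v = v_rest
    spikes = []
    for step, i_in in enumerate(input_current):
        # Membrane potential decays toward rest and integrates the input.
        v += (dt / tau) * (v_rest - v + i_in)
        if v >= v_threshold:          # threshold crossed: fire
            spikes.append(step * dt)
            v = v_reset               # reset after the spike
    return spikes

# The same input always produces the same spike train: perfectly predictable.
print(simulate_lif([1.5] * 50))
```

Nothing in that loop is mysterious, which is exactly the point being made: if predictable mechanism disqualifies computers from consciousness, it disqualifies neurons too.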

This leads us to start looking for some sort of quantum mechanical effects on the brain's activity or for some other kind of stuff (Binswanger's dualism) that could refute such an argument, because we've already accepted its basic premise:

That consciousness is incommensurate to clockwork.

As long as we hold to that we'll be unable to explain how a brain could ever do what it does, because that would mean explaining how clockwork could give rise to consciousness (which, in fact, we've already ruled out). This ultimately boils down to a question of "metaphysical alternatives" which is why I've expressed some hesitancy to get into it, yet. However, this much does seem clear:

The claim that "consciousness" entails literal and metaphysical open-endedness is the claim that it cannot be composed of any *thing* in the entire universe.

Edited by Harrison Danneskjold


We seem to have failed to communicate a few things.

 

That's not unusual for online discourse, but neither is it a source of worry.  One of the nice things about this format is that we have an ongoing opportunity to address such failures.

 

Firstly, the Turing test is not about fooling anybody, regardless of how it may have been abused. The only philosophical content in it is an identification (whether true or false) of relevance; that one can determine "consciousness" from nothing more than dialogue.

All right.  I can respect the fact that you see the Turing test as being more than "fooling" (though I think that your earlier language of wanting to "stump" Eiuol and myself evokes that interpretation).  You're right that there's a sincere question involved of how we recognize consciousness in other entities, whether natural or "artificial."

 

Yet if you observe ongoing and historical attempts to satisfy the Turing test, I think you'll agree that people are trying to "fool" others.  If you don't agree to that, then I suppose we'll have to work that out, too.  :)  Regardless, my position continues to be that satisfying some arbitrary threshold of convincing/fooling/stumping X number of people does not mean that the underlying premise is established -- here, that an actual consciousness has been created.

 

It's like those people who create/manipulate artificial food for the purpose of shooting commercials or producing print ads, and saying "well, if 9 out of 10 people who see the commercial think it's real food, then it must be real food!"  But no, "food" made of plastic and glue in that manner isn't food, even if everyone who watches the commercial is convinced otherwise.  Such is the Turing test.

 

And I know that's a well-worn example in this longstanding discussion between us (I can already hear the familiar replies in my mind ;) ), for which I apologize; I've been trying to come up with a new analogy over the last day-and-a-half... and here's what I've managed:

 

You know the old joke about an infinite number of monkeys at an infinite number of typewriters?  (If not, you can find it discussed here.)  Now as a thought experiment, I'm not certain how seriously I take it, but let that pass for the moment.  Suppose it were true and suppose that we somehow managed to get the results that it promises (for after all, if one monkey among an infinite range could do it, that would suggest that a monkey in a finite range could as well; you'd just need to be holding the right lottery ticket).

 

Would we be able to infer that the monkey who typed out Hamlet had the genius of Shakespeare?  Or would we know (largely accounting for having ourselves set up the parameters of the experiment, using monkeys to type practically random sequences) that, these particular results notwithstanding, the monkey is still a simple monkey?
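For what it's worth, the arithmetic behind the monkey joke is easy to sketch. This toy calculation assumes a 27-key typewriter (26 letters plus the space bar) and uniformly random typing; the phrase is just an example.

```python
# Toy odds for the monkeys-at-typewriters joke: the probability that one
# uniformly random keystroke sequence reproduces a target phrase exactly.
# Assumes a 27-key typewriter (26 letters plus space); purely illustrative.

def chance_of_typing(phrase, alphabet_size=27):
    """P(one random attempt of the same length matches the phrase)."""
    return (1 / alphabet_size) ** len(phrase)

phrase = "to be or not to be"
p = chance_of_typing(phrase)
print(f"P(one random attempt types {phrase!r}) = {p:.3e}")
```

The per-attempt probability is astronomically small, yet success is guaranteed in the limit of infinite attempts; which is exactly why a lucky hit would tell us nothing about the monkey's genius.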

 

I think if we program an entity to give responses which "sound like conscious responses," that that's also what we get -- something that sounds like, or looks like consciousness, but underneath is still a program like Eliza, or Excel, just mindlessly running its code.

 

If we sincerely wanted to reproduce consciousness, I think that there's a lot more that would have to go into it than mimicking its appearance or output.  Primarily we would have to first determine how consciousness arises in the first place.  Or if not -- if we could stumble over it blindly -- then I think that at least replicating the physical structures that appear to produce consciousness in nature would be a better attempt.

 

Now, clearly you disagree with that, but if the test consisted of anatomical dissection instead of conversation it still wouldn't be an attempt to fool anybody.

"The" test?  This may be accounting to my ignorance -- and I invite your correction if so -- but I don't believe that the Turing test does involve "anatomical dissection."  I think that such a measure would be a different test altogether.

 

Would such a test run into many of the same sorts of problems that I find with the Turing test?  Very likely.  Yet I don't believe that my problems with the Turing test are rightly extended to a dismissal of "any possible test," and I don't think that any possible test for artificial consciousness is the Turing test.  My criticisms are meant to be specific.

 

For instance, to return to my own example, I don't think that making plastic and glue "look like food" is the same as making food, not even if you add perfume (to make it "smell like food") and artificial flavoring (to make it "taste like food").  Yet if you understood the nature of how food interacts with the human body, and if you could somehow create something out of plastic and glue that not only "looked like food, smelled like food, and tasted like food," but also interacted with a human body in the digestive and nutritive fashion that food does, well by George... I'd conclude that you've made food!

 

Actually at that point, it might not even be so important whether it "looked" like any familiar food or not...

 

Secondly, although I intend to stump you this doesn't imply deceit either; I think you're wrong. And it seems to be one of those inductive points that can't be easily put into a syllogism (although I will try), but may be simple to demonstrate ostensibly - if we had any real examples to use.

That's what my intention to stump you means.

 

I don't have any concerns about "deception."  But to me, the Turing test endeavor is akin to the stage magician's tricks.  David Copperfield didn't walk through the Great Wall of China, no matter what it may have looked like.  I know that you mean something more substantial than this, but again -- if this is at issue at all (it oughtn't be) -- I have no questions about your character.  And if you asked me to "pick a card, any card," I'd be happy to play along.

 

 I'll have to elaborate later, but what about the neuroscientist who denies any evidence that brains can be conscious?

 

Well, that's a tough nut to crack, isn't it?  It's always a hard case when someone is "denying evidence," and especially the evidence to which they are themselves aware, introspectively.

 

Let's suppose it does. Let's suppose that if we could reduce each of the baby's actions to some clockwork mechanism then this would validly disqualify it from being considered as "conscious".

Now, we know that the mind is what the brain does and that the brain is made out of neurons, each of which is a tiny clockwork mechanism. We have mathematically quantified exactly how they work and why and they're nothing mysterious; only complicated. Therefore, the same reasoning leads us to conclude that brains can't be conscious, either; their composition doesn't allow for it.

 

But that's the thing, isn't it?  We know that the composition of a brain (or more generally speaking, a man) does allow for consciousness.  We know this because we are conscious.

 

I don't argue that a consciousness could never be created "artificially," because any given attempt must involve something "clockwork" and is therefore disqualified; I argue that the Turing test, both as conceived and applied, is not a true measure (and I also believe that our attempts to program a better/more-complicated Eliza are not the right approach; it's akin to setting monkeys at typewriters and hoping that this somehow turns one of them into a playwright).

 

Hell, people create consciousness all the time, but instead of using lines of code to do it we use sperm and eggs.  So there's no question that we're capable of this, but I don't think we yet have sufficient understanding of how consciousness arises to achieve it outside of the physical tools we were born with, and which natural selection has fine tuned to the task.

 

Hopefully this either helps to rectify any failures in our communication, or at least creates a few new ones for the pleasure of further discussion.  :)

Edited by DonAthos


I have no questions about your character.

No; I didn't believe so. But I do recognize that the statement "I intend to write a program that'll stump Eiuol and Don Athos" could be interpreted in a few different ways.

Yet if you observe ongoing and historical attempts to satisfy the Turing test, I think you'll agree that people are trying to "fool" others.

Absolutely. This has given me no small amount of irritation, too; not with the programmers (who are, after all, chasing multimillion dollar prizes) but with the morons that couldn't distinguish Eliza from a real therapist.

Regardless, my position continues to be that satisfying some arbitrary threshold of convincing/fooling/stumping X number of people does not mean that the underlying premise is established -- here, that an actual consciousness has been created.

No. Sorry; I'd forgotten about that part.

However, I still think that if it walks like a duck and quacks like a duck -to the best of MY knowledge- then it's a duck. I think you do, too; we just disagree about what constitutes "walking" and "quacking".

If we sincerely wanted to reproduce consciousness, I think that there's a lot more that would have to go into it than mimicking its appearance or output.

See, this is where that stumping program would come in handy because that's the inductive point.

Is it possible to write a program that could perfectly mimic your personality?

One of your attributes that would be absolutely essential to any copy is that you learn from your experiences; every post you make draws from a slightly larger body of knowledge (in a sense, every post is from a slightly wiser man). To replicate that (somehow) would require writing a program that could learn from its own first-person experiences, like you do, but whose experiences would be different from yours.

So it's not actually possible to make a perfect mimicry of you, by definition; it's baked into the nature of consciousness.
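The divergence point can be illustrated with a toy sketch (entirely hypothetical code, not a proposal for how to build a mind): two copies of the same "learning" program become distinct the moment their first-person experiences differ.

```python
# Toy illustration: two copies of the same "learning" program diverge as
# soon as their experiences differ, so a perfect mimic of a particular
# person is ruled out by the copying problem itself. Hypothetical code.

class ExperienceAgent:
    def __init__(self):
        self.memory = []              # grows with every interaction

    def respond(self, observation):
        self.memory.append(observation)
        # Every reply draws on the whole first-person history so far.
        return f"({len(self.memory)} experiences) noted: {observation}"

original, copy = ExperienceAgent(), ExperienceAgent()
original.respond("debated zombies on a forum")
copy.respond("watched ducks at a pond")

# Identical programs, different histories: already distinguishable.
print(original.memory != copy.memory)   # → True
```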

The other aspects that come to mind are all somewhat technical but I could lay them out, if you'd like?


Now, we know that the mind is what the brain does and that the brain is made out of neurons, each of which is a tiny clockwork mechanism. We have mathematically quantified exactly how they work and why ....

I truly hesitate to ask this, but you are being ironic, right?  If so, please have a moderator delete my post.

Edited by New Buddha


"The only philosophical content in it is an identification (whether true or false) of relevance; that one can determine "consciousness" from nothing more than dialogue."

No, it isn't. The Turing Test is about imitating human intelligence through dialogue in such a way that it appears intelligent. There's a reason that Turing movie is called "The Imitation Game". Nicky is right in his description. You're really asking the zombie question: whether any entity can behave like a human without mental states. Not "similar to" a human, but the same way, so that we'd never be able to tell whether it is conscious.

More to the point though, focus on what it would mean to behave like a human.

"The primary thing about it is that if something walks like a person, talks like a person and holds philosophical discourse like a person then to say it is not, in fact, a person (against all evidence) is to say that it's a zombie."

You seem to be saying it is metaphysically possible to behave humanly i.e. conceptually and conscious-like without being conscious. My idea about development is that if an entity builds up its concepts, develops like a human does mentally, it is conscious. The zombie thought experiment is entirely focused on behavior, and implicitly ignores or denies that consciousness must be involved to behave like a human. I say it is metaphysically impossible to have a mindless mechanism act exactly as a mindful one. No matter how much you protest, the machine is conscious! The better question is, how can an entity possibly act conscious-like without being conscious, let alone like a human? All you need to look at is their development, and they'd need things like mental states to do that.
 


I truly hesitate to ask this, but you are being ironic, right?  If so, please have a moderator delete my post.

No, he's obviously not being ironic. All he said was that the mind isn't mystical.


I truly hesitate to ask this, but you are being ironic, right?

Not at all. The conclusion that brains can't be conscious wasn't meant seriously, of course, but it follows quite literally from the premise that consciousness is something completely incomparable to any *thing* that behaves predictably. If we say that computers can't be conscious on the grounds that everything they do is a matter of perfectly predictable bit-twiddling, then the same goes for neurons.

I meant it as an argument from absurdity.


I say it is metaphysically impossible to have a mindless mechanism act exactly as a mindful one.

Precisely my point. :thumbsup: If it acts consciously, in every attribute we can think of, it must be conscious.

I think part of it is a severe underestimation of what could pass as "conscious" to a rational judge, too. Eliza is a program that's had mild success, for example, specifically because she's designed to lull her judges into talking about themselves (and forgetting all about her). You can even talk to Eliza for yourself and see whether you'd fall for it.
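Eliza's trick of bouncing the judge's own words back at them can be sketched in a few lines. This is a hypothetical miniature of the technique, not Weizenbaum's actual 1966 script:

```python
# A miniature Eliza-style reflector: swap first- and second-person words
# and return the judge's own statement as a question, leading the judge
# back to talking about themselves. A hypothetical sketch, not the
# original Eliza program.

REFLECTIONS = {
    "i": "you", "me": "you", "my": "your",
    "am": "are", "you": "I", "your": "my",
}

def reflect(statement):
    """Swap first- and second-person words in the judge's statement."""
    words = statement.lower().rstrip(".!?").split()
    return " ".join(REFLECTIONS.get(w, w) for w in words)

def respond(statement):
    # Bounce the statement back as a question about the speaker.
    return f"Why do you say {reflect(statement)}?"

print(respond("I am worried about my future."))
# → Why do you say you are worried about your future?
```

A judge absorbed in their own answers has little attention left over for probing the program, which is the whole design.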

One could say that a program able to form associative and symbolic memories on the go, make inferences, and form and alter its own goals is basically a souped-up Eliza, but something like that would in fact, in its guts, be at least several thousand times bigger and more complex than Eliza (optimistically).

So I agree with all your conclusions except the conclusion that I disagree.


Honestly, I don't think chatbots themselves will ever become conscious, simply because all a chatbot ever interacts with is a single input/output channel; I think that if you took a human brain and reduced it to that, it'd be destroyed, which doesn't bode well for emulating one in that environment.

What seems much more likely to me would be something like what's portrayed in the movie Chappie.

The program in that movie is embedded in a robotic body (presumably teeming with robust I/O interactions) and has to "learn" new skills and information the way a person does (which means it's based on a neural network, which is specifically designed to emulate a brain). At one point in the movie the programmer asks it "how could I have known that you would become - you", which actually seems like the one safe assumption to make about AI: not that it would somehow defy its programming (which isn't what people do, either) but that it would have to be programmed dynamically.


Not at all. The conclusion that brains can't be conscious wasn't meant seriously, of course, but it follows quite literally from the premise that consciousness is something completely incomparable to any *thing* that behaves predictably. If we say that computers can't be conscious on the grounds that everything they do is a matter of perfectly predictable bit-twiddling, then the same goes for neurons.

I meant it as an argument from absurdity.

Maybe you should work on being more overtly absurd.  :whistle:

 

A computer does what the programmer tells it to do, and in this sense it is not "predictable". It is merely carrying out instructions, in an algorithmic manner, as defined by its programmer. You wouldn't describe what your hand-held calculator does as predictable, would you? Or an abacus? A computer program is only as smart as its programmer.

 

Above, you use the term "clockwork", which I take to be equivalent to "predictable", reasoning along the lines of: "If something is clockwork-like, then it is predictable. And since QM is not predictable, consciousness may in some ways be governed by QM."

 

A great deal of what we do cognitively is sub-attentive and is, therefore, not predictable, either to ourselves or to others. And how would I predict what I do before I do it? This is the self-reference paradox, and the concept of predictability can't escape it.

 

One argument against Laplace's Demon is to imagine there are two Demons. Demon A would need to take into account how its prediction would impact Demon B's prediction, which would in turn impact Demon A's prediction, ad infinitum.

 

Predictability is epistemic (and contextual), not ontological.  Un-predictability in QM is epistemic, not ontological.

