Objectivism Online Forum

Zombies and Artificial Minds


It is merely carrying out instructions...

That's an oversimplification.

You could instruct a computer to show a bitmap on the screen (say, for a 2D game) and it would carry out that instruction exactly as written. Or you could instruct it to check some piece of information and show one thing if it's one way, or another if it's different. Or you could tell it to check something and then fetch the bitmap to display FROM one place if it's one way, or from somewhere else if it's another, or to swap out the bitmaps that are stored in those places. If you're brave you can even instruct it to reach inside its own code, change its own instructions and then run them (which is discouraged but REALLY cool).
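A minimal Python sketch of that bitmap-swapping scenario (every name here is a hypothetical stand-in): the drawing instruction itself never changes, but what ends up on screen depends on state that only exists at runtime.

```python
# Hypothetical stand-ins: "bitmaps" holds fake pixel data, "screen" is a list.
bitmaps = {"door_closed": "[#]", "door_open": "[ ]"}

def draw(screen, bitmap):
    screen.append(bitmap)          # stand-in for blitting to a real display

def render_door(screen, door_is_open):
    # The instruction is fixed; which bitmap it fetches is decided at runtime.
    key = "door_open" if door_is_open else "door_closed"
    draw(screen, bitmaps[key])

screen = []
render_door(screen, door_is_open=False)
bitmaps["door_closed"] = "[X]"     # swap out the stored bitmap mid-run
render_door(screen, door_is_open=False)
print(screen)  # ['[#]', '[X]'] -- same instruction, two different results
```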

By the time you've got a program complex enough to do something as simple as playing chess, it's checking, manipulating and rechecking so many variables, running through so many contingencies and making so many on-the-fly adjustments that, while all of it can still be traced back to its instructions, saying that it's only performing its instructions (which brings to mind an actor with a predetermined script) is a misrepresentation; it omits the fact that the execution of most of those "instructions" depends on the information the program has access to at runtime.

You wouldn't describe what your hand-held calculator does as predictable, would you?

... Why not?

I'd say that calculators are incredibly predictable; that's actually what makes them so valuable.

This is the self-reference paradox - and the concept of predictability can't escape this paradox.

Yes, I think you're absolutely right.

Reasoning along the lines that, "If something is clockwork-like, then it is predictable. And since QM is not predictable, consciousness may in some ways be governed by QM."

That would be silly, wouldn't it? :whistle:

Edit:

That wouldn't be valid because, if it were true and the ontological open-endedness in QM gave rise to a similar open-endedness in brains (despite all the assurances to the contrary we've heard from so many quantum physicists), then it would do the same for everything else and we would be living in a fundamentally arbitrary universe.

To be clear, I consequently consider that proposition itself to be absurd.

Edited by Harrison Danneskjold

@Harrison #6  (I gotta learn how to multi-quote....)

 

"it omits the fact that the execution of most of those "instructions" depends on the information it has access to at runtime."

 

But the same program, run with the same information "at runtime", will always produce the same result.  Otherwise, what good is the program?

 

 

 

"I'd say that calculators are incredibly predictable; that's actually what makes them so valuable."

 

Predictability implies a factor of the "unknown".  Are you surprised to discover that each time you add 1+1 that it equals 2 ?

 

Add Edit:  Complexity is epistemic, not ontological.

Edited by New Buddha

I gotta learn how to multi-quote.

I just copy it to the clipboard so that I can paste it as many times as I like, then chop each copy down to the relevant section.

But the same program, run with same information "at runtime", will always produce the same result.

Yes, unless it stores information between executions (the way "settings" are usually done).
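For instance, a minimal sketch (the file name is hypothetical) of how stored "settings" make the same program, with the same runtime input, produce different results across executions:

```python
# Same program, same input, different output -- because state persists
# between runs in a settings file (a hypothetical one, for illustration).
import json, os

SETTINGS = "settings.json"

def run(x):
    count = 0
    if os.path.exists(SETTINGS):
        with open(SETTINGS) as f:
            count = json.load(f)["runs"]
    count += 1
    with open(SETTINGS, "w") as f:
        json.dump({"runs": count}, f)
    return x + count   # output depends on stored state, not just on x

print(run(10))  # 11 on a first execution
print(run(10))  # 12 on the second -- same input, different result
```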

I fail to see the significance, though.

Predictability implies a factor of the "unknown". Are you surprised to discover that each time you add 1+1 that it equals 2 ?

Are you suggesting an absolutely certain form of knowledge (a priori knowledge)?

Complexity is epistemic, not ontological.

I think it's a little of both; an ontological thing, as viewed a certain way.

Granted, I see scientists (particularly computer scientists) abusing the Hell out of "complexity" on a daily basis; calling it exclusively ontological, but it'd be a mistake to accept the flipside of it and say that it's completely epistemic, either.

Complexity is neither "in here" nor "out there": it's Objective.

Edited by Harrison Danneskjold

"Predictability implies a factor of the "unknown". Are you surprised to discover that each time you add 1+1 that it equals 2?" "Are you suggesting an absolutely certain form of knowledge (a priori knowledge)?" You two are both confusing me here. "Predictability" doesn't have any such "implication of the 'unknown'" as far as I've ever seen and is pretty much the opposite of surprising. One doesn't need "a prior knowledge" either for certainty, especially not for something like basic addition.


"Predictability" doesn't have any such "implication of the 'unknown'" as far as I've ever seen and is pretty much the opposite of surprising.

 

Exactly.  If addition isn't "predictable" because there's no element of surprise at all, then there must be some further state of knowledge: more predictable than predictable, so that the degree of certainty one has in it can't even be quantified in that way.

 

That's why I ask about a priori knowledge.  It's just a wild guess.


My position is that a computer program (Structural Engineering, Meteorological Forecasting, Financial Investment, Word Processing, Artificial Intelligence, etc.) is merely following a script, and that the complexity of the program does not change this fact. Harrison says that I'm oversimplifying things, and that the complexity of a program does somehow change things in a fundamental way.  At least this is what I understand him to be saying.

 

Some have argued that consciousness emerged once the mind was sufficiently "complex".  I say that this is an error stemming from believing that simple vs. complex is ontological, rather than an epistemic scale of measurement contextual to some defined unit of measure.  This argument has also been extended to Artificial Intelligence by some people: that AI will be possible once sufficiently complex hardware/software is created.


As far as complexity goes with consciousness, I think a certain amount is necessary, but not sufficient. What we have right now in mechanical things aiming toward simulating conscious things is too simple to make a real consciousness. However, the question is if any of those things have the right basic approach such that expanding upon them enough could ever succeed in making a real consciousness.


Bluecherry:

 

Luckily we always have a steady supply of complex systems which exhibit consciousness, and which we can observe in terms of both structure and function, so that one day we can unravel the puzzle of how consciousness arises: under what conditions, with what structures, what arrangements and organizations of matter (and energy), and in what contexts. The problem is the need for technology (as yet undeveloped) sufficient to observe it in enough detail and completeness... without harming a working version of the complex system.

Edited by StrictlyLogical

HD:

 

Since you ARE an absolute determinist, from electrons, to shoes, to sealing wax, and ship captains, you are in complete agreement with the idea that consciousness is exactly commensurable with a clockwork, because everything IS a clockwork to you?  Correct?

 

 

As for the Zombie of the original post, I can't help thinking that putting forth the "logical" possibility (whatever rationalistic or Kantian nonsense that is) of a Zombie so-called "indistinguishable" from a person (implicitly, physically indistinguishable) and yet not conscious, as somehow proving that physicalism is false, is a perfect example of the fallacy of "begging the question".  Perhaps I am missing something, but I think the entire idea of "logical possibility" is likely the root error.

Edited by StrictlyLogical

Since you ARE an absolute determinist, from electrons, to shoes, to sealing wax, and ship captains, you are in complete agreement with the idea that consciousness is exactly commensurable with a clockwork, because everything IS a clockwork to you? Correct?

In a nutshell. I do think there is such a thing as "choice" (and meaningfully) but not that it actually implies metaphysical alternatives.

As for the zombie argument, it absolutely is question-begging. What I find more interesting, though, is that it's also an explicit endorsement of Kant's phenomenal-noumenal distinction: that what metaphysically is isn't really related to what we see (or, at best, is only related in some loose way).

The same has been applied to metaphysical alternatives (since we're directly aware of them) and there may be something to that; I don't know yet.

I have determined that it'll take a long and dedicated study to sort that out properly, which really should be the prerogative of the professional philosophers (who don't have, like, day jobs). However, whenever we do figure that out properly it'll open up whole new fields of inquiry, comparable to the ITOE.

My primary point, for now, was about how the zombie argument relates to the Turing test.

Harrison says that I'm oversimplifying things, and that the complexity of a program does somehow change things in a fundamental way.

Exactly.

Edited by Harrison Danneskjold

Here's the thing:

If a brain's ability to *do* conscious awareness isn't a matter of arrangement (which means complexity) then it's a matter of substance.

If it's a matter of the *stuff* brains are made out of then there must be some fundamental kind of stuff that thinks (an atom of consciousness).

You'll never find one.


Exactly.

So, if you build a complex enough one of these, it would be Intelligent.

 

Links for Geeks.  1, 2

 

From Link 1 above:

 

"When approaching visitors in the Analog Computers gallery, I usually ask them about their background in computing. I have found that people who are professional software developers have a more difficult time understanding the function, application, and relevance of analog computation. Programmers are taught to think about how to express algorithms in instructions, which is simply not how analog computers work. But when I talked to a physicist or an industrial mechanic, I discovered that they used approaches in their training or daily work that make the operation of analog computers very natural to them."

Edited by New Buddha

Here's the thing:

If a brain's ability to *do* conscious awareness isn't a matter of arrangement (which means complexity) then it's a matter of substance.

If it's a matter of the *stuff* brains are made out of then there must be some fundamental kind of stuff that thinks (an atom of consciousness).

You'll never find one.

There is an alternate approach to understanding artificial intelligence.  Here are a few links:

 

Founder of iRobot and the Roomba vacuum, Rodney Brooks.

 

Mark Tilden, one of the pioneers of analog  robotics.

 

Do I think that this approach will produce something indistinguishable from a human?  No. But the two "Links for Geeks" above, in Post #37, are much closer to operating as a human mind does than a Universal Turing Machine.

 

Two quotes from Rodney Brooks on The Edge:

 

"But it is not always helpful to confuse computational approximations with computational theories of a natural phenomenon. For instance, consider a classical model of a single planet orbiting a sun. There is a gravitational model, and the behavior of the two bodies can easily be explained as the solution to a simple differential equation, describing forces and accelerations, and their relationships. The equations can be extended for relativity and for multiple planets and instantaneously those equations describe what a physicist would say is happening in the system. Unfortunately the equations become insoluble by this point, and the best we can do to understand the long-term behavior of the system is to use computation, where time is cut into slices and a digital approximation to the continuous description of the local behaviors is used to run a long term simulation."

 

"The computational model of neurons of the last sixty plus years excluded the need to understand the role of glial cells in the behavior of the brain, or the diffusion of small molecules effecting nearby neurons, or hormones as ways that different parts of neural systems effect each other, or the continuous generation of new neurons, or countless other things we have not yet thought of. They did not fit within the computational metaphor, so for many they might as well not exist."

 

Regarding the Tilden videos, there is NO SOFTWARE running the robots, i.e. NO ALGORITHMS.

Edited by New Buddha

Maybe the computation analogy isn't really helpful. I'm open to hearing alternatives.

Regarding the Tilden videos, there is NO SOFTWARE running the robots, i.e. NO ALGORITHMS.

There's no such thing as software in any other machine, except as certain arrangements of hardware, anyway. If it's all hardwired in then it has exactly one program (the only one it'll ever have).


This is from Jeff Hawkins' book, On Intelligence:

 

There is a largely ignored problem with this [computational] brain-as-computer analogy.  Neurons are quite slow compared to the transistors in a computer.  A neuron collects input from its synapses, and combines these inputs together to decide when to output a spike to other neurons.  A typical neuron can do this and reset itself in about five milliseconds (5 ms), or around two hundred times per second.  This may seem fast, but a modern silicon-based computer can do one billion operations in a second.  This means a basic computer operation is five million times faster than the basic operation in your brain....

 

....I always felt this argument [computation] was a fallacy, and a simple thought experiment shows why.  It is called the "one hundred-step rule."  A human can perform significant tasks in much less time than a second.  For example, I could show you a photograph and ask you to determine if there is a cat in the image.  Your job would be to push a button if there is a cat, but not if you see a bear or a warthog or a turnip.  This task is difficult or impossible for a computer to perform today, yet a human can do it reliably in half a second or less.  But neurons are slow, so in that half a second the information entering your brain can only traverse a chain of one hundred neurons long.  That is, the brain "computes" solutions to problems like this one in one hundred steps or fewer, regardless of how many total neurons might be involved.
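Hawkins's arithmetic checks out; a quick worked version in Python:

```python
# The numbers from the quote, worked through.
neuron_cycle = 0.005                          # 5 ms to fire and reset
neuron_ops_per_sec = 1 / neuron_cycle         # 200 operations per second
cpu_ops_per_sec = 1e9                         # ~1 billion, per the quote

print(cpu_ops_per_sec / neuron_ops_per_sec)   # 5,000,000 -- "five million times faster"
print(0.5 / neuron_cycle)                     # 100 -- sequential neuron steps in half a second
```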

 

 

The brain is not computational.  It does not have a program processing information (i.e. sensation) as it comes in.  Rather, the sense organs (hardware) are attuned to the incoming information (sound waves, light waves, taste molecules, etc.) and only certain neurons will fire in the presence of certain sensory input.  This is known as Labeled Line Theory.

 

The retina itself, not the visual cortex, finds "edges" via lateral inhibition and, in doing so, seemingly "forms" objects.

 

For a computer to "find" edges, it requires a program - and that program is based on some abstract, statistical algorithm that the programmer feels will accomplish the job.
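To make the contrast concrete, here is a minimal, purely illustrative sketch of the kind of edge-finding a program has to do: each point's neighbors are subtracted from it (a digital cousin of lateral inhibition), so uniform regions cancel out and boundaries stand out.

```python
# Edge-finding by neighbor subtraction, on a made-up 1D "image":
# a bright bar on a dark field.
signal = [0, 0, 0, 10, 10, 10, 0, 0]

edges = []
for i in range(1, len(signal) - 1):
    neighbors = (signal[i - 1] + signal[i + 1]) / 2
    edges.append(signal[i] - neighbors)   # uniform regions cancel to zero

print(edges)  # [0.0, -5.0, 5.0, 0.0, 5.0, -5.0] -- nonzero only at the boundaries
```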

 

This is, in part, why a brain consumes the equivalent of a 14 W light bulb and the computer I'm working on has a 750 W power supply.

 

The transduction of incoming sensory information from one form of energy (mechanical, electromagnetic) into bioelectric energy is as automatic as a microphone recording music and storing it on magnetic tape.  For those of us old enough to remember, this is called an analog recording.

 

The sensory information in the brain is not "kind of" like what's "out there" according to the understanding of a computer programmer and his ability to write code.  The sensory information in the brain is an Analog of what's out there.

 

Edit Note:  Made some clarifying edits.

Edited by New Buddha

The brain is not computational. It does not have a program processing information (i.e. sensation) as it comes in. Rather, the sense organs (hardware) are attuned to the incoming information (sound waves, light waves, taste molecules, etc.) and only certain neurons will fire in the presence of certain sensory input. This is known as Labeled Line Theory (https://www.youtube.com/watch?v=cQBztCcbvRI).

The retina itself, not the visual cortex, finds "edges" via lateral inhibition and, in doing so, seemingly "forms" objects.

For a computer to "find" edges, it requires a program - and that program is based on some abstract, statistical algorithm that the programmer feels will accomplish the job.

Yeah, but since an 'algorithm' is just a series of actions, the brain does sort of run an algorithm; it's built into the neurons themselves. The robots you mentioned run the program that's built into their hardware. A grandfather clock runs a program which consists of interacting gears and springs and pendulums. A sled runs the program dictated by gravity and friction. Masses run the program of Universal Gravitation.

The concept of a "program" (or "algorithm") is so loose that you can use it to refer to literally anything that happens.

That's another problem with talking about computers obeying 'instructions': the logical extension is that an engineer "instructs" his robot to do something by arranging its hardware a certain way, or that gravity "instructs" solid objects to fall. Reasoning along lines like that, you absolutely will introduce all sorts of teleological implications about any *thing* that acts in any way whatsoever.

What's useful about the Turing test is that it's all about "input" and "output":

What does it do (OUT) under these conditions (IN)?

How can you possibly imagine that you can know what I believe about consciousness? Well, you know that words refer to ideas and that these words didn't -poof!- themselves out of thin air; that someone had to string them together in order to convey some meaning (which consists of certain beliefs about consciousness) and therefore that the speaker probably believes the content: that I believe what I've been saying. You infer things about my consciousness from my observable actions.

Why would we use any other standard for making inferences about any nonhuman consciousness?

---

I suspect there really is something off about the computing metaphor (and I suspect it has something to do with infinitely elastic computing ideas) but in defending AI and the Turing test, here, what I'm primarily defending is the epistemological principle by which any inferences about any consciousnesses should be made.

Edited by Harrison Danneskjold

When I use the term algorithm, I do so in an unconventional way - arguably in a way that is as unconventional as Objectivist epistemology is from Rationalist or Materialist epistemology.

 

An algorithm, at its most basic, is this:

 

Which problem is easier to solve?

 

onemillionsixhundredthousandfivehundredseventythreeplussixtyfourthousandminusthreehundredequals?

 

or

 

1,600,573 + 64,000 - 300 =

 

or

  1,600,573
+    64,000
-----------
          =
-       300
-----------
          = ?

 

An algorithm is a way of breaking down a big problem into small problems, such that we can solve them perceptually.  I'm assuming that the 3rd example is the easiest.  Observe, via introspection, how you solve it, and you can probably see what I'm getting at.
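The same breakdown, sketched in Python (a hypothetical helper, just to make the steps explicit): the big sum becomes a chain of one-digit sums, each solvable at a glance.

```python
def column_add(a: str, b: str) -> str:
    """Add two numbers the way the third example does: column by column."""
    a, b = a.replace(",", ""), b.replace(",", "")
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)
    carry, digits = 0, []
    for da, db in zip(reversed(a), reversed(b)):  # rightmost column first
        total = int(da) + int(db) + carry         # a one-digit problem
        digits.append(str(total % 10))
        carry = total // 10
    if carry:
        digits.append(str(carry))
    return "".join(reversed(digits))

print(column_add("1,600,573", "64,000"))  # '1664573'; subtract 300 the same way to finish
```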

 

A separate issue is the limits of modeling, which Rodney Brooks touched on in Post #38.

 

I don't know if you read Harriman's The Logical Leap, but Harriman, IMHO, was skirting a lot of very important philosophical issues regarding algorithms and computational modeling.

 

 

 

 

This is from Wiki regarding the 2-, 3- and N-body gravitational problem, a la Newton's orbital mechanics, which played a very prominent role in Harriman's book:

 

The Sun wobbles as it rotates around the galactic center, dragging the Solar System and Earth along with it. What mathematician Kepler did in arriving at his three famous equations was curve-fit the apparent motions of the planets using Tycho Brahe's data, and not curve-fitting their true circular motions about the Sun (see Figure). Both Robert Hooke and Newton were well aware that Newton's Law of Universal Gravitation did not hold for the forces associated with elliptical orbits.[10] In fact, Newton's Universal Law does not account for the orbit of Mercury, the asteroid belt's gravitational behavior, or Saturn's rings.[24] Newton stated (in the 11th Section of the Principia) that the main reason, however, for failing to predict the forces for elliptical orbits was that his math model was for a body confined to a situation that hardly existed in the real world, namely, the motions of bodies attracted toward an unmoving center. Some present physics and astronomy textbooks don't emphasize the negative significance of Newton's assumption and end up teaching that his math model is in effect reality. It is to be understood that the classical two-body problem solution above is a mathematical idealization. See also Kepler's first law of planetary motion.

Some modern writers have criticized Newton's fixed Sun as emblematic of a school of reductive thought - see Truesdell's Essays in the History of Mechanics referenced below. An aside: Newtonian physics doesn't include (among other things) relative motion and may be the root of the reason Newton "fixed" the Sun.[25][26]  

 

These very same limitations that apply to mechanics apply to any model (which includes AI programs that "model" human consciousness).  And Einstein did not escape these limitations any more than Newton did, and Einstein's successors won't either.

 

The epistemic nature of Modeling is a much bigger topic, and definitely should be its own post; I hope to write one some day.

 

 

Add Edit:  And while this should in no way influence what you believe in an "Argument from Authority" way, Jeff Hawkins and Rodney Brooks are very rich and very successful businessmen who put their money where their mouths are.  I would not be so dismissive of them.

 

Add Edit:  When I am space-planning a very large office layout with waiting areas, kitchenettes, conference rooms, multiple work stations, offices that get windows, offices that don't get windows, etc., I do so via sketches.  Sketches are algorithmic means of breaking down a very large problem into a perceptually manageable problem so that I can solve it.

Edited by New Buddha

These very same limitations that apply to mechanics apply to any model (which includes AI programs that "model" human consciousness).

Of course. In fact, the -gravity- of the issue goes much further than you've suggested.

I suspect that everything we know may be an approximation (or a model or a "lie we tell to children") which may or may not correspond perfectly to reality; perhaps even on the perceptual level (which would account for the 'mistaken perception' magicians attempt to induce). I mentioned this here but I haven't found the time to sort it out properly.

And while this should in no way influence what you believe in an "Argument from Authority" way, Jeff Hawkins and Rodney Brooks are very rich and very successful businessmen who put their money where their mouths are. I would not be so dismissive of them.

I don't mean to be dismissive of anybody. I've used up my internet allowance for the month and consequently been reduced to a crawl. OO, Google and Wikipedia work just fine, if I'm patient, but I won't be able to watch the videos you posted (or even load the website about the cortically-inspired program) until the 15th.

I have managed to get a copy of The Logical Leap, though. :thumbsup:


Of course. In fact, the -gravity- of the issue goes much further than you've suggested.

I suspect that everything we know may be an approximation (or a model or a "lie we tell to children") which may or may not correspond perfectly to reality; perhaps even on the perceptual level (which would account for the 'mistaken perception' magicians attempt to induce). I mentioned this here but I haven't found the time to sort it out properly.

Models are tools, nothing more.  And there is more than one way to solve a problem.  The Greeks built the Parthenon, but hadn't the slightest understanding of how to mathematize physics.  Structural Engineering models are constantly being invented.  This will still be the case 10,000 years from now.

 

If a noise wakes you up in the middle of the night, and you decide to go down stairs and investigate, and all you have available to you for protection is a golf club or a baseball bat -- do you not use one of them because they are not ONTOLOGICALLY a weapon?

 

Edit:  I hope that you can distinguish between a visual neuron, with a certain impedance, responding to only a certain wavelength of light, and a computer programmer modeling this interaction with pi being an irrational number, rounding errors, etc.
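For instance, the digital half of that contrast in three lines of Python: the model never holds pi itself, only a finite approximation, and even ordinary decimal quantities carry rounding error.

```python
import math
print(math.pi)            # 3.141592653589793 -- already a truncation of pi
print(0.1 + 0.2 == 0.3)   # False: binary floats can't represent these decimals exactly
print(sum([0.1] * 10))    # 0.9999999999999999, not 1.0 -- rounding error accumulating
```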

 

Edit 2:  The thing about Architecture is that you learn that plus or minus a few inches is usually good enough.  Precision for the sake of precision is the hallmark of a mediocre mind.

Edited by New Buddha

If a noise wakes you up in the middle of the night, and you decide to go down stairs and investigate, and all you have available to you for protection is a golf club or a baseball bat -- do you not use one of them because they are not ONTOLOGICALLY a weapon?

Wikipedia defines Ontology as "the philosophical study of the nature of being, becoming, existence, or reality, as well as the basic categories of being and their relations".

What is an ontological weapon?

I hope that you can distinguish between a visual neuron, with a certain impedance, responding to only a certain wavelength of light, and a computer programmer modeling this interaction with pi being an irrational number, rounding errors, etc.

What's the significance of that statement?

Are you sneering at the computational metaphor? Or the distinction between the map and the territory? Are you highlighting how difficult it is to get a computer to model continuous entities?

What's it for?

And then there's this:

Precision for the sake of precision is the hallmark of a mediocre mind.

I'm sorely tempted to make some sort of snide remark here, but I suspect that you posted it without thinking about it.

I'll just say that mental precision is important, for its own sake. Some of us have truly struggled to gain and to keep it. If you haven't then you should relish your gift but it doesn't diminish the importance of being precise.

And when you spit on precision-as-such, those who have struggled for it (like me) might find it grossly insulting.

Moving on.


Harrison, the Precision remark applied to Models only - it has nothing to do with you.  It means that when modeling a system, the level of precision that the modeler/engineer/scientist, etc. chooses is related to the goal you are trying to accomplish.  Nothing more.  I thought that's what the link to your prior post on Mathematics was addressing.


  • 7 months later...
On 8/4/2015 at 9:06 PM, New Buddha said:

A computer ... merely [carries] out instructions, in an algorithmic manner, as defined by its programmer.

 

The program AlphaGo recently beat the best human player on Earth.

 

It was implemented as an artificial Neural Network, meaning that its "instructions" did not consist of 'do X', or even 'if Y then do X', but of mathematical representations of neural activity (which involve a lot of calculus and other sorts of witchcraft).

This neural net, which started out as an atrocious Go player, was given extensive training (in which it modified its own structure through a lengthy process of trial and error) until it was demonstrated to be the best Go-playing-thing on this planet.
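A minimal sketch of that trial-and-error loop in Python, with made-up data (AlphaGo's networks are vastly larger and train on self-play, but the principle of self-adjustment is the same): a single artificial neuron nudging its own weights to shrink its error.

```python
# One artificial neuron learning by trial and error: predict, measure the
# error, adjust the weights slightly, repeat. Data and targets are made up
# for illustration (weights [1.0, 2.0] fit them exactly).
import random

weights = [random.uniform(-1, 1) for _ in range(2)]

def predict(x):
    return sum(w * xi for w, xi in zip(weights, x))

data = [([1.0, 2.0], 5.0), ([2.0, 1.0], 4.0)]

for _ in range(1000):                 # many trials...
    for x, target in data:
        error = predict(x) - target   # ...each followed by an error measurement
        for i in range(len(weights)): # ...and a small self-adjustment
            weights[i] -= 0.05 * error * x[i]

print([round(w, 2) for w in weights])  # converges near [1.0, 2.0]
```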

 

This is an accessible explanation (for any and all laymen) as to why your assessment is an oversimplification.


Deep Blue (the first computer to beat the best Chess player in the world), if reconfigured to play Go, would not be able to do so; it would be overwhelmed by the number of possible actions (of which Go allows orders of magnitude more than Chess) and freeze. Analogously, if AlphaGo were reconfigured to mimic human conversation (like Eliza), it would also freeze under the sheer number of possibilities.
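Rough arithmetic behind "orders of magnitude", using the commonly cited average branching factors (roughly 35 legal moves per Chess position, roughly 250 per Go position):

```python
# Game-tree sizes after ten moves, using commonly cited average
# branching factors for each game.
chess, go = 35, 250
depth = 10

print(chess ** depth)                # ~2.8e15 Chess positions
print(go ** depth)                   # ~9.5e23 Go positions
print(go ** depth / chess ** depth)  # ~3.5e8 times more to search
```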

 

Human language allows for an infinite number of possible utterances (even excluding gibberish). Furthermore, each statement refers to at least two *things* which one must know something about, in order to say anything meaningful.

 

A computer which could only recite prescripted statements (like every existing chatbot) would eventually repeat itself enough to demonstrate the nature of its performance. Adding more responses could prolong the illusion but not indefinitely; no chatbot can pass as a human forever, by the very nature of their design.
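A minimal sketch of such a chatbot in Python: a finite table of canned responses, so a long enough conversation must eventually loop.

```python
# A prescripted chatbot reduced to its essence: a finite list of canned
# replies (made up here), cycled through regardless of what's said to it.
import itertools

canned = ["Tell me more.", "How does that make you feel?", "Interesting."]
replies = itertools.cycle(canned)    # wraps around once the list is exhausted

for _ in range(5):
    print(next(replies))             # the fourth reply repeats the first
```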

A computer which understood the English language and could generate its own statements, but had no 'awareness' of anything beyond the input and output of text, would demonstrate that very lack of awareness through syntactically valid nonsense like "green dreams sleep furiously". It could make novel and unique statements (like us, and unlike chatbots) but it would remain incapable of saying anything meaningful.

 

An algorithm that could formulate a post like this, by the same methods I've used to formulate it, would necessarily have to be self-aware. Nothing less would work.

That is the point of the Turing Test.

Edited by Harrison Danneskjold
