Objectivism Online Forum

Ethical trap: robot paralyzed by choice of who to save

dream_weaver


Ethical trap: robot paralyzed by choice of who to save

In an experiment, Winfield and his colleagues programmed a robot to prevent other automatons – acting as proxies for humans – from falling into a hole. This is a simplified version of Isaac Asimov's fictional First Law of Robotics – a robot must not allow a human being to come to harm.

 

At first, the robot was successful in its task. As a human proxy moved towards the hole, the robot rushed in to push it out of the path of danger. But when the team added a second human proxy rolling toward the hole at the same time, the robot was forced to choose. Sometimes, it managed to save one human while letting the other perish; a few times it even managed to save both. But in 14 out of 33 trials, the robot wasted so much time fretting over its decision that both humans fell into the hole. The work was presented on 2 September at the Towards Autonomous Robotic Systems meeting in Birmingham, UK.

 

Winfield describes his robot as an "ethical zombie" that has no choice but to behave as it does. Though it may save others according to a programmed code of conduct, it doesn't understand the reasoning behind its actions. Winfield admits he once thought it was not possible for a robot to make ethical choices for itself. Today, he says, "my answer is: I have no idea".

 

The reasoning behind its actions is the programmed code of conduct. For a robot to perform the process of understanding, the programmed code would need a subroutine for understanding, i.e., it would need an epistemological subroutine.
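
As a side note, the failure mode the article reports can be reproduced in a few lines. Below is a minimal sketch in Python - not Winfield's actual code, and with entirely made-up numbers - of how a rule that re-plans every tick toward whichever proxy currently looks most at risk can leave the robot pacing in the middle while both are lost.

# Toy re-creation of the dithering reported above (hypothetical numbers,
# not Winfield's code): two proxies drift toward holes on opposite sides,
# and the robot re-plans every tick toward whichever looks most at risk.

def run_greedy(robot=0.0, a=-6.0, b=6.0, hole_a=-10.0, hole_b=10.0, speed=1.5):
    saved = set()
    for t in range(20):
        # The proxies take turns drifting toward their holes.
        if t % 2 == 0:
            a -= 1.0
        else:
            b += 1.0
        # Greedy re-planning: chase whichever proxy is closest to falling.
        risk_a, risk_b = a - hole_a, hole_b - b   # distance left; smaller = more urgent
        goal = a if risk_a < risk_b else b
        robot += speed if goal > robot else -speed
        if abs(robot - a) < 1.0:
            saved.add("A")
        if abs(robot - b) < 1.0:
            saved.add("B")
        if a <= hole_a or b >= hole_b:
            break   # someone fell in
    return saved

print(run_greedy())   # set() -- the robot oscillated and saved no one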


It's wrong to use the word moral/ethical here, because ethics presupposes free will. Free will is not a faulty computer program that doesn't handle its input properly (and that's what this is; I've written plenty of them, so I recognize one when I see it).

All he needs to do is create better criteria for the robot to base its choice on: when to try to save just one person (whichever is easiest, or whatever other criterion he wishes to use), and when to go for both.
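
A sketch of that fix, in the same toy world as the sketch in the opening post (again, hypothetical numbers, not anyone's real code): decide once by an explicit criterion, commit, and only then go for the second proxy.

# The same toy world, with the fix sketched in: pick a criterion once
# (here, whichever proxy is easiest to reach), commit to it, and only
# go for the second proxy after the first save is done.

def run_committed(robot=0.0, a=-6.0, b=6.0, hole_a=-10.0, hole_b=10.0, speed=1.5):
    saved = set()
    goal_name = "A" if abs(a - robot) <= abs(b - robot) else "B"   # decide ONCE
    for t in range(20):
        if t % 2 == 0:
            a -= 1.0
        else:
            b += 1.0
        goal = a if goal_name == "A" else b
        robot += speed if goal > robot else -speed
        if abs(robot - a) < 1.0 and "A" not in saved:
            saved.add("A")
            goal_name = "B"   # first save done; now go for the other
        if abs(robot - b) < 1.0 and "B" not in saved:
            saved.add("B")
            goal_name = "A"
        # A saved proxy is out of danger; only an unsaved fall ends the run.
        if (a <= hole_a and "A" not in saved) or (b >= hole_b and "B" not in saved):
            break
    return saved

print(run_committed())   # {'A'} -- one is reliably saved; B falls in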

Edited by Nicky

Let's not forget that ethics presupposes a living being, with consciousness and volition, purpose, and awareness of the consequences.  So the robot is out on all counts.  Ascribing "choice" to the robot's actions as described above is absurd.

 

Edited by A is A

The "decision" being "made" by the robot is just an extension of the programmer's rules.  If I set my clock radio to wake me up at 7:00 am, does it "chose" to do so?  A RNG or as Nicky say's, faulty programming, does not equal intelligence.  Even an epistemological subroutine would not cause something to be conscious or make decisions.

 

Intelligence is not algorithmic.


Good points, all. Yet, we live in a world where all these metaphorical usages of language are heard and read about on almost a daily basis.

"Hang on for a second. The computer is still thinking about the answer."
"The dog gone computer picked a heck of a time to decide to go on break."

Or the cited article, which appears on a website offering all sorts of insights considered by many to be scientific.

 

When these facets are pointed out, they are often countered or sloughed off with something along the lines that you are being too literal. Pointing something like this out becomes little more than separating the wheat from the chaff.

Edited by dream_weaver

Intelligence is not algorithmic.

What is it, then? I'm not sure if you're saying intelligence itself is absent any method at the very bottom (i.e. there is no innate computational or algorithmic method anywhere), or if you just mean to say intelligence is more than merely deductive methods.

Edited by Eiuol

What is it, then? I'm not sure if you're saying intelligence itself is absent any method at the very bottom (i.e. there is no innate computational or algorithmic method anywhere), or if you just mean to say intelligence is more than merely deductive methods.

I know that you are aware of Rand's use of the "crow epistemology" example.

 

We use logic, computer code, oscilloscopes, algorithms, math, syllogisms, abstractions, rulers, scales, telescopes, calculators, words, sentences, propositions, statistical mechanics, symbolic logic, etc. to reduce complex things down to a perceptible level at which we are able to apprehend them.

 

It's an error to mistake the means for the end.

 

Our hunter/gatherer ancestors lived for tens of thousands of years without any of the above.


It could be argued that the use of metaphor is a hallmark of intelligence.

The metaphor regarding the 'ethical trap' is maintained by analogy to the human proxies that the robot is 'acting' to save from their demise. While it may help to understand the article as it is written, it does so at the expense of precision of thought. Winfield conveniently admits transitioning from thinking it was not possible for a robot to make ethical choices for itself, to having no idea today. This is clearly not a transition toward greater precision and clarity on the matter.


Good points, all. Yet, we live in a world where all these metaphorical usages of language are heard and read about on almost a daily basis.

"Hang on for a second. The computer is still thinking about the answer."

"The dog gone computer picked a heck of a time to decide to go on break."

Or the cited article, which appears on a website offering all sorts of insights considered by many to be scientific.

 

When these facets are pointed out, they are often countered or sloughed off with something along the lines that you are being too literal. Pointing something like this out becomes little more than separating the wheat from the chaff.

Well, if you want to discuss philosophy, you'd better use concepts with precise meaning or you'll wind up totally confused.


Let's not forget that ethics presupposes a living being, with consciousness and volition, purpose, and awareness of the consequences.

All true, as long as you allow for the possibility of artificial life forms who fit that description.

So the robot is out on all counts. Ascribing "choice" to the robot's actions as described above is absurd.

True again, as long as you mean this specific robot, not all potential forms of artificial intelligence.

Intelligence is not algorithmic.

What is it, then?

I know that you are aware of Rand's use of the "crow epistemology" example.

 

We use logic, computer code, oscilloscopes, algorithms, math, syllogisms, abstractions, rulers, scales, telescopes, calculators, words, sentences, propositions, statistical mechanics, symbolic logic, etc. to reduce complex things down to a perceptible level at which we are able to apprehend them.

 

It's an error to mistake the means for the end.

 

Our hunter/gatherer ancestors lived for tens of thousands of years without any of the above.

I'm curious to hear your answer to Louie's question as well.

It could be argued that the use of metaphor is a hallmark of intelligence.

Are you arguing that? Or do you agree with Ayn Rand that conceptualization (of a certain complexity) is the defining characteristic of intelligence?

And are you arguing that conceptualization cannot be achieved programmatically?

Edited by Nicky

I'm curious to hear your answer to Louie's question as well.

Are you arguing that? Or do you agree with Ayn Rand that conceptualization (of a certain complexity) is the defining characteristic of intelligence?

And are you arguing that conceptualization cannot be achieved programmatically?

Bear with me, because these are ideas I'm working with....  I've been giving a good deal of thought lately to traditional mathematical foundationalism, how it can relate to known perceptual limits, and its relationship to Objectivist Epistemology.

 

If I presented you with the following:  6872265698, you would have trouble "grasping it".

 

But if I broke it down - algorithmically - as follows:  (687) 226-5698 you could easily remember it.  Because instead of trying to remember 10 discrete "things" you now only need to remember a "phone number" which is always composed of "three" things.  (And if you live in Area Code 687 and frequently placed calls, it will be even easier.)

 

Louie posited that an "epistemic subroutine" would need to be written to achieve understanding.  However, computer code is just a formal set of rules for symbolic manipulation that exists independent of what is actually being manipulated.  If the number above had been (564) 982-6223, you would just as easily have been able to memorize it by employing the same method (algorithm).  The algorithmic rule of breaking down large numbers into smaller sets of numbers exists independent of the actual numbers:  a+b=c is true for 1+2=3 as well as 1,448+16,798=18,246.  The algorithm is a METHOD of breaking down information to a perceptible level at which we can apprehend and manipulate it.  It is not knowledge itself.
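
To make that concrete, here is the chunking rule as a few lines of Python (the function name is made up for illustration). The same rule works for any ten digits, which is the sense in which the method exists independent of what it manipulates:

# The chunking algorithm described above: one rule, any ten digits.

def chunk_phone(digits: str) -> str:
    assert len(digits) == 10 and digits.isdigit()
    return f"({digits[:3]}) {digits[3:6]}-{digits[6:]}"

print(chunk_phone("6872265698"))   # (687) 226-5698
print(chunk_phone("5649826223"))   # (564) 982-6223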


The algorithm is a METHOD of breaking down information to a perceptible level at which we can apprehend and manipulate it.  It is not knowledge itself.

Sure, but not all methods employed are conscious like that. Intelligence requires some innate mechanisms separate from consciously developed methods; otherwise we'd be asking how the earliest methods were developed without innate methods. As you said, methods aren't knowledge itself, so it's no issue to suppose innate mechanisms exist. This robot is at least an example of pushing pseudo-moral reasoning as far as possible before conceptualization is needed for real moral reasoning.


Innate perceptual mechanisms DO exist. Our inability to perceive more than three things (and the need to employ concepts i.e. 4, 5, 6, 7, 1,000, Infinity, Zero, etc.) is determined (good or bad) by our biology.  If an alien from Alpha Centauri were capable of automatically "grasping" that 567,896 + 875,435 x 45/3 = 13,699,421 - without resorting to mathematical algorithms - then his mathematics would be entirely different from ours.  Pi would not need to equal 3.1415926...  He might have no concept of irrational numbers.

 

Also, we state that a^2 + b^2 = c^2.  But suppose that a jellyfish-like, intelligent creature existed in total aquatic darkness in the icy seas of Ganymede, and had no eyes to form or even conceive of triangles -- he would have no need, or ability, to conceive of straight lines, triangles, degrees or the idea of Pythagoras's Theorem.  Our math and process of differentiation and integration (i.e. concept formation) is determined and set by our perceptual limitations.  Our concepts are not Ontological; they are Epistemological.

 

We cannot write algorithms (i.e. computer code) that somehow magically become sentient.


Bear with me, because these are ideas I'm working with....  I've been giving a good deal of thought lately to traditional mathematical foundationalism, how it can relate to known perceptual limits, and its relationship to Objectivist Epistemology.

 

If I presented you with the following:  6872265698, you would have trouble "grasping it".

 

But if I broke it down - algorithmically - as follows:  (687) 226-5698 you could easily remember it.  Because instead of trying to remember 10 discrete "things" you now only need to remember a "phone number" which is always composed of "three" things.  (And if you live in Area Code 687 and frequently placed calls, it will be even easier.)

 

Louie posited that an "epistemic subroutine" would need to be written to achieve understanding.  However, computer code is just a formal set of rules for symbolic manipulation that exists independent of what is actually being manipulated.  If the number above had been (564) 982-6223, you would just as easily have been able to memorize it by employing the same method (algorithm).  The algorithmic rule of breaking down large numbers into smaller sets of numbers exists independent of the actual numbers:  a+b=c is true for 1+2=3 as well as 1,448+16,798=18,246.  The algorithm is a METHOD of breaking down information to a perceptible level at which we can apprehend and manipulate it.  It is not knowledge itself.

The question I'd like the answer to is "What IS intelligence?", not what it isn't.


We cannot write algorithms (i.e. computer code) that somehow magically become sentient.

Why not? What makes sentient beings magical? What makes it so we would need magic, rather than a better understanding of what they are, to re-create their functioning?

It's one thing to say "man will never become God". There is no such thing as God; we've never seen one, it's just an arbitrary concept. But we've seen plenty of intelligent life forms. We know they exist. There is nothing arbitrary about the suggestion that, if we try hard enough, we should be able to make one (out of different materials than we're currently making them out of :) ). It's absurd to suggest that we couldn't; that suggestion implies that the human mind is a supernatural entity.

Edited by Nicky

Nicky,

Nowhere in any post do I suggest that the brain operates by magic or is a supernatural entity.  That's exactly what I'm arguing against.  I will dismiss completely the idea that the computer architecture currently employed will ever create AI.  Jeff Hawkins, the founder of Palm Pilot, Treo, etc., has been exploring a sort of cortical algorithmic code that can "learn", and he is also in the process of creating hardware that will support this approach.  You can learn more about it at NuPIC.  You might want to download his white paper if you're interested in this kind of thing.


I watched through the musical playback and anomaly detection demos. Where is this kind of stuff on New Scientist and ScienceDaily?

 

I have to ask: are Jeff Hawkins and company building a new type of computer, or working on a mechanical replication of a brain?

 

Still, the output is only relevant to the human observer, as far as I can tell. The input, supplemental programs and the layered cortical synthesizer are still operating in accordance with the laws of physics.


1) New Scientist and Science Daily are "after the fact" newspapers.  Use Google Scholar to explore where current thought is headed.

 

2) Hawkins is not building a new type of computer that is "replicating" the brain.  He believes that he understands the structure of the cortex (which is only a SMALL part of the brain) and is trying to emulate it by storing Information over TIME.  This is something novel, and something which current computer architecture/programming is either not capable of, or not interested in doing -- but until he creates the appropriate hardware, he can only create a poor emulation of it.  I happen to believe that he is headed down a cul-de-sac, and think that the brain structures itself around the tuned frequencies of the input gathered by the senses.  The link sends you to a paper that doesn't exactly address my ideas, but does seem to support them.

 

3) Not sure what you mean that the "layered cortical synthesizer are still operating in accordance with the laws of physics."  Surely you are not suggesting that the mind operates outside the laws of physics?  But you are right, Hawkins in NO WAY believes that he can create a human mind, either via hardware or software.  I sorta punted on my reply to Nicky, because I just didn't want to get into explaining the impossibility to him.

 

Hawkins has multiple YouTube videos if you are interested.


All he needs to do is create better criteria for the robot to base its choice on: when to try to save just one person (whichever is easiest, or whatever other criterion he wishes to use), and when to go for both.

 

Well, I doubt whether the Three Laws can ever really be made to work properly because they're deontological. 

 

When two people are about to die, and either could be saved with equal effort but one must be chosen, we could give it some method of weighing its options (age, health, moral character, et cetera) but we could never really give it the 'right' answer because there's ultimately no reason for the commandment itself.

 

Some pseudo-random solution would probably be the closest we could get it to behaving properly.
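
Something like the following sketch, perhaps (the names pick and weight are made up for illustration): weigh the options however you like, and when the weights tie, let a pseudo-random draw break the deadlock rather than stall.

import random

def pick(options, weight):
    # Score every option; break exact ties pseudo-randomly so the
    # chooser always commits to something instead of freezing.
    best = max(weight(o) for o in options)
    return random.choice([o for o in options if weight(o) == best])

# e.g. two people in equal peril, equal effort to save either:
# pick(["A", "B"], weight=lambda person: 1.0)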

 

We cannot write algorithms (i.e. computer code) that somehow magically become sentient.

 

Do you believe that you are quantifiable?  Not the number of fingers you possess or the wavelengths of light you can see; YOU.


Nicky,

Nowhere in any post do I suggest that the brain operates by magic or is a supernatural entity.  That's exactly what I'm arguing against.

Yes, you did. You said that "We cannot write algorithms (i.e. computer code) that somehow magically become sentient." The logical conclusion of that statement is that magic is a requirement for sentience.

 

That's an obviously false conclusion, thus making your argument null and void. You're welcome to rephrase it, so that no obviously false conclusions can be drawn from it, and then I'll try and prove it wrong in another way. But, as it was written, my rebuttal was quite good, I thought.

 

To continue my parallel between AI and God, it is true that rational people often use the adjective "magical" to mock religious beliefs. And correctly so, because the object of those beliefs IS magical. Sentience isn't. Mocking the desire to re-create it by using the word "magical" is not rational.

Edited by Nicky

@Nicky #23

 

I apologize.  I didn't realize you were employing an argument along the lines of the Law of Excluded Middle to refute what you perceived to be the illogic of my position.  Let me think about the issue a little more and see if I can't clarify - for both you and me! - what my position is.

Edited by New Buddha

1) New Scientist and Science Daily are "after the fact" newspapers.  Use Google Scholar to explore where current thought is headed.

 

2) Hawkins is not building a new type of computer that is "replicating" the brain.  He believes that he understands the structure of the cortex (which is only a SMALL part of the brain) and is trying to emulate it by storing Information over TIME.  This is something novel, and something which current computer architecture/programming is either not capable of, or not interested in doing -- but until he creates the appropriate hardware, he can only create a poor emulation of it.  I happen to believe that he is headed down a cul-de-sac, and think that the brain structures itself around the tuned frequencies of the input gathered by the senses.  The link sends you to a paper that doesn't exactly address my ideas, but does seem to support them.

 

3) Not sure what you mean that the "layered cortical synthesizer are still operating in accordance with the laws of physics."  Surely you are not suggesting that the mind operates outside the laws of physics?  But you are right, Hawkins in NO WAY believes that he can create a human mind, either via hardware or software.  I sorta punted on my reply to Nicky, because I just didn't want to get into explaining the impossibility to him.

 

Hawkins has multiple YouTube videos if you are interested.

Google Scholar - noted.  

 

To my layman mind, the parallels Hawkins draws between the neurons of the brain, the layers in the brain, and what he is trying to do with the software and hardware are striking.

 

No, I'm not suggesting that the mind operates outside the laws of physics. The musical "learning" - versus recording on my computer from a piano via a MIDI jack - produced playbacks that, after repetitions, came to resemble the repeated musical piece more and more closely. The pattern recognition software in conjunction with the hardware is obviously not recording like a computer connected to a piano with a MIDI interface. A Mozart story relates him having listened to a musical piece and later being able to replicate it. All are operating according to physical law. Mozart was conscious of the musical piece while it was played the first time. Hawkins' machine is a product of consciousness.
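
For what it's worth, the flavor of playback improving with repetition can be suggested with a toy sequence memory - emphatically not Hawkins' HTM algorithm or the NuPIC API, just the simplest thing that behaves that way: each exposure records a few note-to-note transitions, and replay follows whatever has been learned so far.

import random

melody = ["C", "D", "E", "F", "G", "A", "B", "C2"]
transitions = {}   # note -> the note that followed it

def hear_fragment(piece):
    # Each exposure catches only a short stretch of the piece.
    i = random.randrange(len(piece) - 2)
    frag = piece[i:i + 4]
    for prev, nxt in zip(frag, frag[1:]):
        transitions[prev] = nxt

def play(start, length):
    out = [start]
    while len(out) < length and out[-1] in transitions:
        out.append(transitions[out[-1]])
    return out

for rep in range(1, 6):
    hear_fragment(melody)
    # As transitions accumulate, playback tends to track more of the piece.
    print(rep, play("C", len(melody)))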

