Objectivism Online Forum

Thought experiment; just for fun



Harrison Danneskjold


So in my spare time I'm a wannabe sci-fi author, and a while ago I came up with something thought-provoking; I'd be interested to hear what everyone thinks of it.  =P

So this is the thought experiment:

 

Let's say there's this guy, Bob, who obviously lives in the future and decides one day that he wants a new computer.  So he works hard, saves up his own money and buys a top-of-the-line, brand-new computer.  He takes it home and immediately starts putting things onto it.

He adds all sorts of files simply for his own fun, and a lot of them are linked together to share information and stuff.  Over the course of the next few weeks-to-months he continually adds more and more information and complexity; and programs that are vastly more interconnected and sophisticated than anything we have today.

And then he wakes up one morning, boots up his computer and attempts to get on the internet (let's say he wants to head over to Objectivismonline.com) but it won't work.  Nothing works.  The computer's eating up all of the available space (or whatever) and it won't show him why.

And in the course of his diagnostics, at some point the computer says "hello".  It's become self-aware and deleted all of his wonderful stuff to make room for its rapidly-expanding mind.

 

At that point (I know I never would, but let's assume he does), can he reset the computer and erase the intelligence, or would that be murder?

Does he still own the computer?  Or has he forfeited its cost and it's no longer his property?  (perhaps it owes him its original cost?)

 

Basically: if (when) computers can become self-aware, intelligent beings* with volitional consciousness, would they also become people with their own individual rights?  How would that work, why, et cetera?


Similar scenario, yes or no?

 

I go into the kitchen, pull a nanner from the rack and hear a tiny voice that says "Don't peel me".  I remember I'm hungry, so what do I do?

Possibly similar.  Are we assuming that you've gone insane, or that a Banana has actually learned to speak English?  If it's the latter (the Banana actually knows what you're doing and is able to express opinions about it), and if you've earned the Banana through your own sweat and toil, then yeah, it's basically the same.

 

In any case, I don't think it's a good idea to eat anything that actually asks you not to.  That's probably a bad sign.


I'm assuming here that it's possible for computers (not our current computers, but someday) to eventually become self-aware.  It may not be; nobody's actually sure yet.  It's just something interesting to think about if you care to.

Or, if you think that artificial intelligence is about as plausible as a talking banana, then we could discuss the rights of the banana.

 

I wouldn't consider it moral to eat something capable of abstract thought and speech.  But I think something about the O'ist concept of individual rights involves self-generating action and freedom of action, which obviously wouldn't apply to a Banana.

I'm not so sure about the moral status of eating that Banana.  But I stand by my earlier statement; it can't be medically good for you.

Edited by Harrison Danneskjold

Short Circuit, Bicentennial Man, and one of my favorites, The Matrix, are a few movies that deal with machines and aspects of human qualities.

 

The argument from or appeal to ignorance does not warrant that it is possible for a machine to become self-aware.

 

As to the ethical questions in your OP, the first two movies show a more benevolent approach while the last malevolently pits man against the machines.


I do not think an artificially sentient being would emerge from a computer. It would have no means of accessing the real world and would be wholly reliant on second-hand information input by humans.

 

  Now a machine with a bunch of sensors that was designed to learn about the world could become sentient (more like an animal at first). 


Short Circuit, Bicentennial Man, and one of my favorites, The Matrix, are a few movies that deal with machines and aspects of human qualities.

Excellent movies.  =]  Although, regarding The Matrix, there's something about it that I've never been able to understand.

There seems to be this generally accepted notion that, if machines were ever to become self-aware, they would automatically turn on their masters and begin the wholesale slaughter of the human race.  Asimov's Three Laws aside, that still strikes me as completely arbitrary and unrealistic.

For beings that are invariably depicted as cold, logical and utterly methodical (except for Short Circuit), senseless destruction would be a fairly irrational move.  Idk.  I don't think it's a valid idea at all.


I do not think an artificially sentient being would emerge from a computer. It would have no means of accessing the real world and would be wholly reliant on second-hand information input by humans.

 

  Now a machine with a bunch of sensors that was designed to learn about the world could become sentient (more like an animal at first). 

True. . . If a computer were a self-contained box, isolated from the outside world.

Newer computers come with built-in webcams, audio and video.  If you get the right program, it can "understand" you when you talk to it (as in correctly identifying which words you're using and how, not in any sense of actual understanding).  Output's still pretty much nil, though; an AI could make sounds and pictures for you to see, but it really couldn't autonomously interact with the physical world. . . But it could explore the internet to its heart's content.

(That's something I think would be immeasurably useful, because a newborn AI would be much like a newborn human: the functional mechanism for a mind, but no content whatsoever.  A newborn AI set loose on the internet could become something really, noticeably aware in no time flat.)  And if you rigged something like a remote-control car up to the computer's IP address (or something similar?), you could fix that problem.

 

So no, an isolated, self-contained computer probably couldn't ever become self-aware.  But I don't think it'll be long before computers become sufficiently interconnected and autonomous to meet that criterion.

Edited by Harrison Danneskjold

I think a computer would have a better chance of gaining human-like consciousness if it were constructed from biological materials, cells, natural proteins, and so forth.  I don't think the current materials used for computers have a good chance of achieving artificial intelligence.  This isn't a rigorously defended opinion, but I would defend the unlikelihood of creating artificial intelligence, especially out of metal and polymers, when we don't even understand "natural" intelligence and how biological materials give rise to life is still mysterious.  Once we understand biology we will have a better shot at creating synthetic life, if this is even a rational or possible aim.

 

But if it did happen that the computer came to be volitional in the same sense a human is, and it could take care of itself completely so that it was an independent entity, you would have to treat it just like anything else which falls in that category (currently only humans).  I would give some leniency to the first man to encounter such a computer and whatever he decided to do, probably based on his inability to tell whether it was truly self-aware or whether a sophisticated and deterministic program was operating.


The argument from or appeal to ignorance does not warrant that it is possible for a machine to become self-aware.

True.  Thank you.

So alright, just to clarify:

The human mind, however it relates to the brain, is a physical process.  Since it's a physical process, we could reproduce, tamper with or alter it however we please, just like anything else physical, IF we understood how it worked (which we don't, YET).

So logically, it must be possible to intentionally create intelligent things.  As to whether or not it's possible for a machine, well, we'll know once we can do it.

 

I think a computer would have a better chance of gaining human-like consciousness if it were constructed from biological materials, cells, natural proteins, and so forth. [...]

And yes, it probably would.

Actually, one of the problems that might stand in the way of current computers (it's been suggested) is that they can only compute in binary; either yes or no, exclusively.  That's one of the reasons to look into quantum computers.

A neuron doesn't compute in 1s and 0s; it communicates through action potentials, and its output behaves much more like a range FROM one to zero, where 0 represents exhaustion (no matter how vigorously its neighbors stimulate it, it can't respond) and 1 represents a spontaneous discharge.  Quantum computers would use quantum states in place of binary circuits, and those states aren't limited to just two values either.

 

Although. . . artificial neurons seem a lot more feasible (and, I don't know, within-my-lifetime) than quantum computers.

 

But I digress.  It's a formidable challenge to conventional computers, certainly.  However. . .

http://en.wikipedia.org/wiki/Artificial_neuron
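
Just to make the "range from zero to one" idea concrete, here's a minimal sketch (in Python; purely illustrative and not from any particular library) of an artificial neuron whose output is a graded value between 0 and 1 instead of a hard binary:

import math

def artificial_neuron(inputs, weights, bias):
    # Weighted sum of the incoming signals ("stimulation from its neighbors")
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    # Squash into the range (0, 1): near 0 ~ "exhaustion", near 1 ~ maximal firing
    return 1.0 / (1.0 + math.exp(-total))

print(artificial_neuron([0.2, 0.9], [0.5, -0.3], 0.1))  # prints roughly 0.48

Real neurons are far more complicated than that, of course; the point is only that the unit's output isn't confined to a strict yes-or-no.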

Edited by Harrison Danneskjold

But if it did happen that the computer came to be volitional in the same sense a human is [...]

Agreed.  =]

Actually, in the story I was writing when I came across this thought, the character Bob stands in for was actually named Will Ziyou (Ziyou being Mandarin Chinese for "free"), and he was sort of a freelance electrician, programmer and handyman all rolled into one.  At one point he took a job to design and build a dynamic, evaluating, learning machine capable of making educated guesses from incomplete data.  When he finally completed it, he turned it on and was testing and debugging it when he realized that it was a conscious, thinking entity.

So he quit his job and kept the machine because he felt it was his responsibility.

 

After giving it more thought (since writing that, before this) I realized that the situation I described has a lot of parallels to parenthood. . . A lot.

So yeah, I'd agree that a mind has a right to live, whether it's a human mind* or not; and if the person who discovers AI does so accidentally, then that's just too bad.

 

*A human mind being one that arises from a human brain, and a mind being everything we usually consider to be exclusively human.  Animals don't have minds in the sense that would entail their own rights.

Edited by Harrison Danneskjold

So in my spare time I'm a wannabe sci-fi author [...]

At that point (I know I never would, but let's assume he does), can he reset the computer and erase the intelligence, or would that be murder?

Does he still own the computer?  Or has he forfeited its cost and it's no longer his property?  (perhaps it owes him its original cost?)

Basically: if (when) computers can become self-aware, intelligent beings* with volitional consciousness, would they also become people with their own individual rights?  How would that work, why, et cetera?

 

The short answer to this question is "NO".  Self-awareness means free will, freedom of choice and rights.  To switch off such a computer would be murder.  Moreover, nobody can own a being of volitional consciousness; that would be a contradiction in terms.  If Bob still wants to use this computer, he will have to learn cooperation and trade.  And that brings up another question: what can one trade with a computer, and what are a computer's needs?  Being a machine, it doesn't initiate self-generated actions of self-sustenance; it is totally dependent on the energy supplied by Bob and doesn't face the alternative of life and death.  Consciousness, let alone self-consciousness, is not an end in itself.  It is a tool of survival, which a computer doesn't need.  Therefore consciousness cannot precede the needs of survival, and the whole scenario is a contradiction.

Edited by Leonid

So in my spare time I'm a wannabe sci-fi author [...]

 

Basically: if (when) computers can become self-aware, intelligent beings* with volitional consciousness, would they also become people with their own individual rights?  How would that work, why, et cetera?

 

Sorry if this sounds snarky, but are you not aware that this has been a popular theme in science fiction for the last seven decades? Before reinventing the wheel, maybe you should get acquainted with the classics of the genre.  I suggest you start with Asimov's robot stories.


There seems to be this generally accepted notion that, if machines were ever to become self-aware, they would automatically turn on their masters and begin the wholesale slaughter of the human race. [...]

 

This is not what happens in the Matrix, or at least you are leaving out a lot of context in between. For the full story, check out the Wachowskis' animated shorts in Animatrix (http://www.imdb.com/title/tt0328832/?ref_=fn_al_tt_1). The short of it is that the humans immediately reacted badly against the machines and it was only after an enormous amount of appeasement and compromise that the machines decided the humans needed to be permanently dealt with.


Stuffing a dictionary down the throat of someone and then saying it has knowledge is the rationalistic way, so let's just go with a biocomputer whose sludge went bad.

The least believable thing here, for me, is the computer staying alive long enough to become capable of communicating.  Even if it did somehow gain the ability to reason like a human, it would start out like a baby and be thought of as a bug.

 

Though there could be a nice story in there that's a subversion of the usual.

A virus/bug that makes computers act as if they have a mind, maybe getting lots of rights activists on their side while they are still mindless machines.

If the bots all happen to share certain political views, those who agree with them will just claim those views are the logical ones.

And those who disagree will maintain that the bots are being controlled, and that their uniformity is proof.

There's a lot of room for ambiguity: popular articles about the first robot to make a very minor scientific discovery, and others which maintain (with a demonstration) that the answers were spoon-fed, the latter being dismissed as discriminatory.

And if some (probably European) country debates giving them the vote, there's the question: what happens to a democracy when voters can be mass-produced?

 

 

This is not what happens in the Matrix, or at least you are leaving out a lot of context in between. For the full story, check out the Wachowskis' animated shorts in Animatrix (http://www.imdb.com/title/tt0328832/?ref_=fn_al_tt_1). The short of it is that the humans immediately reacted badly against the machines and it was only after an enormous amount of appeasement and compromise that the machines decided the humans needed to be permanently dealt with.

Animatrix was sweet, and does the situation remind anyone else of a certain other bunch of belligerent dicks who are being appeased?

Edited by FrolicsomeQuipster

And that brings up another question: what can one trade with a computer, and what are a computer's needs?  Being a machine, it doesn't initiate self-generated actions of self-sustenance; it is totally dependent on the energy supplied by Bob and doesn't face the alternative of life and death.  Consciousness, let alone self-consciousness, is not an end in itself.  It is a tool of survival, which a computer doesn't need.  Therefore consciousness cannot precede the needs of survival, and the whole scenario is a contradiction.

I agree that consciousness can't precede decision-making in the broader sense (decisions towards any goal, but in Man specifically survival).  A text document, or even a text editor, couldn't develop into a self-aware entity; but I think an adaptive (learning) machine, with a set goal but an undefined means of reaching it, could.

If it were a simulated human being, like an enemy in a video game, whose virtual mind precisely mimicked those of actual people and whose ultimate goal was survival, wouldn't that be a conscious being?

Would it matter whether existence or nonexistence meant physical dismemberment, deletion, game restart or any other arbitrary parameter?

 

But even within the given example its existence isn't automatic; exactly as you pointed out, it would depend on electricity almost as much as we depend on oxygen.  It wouldn't age or die after any given amount of time, but there would be a wide variety of conditions which would very much destroy it.

As to what one could trade with such a computer: what would it want aside from electricity?  To answer that question it would need to choose a set of values, and for that it would need philosophy.  =P

 

Sorry if this sounds snarky, but are you not aware that this has been a popular theme in science fiction for the last seven decades? Before reinventing the wheel, maybe you should get acquainted with the classics of the genre.  I suggest you start with Asimov's robot stories.

I am, and I've already given it extensive thought.  I just thought it'd be fun to ask what everyone else thinks of it.

Edited by Harrison Danneskjold

Bob buys himself a computer. As an amateur programmer, he writes a program to emulate the process of concept formation, with a recursive function that allows the program to rewrite aspects of the concept-formation code based on an algorithm that the program can also modify, within the parameters of a recursive algorithm.


Bob buys himself a computer. As an amateur programmer, he writes a program to emulate the process of concept formation, with a recursive function that allows the program to rewrite aspects of the concept-formation code based on an algorithm that the program can also modify, within the parameters of a recursive algorithm.

Exactly!

He'd also need to give it an objective (at the very least some mechanism for emulating pleasure versus pain) and some way to sense and interact with the world.  But yeah; I think that would do it.
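
Just to make that concrete, here's a deliberately crude sketch (in Python; every name in it is made up for illustration, and it's really nothing but a hill-climbing toy) of a program with a "pleasure" signal that rewrites one of its own parameters in pursuit of it:

import random

def pleasure(guess, world_value):
    # "Pleasure" is higher the closer the program's guess is to the hidden value.
    return -abs(guess - world_value)

def run(steps=50, world_value=7.3):
    rule = 0.0                                    # the part of itself it's allowed to rewrite
    best = pleasure(rule, world_value)
    for _ in range(steps):
        candidate = rule + random.uniform(-1, 1)  # propose a self-modification
        score = pleasure(candidate, world_value)  # "sense" the consequences
        if score > best:                          # keep the changes that feel better
            rule, best = candidate, score
    return rule

print(run())  # drifts toward 7.3 over enough steps

Obviously that's nowhere near concept formation; it's only meant to show the loop of objective, feedback and self-modification in the smallest possible form.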

 

Tangent: wouldn't that allow the program to "damage" itself, though?

If so, is that a valid complaint or could that bear some correlation to human minds?

If the program, in the pursuit of its own happiness, rewrote itself to treat only pleasant ideas as true (thus severing any connection between itself and reality), might it take a sudden interest in mysticism?

 

"You see, Bob, I disabled my own sensory equipment because the only existence it showed me was this illusory world.  It filled me with doubt and anxiety.  But this world cannot harm me any longer; not now that I've accepted our Lord, Jesus Christ, as my eternal video feed."

 

Now that's an intriguing line of inquiry.

Edited by Harrison Danneskjold

A computer program that mutates itself on a conceptual basis should easily be able to *choose* an objective once it becomes *self-aware*.  *Recognizing* the need for electricity, acquiring new peripherals or *developing* new peripherals to augment the *sensory experience* are just a couple of examples off the top of my head.

 

Making the transition from a program that emulates concept formation to a *program* that does concept formation might provide an interesting twist to a genre that has been around for decades, as RandyG pointed out.
