Objectivism Online Forum

Sarah Connor vs. Terminator



EC


The last few weeks I have been enjoying Terminator: The Sarah Connor Chronicles, Monday nights on Fox, a spin-off of the Terminator movies. It's a good show, and if you like sci-fi you should give it a try. But my purpose in creating this thread is deeper than just recommending a new TV program.

Sarah Connor wants most of the A.I. creatures -- the Terminators -- dead, and for good reason, as they are out to destroy humanity. Her obsession with killing all the bad robots seemed to have morphed into a hatred of *all* technology, which I thought might ruin the series -- at least for me. However, the series seems to be taking a much deeper twist in the character of Cameron, the teenage female Terminator sent back in time by Sarah's son John to protect, well, himself.

At first I thought the series would just play on the standard emotionless-A.I. trope. I was wrong. If you remember, the movies also showed that such creatures may not lack emotions -- for example, the Governator and John bonding in T3.

The main point I'm trying to make, and what I want this thread to be about, is this: if a true artificial intelligence is ever developed, it would by definition have to be an emotional creature.

While it's true such an entity would not, at least at first, have the chemical components that heighten human emotion, it would still have to possess emotions in some quantity.

For this creature to be intelligent, and not just a machine performing cumbersome calculations and work, it would have to pursue values that further the creature's "life" and avoid those that lead to its destruction. Assuming that the robot is taking these actions for some purpose of its own choosing, it must be getting some payoff for them. The payoff is a rational emotional response.
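To make that concrete, here is a minimal sketch (in Python, with entirely made-up names and numbers) of what "pursuing values with an emotional payoff" might look like: the agent picks whichever action it expects to further its "life" metric, and the payoff signal is simply the resulting change in that metric.

```python
# Illustrative sketch only -- every name and number here is hypothetical.

def choose_action(expected_effects: dict[str, float]) -> str:
    """Pick the action expected to do the most for the agent's 'life' metric."""
    return max(expected_effects, key=expected_effects.get)

def payoff(old_life: float, new_life: float) -> float:
    """The 'rational emotional response': positive when 'life' is furthered,
    negative when it is diminished."""
    return new_life - old_life

life = 10.0
# Hypothetical expected effect of each available action on 'life'.
expected_effects = {"recharge": +2.0, "idle": 0.0, "enter_hazard": -5.0}

chosen = choose_action(expected_effects)
print(chosen, payoff(life, life + expected_effects[chosen]))  # recharge 2.0
```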

As Objectivists we know that emotions are not uncaused. If you are a rational man, i.e. you possess a rational *consciousness*, you therefore possess a rational emotional response to your actions, assuming you have not accepted irrational premises. My "theory" is that there is no "line" separating a true "artificial" consciousness from a "real" one, because A is A. A consciousness is a consciousness, and consciousnesses possess emotions. Whatever nature can create, so can man -- eventually, given the right incentives.

Given the nature of reality, if man creates a conscious being then that being will possess emotion, because conscious beings possess emotion. A is A.

I'm just glad that a television show seems to be recognizing that fact of reality in Cameron. Seeing her dance ballet at the end of the last episode, simply because she wanted to, shows that they are.

Link to comment
Share on other sites

I think that it isn't necessary (or even likely) for an artificially intelligent machine to have emotions. Don't forget that when a scientist talks of Artificial Intelligence they mean a machine with the ability to reason, and by reason they are just talking of an extrapolation of the 2+2=4 kind of functions that computers currently perform, only instead of 2+2=4 the equation is something like WALL TO LEFT + WALL TO FRONT = TURN LEFT.
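For what it's worth, that kind of "extrapolated 2+2=4" reasoning can be written down directly. Here is a toy sketch (Python, purely hypothetical; the one rule above is taken verbatim, the other entries are invented) -- just a condition-to-action lookup, with no values or feelings anywhere in it:

```python
# A purely mechanical rule table, as described above -- no values, no payoff
# signal, just condition -> action lookup.  Entries other than the one quoted
# in the post are invented for illustration.

RULES = {
    # (wall_to_left, wall_to_front): action
    (True, True): "TURN LEFT",      # the rule quoted in the post
    (True, False): "GO FORWARD",
    (False, True): "TURN RIGHT",
    (False, False): "GO FORWARD",
}

def react(wall_to_left: bool, wall_to_front: bool) -> str:
    return RULES[(wall_to_left, wall_to_front)]

print(react(wall_to_left=True, wall_to_front=True))  # TURN LEFT
```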

What purpose would an emotive machine have? What advantage would it have?

Link to comment
Share on other sites

What purpose would an emotive machine have? What advantage would it have?

The apparent presumption of your question is that the consciousness was created with emotion for a purpose, rather than that the consciousness was created and the emotion developed after the fact. The purpose of emotion for such a consciousness would be the same as it is for us: to serve as a barometer regarding things that affect our values. An AI that was conscious would be conscious of itself. If it were conscious of itself, it would likely be aware that it can either continue to exist or cease to exist. IF (and yes, that is a big if) it valued continuing to exist, emotions would serve either as warning flags regarding dangers to its existence, or as bells and whistles for things furthering its existence.
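One way to picture that "barometer" is a signal computed over changes to the thing the AI values -- its continued existence. A hypothetical sketch (Python, not anyone's actual design): events that threaten existence raise the warning flag, events that further it ring the bells and whistles.

```python
# Hypothetical sketch of emotion-as-barometer: classify each change to the
# agent's prospects of continued existence as a warning or a reward signal.

def appraise(delta_to_existence: float) -> str:
    if delta_to_existence < 0:
        return "warning flag: existence threatened"
    if delta_to_existence > 0:
        return "bells and whistles: existence furthered"
    return "neutral"

# Made-up events and magnitudes, purely for illustration.
for event, delta in [("power cell damaged", -3.0), ("backup power secured", +2.0)]:
    print(event, "->", appraise(delta))
```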

At least that is my take on what he's suggesting.

Link to comment
Share on other sites

The apparent presumption of your question is that the consciousness was created with emotion for a purpose, rather than that the consciousness was created and the emotion developed after the fact. The purpose of emotion for such a consciousness would be the same as it is for us: to serve as a barometer regarding things that affect our values. An AI that was conscious would be conscious of itself. If it were conscious of itself, it would likely be aware that it can either continue to exist or cease to exist. IF (and yes, that is a big if) it valued continuing to exist, emotions would serve either as warning flags regarding dangers to its existence, or as bells and whistles for things furthering its existence.

At least that is my take on what he's suggesting.

I understand your point, though I'm not sure consciousness would necessarily lead to emotion in a machine. Is the decision of an AI to exist (or not) necessarily emotional? Could a "conscious" being, be it biological or mechanical, not make value decisions without emotion? Can things not be beneficial or harmful without being good or evil?

I don't know one way or the other, I'm just asking the questions.

Link to comment
Share on other sites

I don't know one way or the other, I'm just asking the questions.

I wasn't so much supporting his premise as I was interpreting it and answering the question regarding the emotions' purpose.

That said, I'm not aware of a conscious thing that does not have emotions. I'm not sure why a mechanical consciousness should be excluded just because it is mechanical. Consciousness is consciousness.

Edited by RationalBiker
Link to comment
Share on other sites

Why is it important to people whether conscious things have emotions? I think the primary reason is the association of emotion with morality. If morality cannot be rationally justified, non-emotional things cannot have any rights - hence the need for the animal rights movement to prove that animals as primitive as rabbits and fish can "feel pain." If a rational entity is non-emotional, then it must be an egotistic sociopath - since egoism means sacrificing others for one's whims. Thus the fear of killer robots - the inevitable product of cold-blooded, "selfish" reason unrestrained by emotional sentiment.

So, does general AI require emotion? Well, emotions are a type of thinking. We know that the nature of the human mind requires us to rely on emotion in addition to explicit logic. We won't know for sure what the requirements of an artificial intelligence are until we encounter one. It might be that all intelligence requires something like emotion, or something in between, or perhaps emotion is just a quirk of biology.

Link to comment
Share on other sites

Emotions are automatic responses, the "reflexes" of the mind. Animals seem to have developed emotions before reason (emotional responses are observable in animals), but this does not mean that they are a prerequisite for reason. In fact, for humans, emotions are only truly useful because we think so slowly (consider how long you take to fully integrate the consequences of a meaningful event in your life). None of this needs to hold true for the hypothetical rational machine.
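The "reflex" point can be pictured as a cached, instant appraisal standing in for a slow deliberation. A toy sketch under that assumption (Python, names invented for illustration):

```python
import time

# Hypothetical contrast: a slow, explicit evaluation versus a cached
# "reflex" that returns an already-integrated answer instantly.

def deliberate(situation: str) -> str:
    time.sleep(0.5)  # stand-in for slow, explicit integration
    return "threat" if "fire" in situation else "safe"

REFLEXES = {"fire nearby": "threat"}  # pre-integrated, automatic responses

def respond(situation: str) -> str:
    return REFLEXES.get(situation) or deliberate(situation)

print(respond("fire nearby"))  # instant -- the "emotional" reflex
print(respond("quiet room"))   # no reflex; falls back to slow deliberation
```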

Link to comment
Share on other sites

So, does general AI require emotion?

I emphatically answer YES. To be truly rational means to possess volition - but without the capacity to feel, without happiness being possible to the AI, it has no reason to choose to perform any thinking. Without emotion there will be no cognition, irrespective of what it is technically capable of.

JJM

Link to comment
Share on other sites

I'm just glad that a television show seems to be recognizing that fact of reality in Cameron. Seeing her dance ballet at the end of the last episode, simply because she wanted to, shows that they are.

I have noticed what seems to be a mature attitude when it comes to writing for Cameron; I thought it may be coincidental at first - after all, they've only had 4-5 episodes - but the dancing scene hinted at an interesting depth of character. I hope they continue down that line with her.

Also powerful was Derek Reese's reaction to her graceful movement - I saw it as a mix of curiosity, fear, and disgust. (Of course, kudos to Summer Glau; it was a very beautiful dance.)

So far I've enjoyed SCC and hope that it takes on a life of its own come Fall ... and that Fox doesn't do to it what they did to Glau's other show - I'm still bitter about that one!

Link to comment
Share on other sites

Regarding the issue of AIs and emotions - and I've put a lot of speculation into this, being the sci-fi fan that I am - the question that comes to mind at every turn is "What would an AI value?"

Generally, we experience joy at the achievement of a value, sadness at the loss of a value, fear when those values are threatened, and frustration when someone prevents us from achieving them - whether that value is gaining a new love or buying a hot new gadget from Best Buy. To the degree that we value something in our individual hierarchies, the accompanying emotional reactions vary.
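That mapping -- joy at gain, sadness at loss, fear at threat, frustration at obstruction, all scaled by where the value sits in one's hierarchy -- can be sketched directly. A purely illustrative Python fragment with made-up weights:

```python
# Illustrative only: the kind of emotion comes from what happened to the
# value; the intensity comes from where the value sits in the hierarchy.

HIERARCHY = {"a new love": 10, "hot new gadget from Best Buy": 2}  # made-up weights

EVENT_TO_EMOTION = {
    "achieved": "joy",
    "lost": "sadness",
    "threatened": "fear",
    "obstructed": "frustration",
}

def react(value: str, event: str) -> tuple[str, int]:
    return EVENT_TO_EMOTION[event], HIERARCHY[value]

print(react("a new love", "achieved"))                # ('joy', 10)
print(react("hot new gadget from Best Buy", "lost"))  # ('sadness', 2)
```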

But what could a robot value that would prompt emotional reactions to events which affect those values? Look at just the fundamental value: survival.

1. Can a robot value its "life" if the software that makes the robot what it is can be reinstalled on another machine? I doubt it - threatened, the robot might protect itself as a programmed reaction, but having a backup would take away its incentive to do so outside of its programming. This is displayed by the Cylons from the new Battlestar Galactica - when one is fatally injured, its "consciousness" simply downloads into a new body; it's only when such downloads are threatened by the lack of a replacement model that it fears for its life and fights harder to protect it.

2. Survival is made possible through sustenance, and for a robot this would mean energy. It must be motivated, beyond mere programming, to ensure that its source of energy is readily available. If that energy is something like solar energy, it knows it can operate continually. But if its source of energy were something limited, it must judge the longevity of its operational capacity when physically distant from such a source. However, it would not fear such distance if, like a cell phone, it could be plugged back in and recharged. Therefore its means of existence must be tied to longevity - as long as it's powered up, it's alive, but when the battery is drained, it is no more.

3. Lastly, survival has temporal value. People know they only have one life, and that their bodies weaken over time, so we exercise and diet to ensure our longevity as free of sickness and injury as possible. Would a robot value its safety and "health" if it could live an "unlimited" life through replacement parts? No, because such a value is provided externally; it may be programmed to construct and store replacement parts, but - as Rand said, paraphrasing - "not dying isn't the same as living".

So, just in the realm of survival, for an AI to value its existence - and thus experience "negative" emotions such as fear or sadness - it must be certain that its existence can end permanently, that there is no "afterlife" in another body, that its means of survival is limited, and that its survival will be hampered through physical damage and breakdown.
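Reading the three points above as conditions, they could be restated as a simple predicate -- entirely hypothetical, just the argument put into Python: the agent only has grounds for "fear" if its end would be permanent, its energy finite, and its body genuinely mortal.

```python
# Hypothetical restatement of the three survival conditions above.

def has_grounds_to_fear(
    backup_exists: bool,       # point 1: can its "consciousness" be reinstalled elsewhere?
    energy_is_limited: bool,   # point 2: is its power source finite?
    damage_can_end_it: bool,   # point 3: can physical breakdown actually finish it?
) -> bool:
    """An AI only has something at stake if its existence can truly end."""
    return (not backup_exists) and energy_is_limited and damage_can_end_it

print(has_grounds_to_fear(True, True, True))    # False -- the Cylon case: a backup removes the stake
print(has_grounds_to_fear(False, True, True))   # True  -- no afterlife, finite fuel, mortal body
```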

And that's just survival, the basic value. What does a robot need freedom for? What can a robot attain that will bring it joy? What will inspire impatience as external forces hamper its achievement?

My assumption is that, if a self-aware AI is ever created, it will have at best the self-awareness of a household pet. It may be able to communicate, learn, and even be curious, but any action it commits in the service of self-preservation will be just as it is in an animal: a programmed "instinct" which overrides any other task to which it is set.
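That "programmed instinct which overrides any other task" is essentially priority preemption. A toy sketch of the idea (Python, names invented for illustration):

```python
# Hypothetical sketch: self-preservation as a hard-coded routine that
# preempts whatever task the machine happens to be doing.

def next_action(current_task: str, danger_detected: bool) -> str:
    if danger_detected:
        return "SELF_PRESERVE"  # the programmed "instinct" overrides everything
    return current_task

print(next_action("fold laundry", danger_detected=False))  # fold laundry
print(next_action("fold laundry", danger_detected=True))   # SELF_PRESERVE
```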

This is why, fundamentally, the robots-rise-up-against-man story seems so silly to me. The only one that ever made sense to me was the backstory to the Matrix, where the energy produced by human action was harvested by machines that could get that power from no other source. Still, I'll watch Terminator and Galactica, because ... well, I'm a sucker for robot action. :D

I know, I know - audiences don't take the time unless the robots are destroying things spectacularly, but I would like to see some more positive depictions of robots in sci-fi. Data, R2D2, C3P0, Andrew Martin, and Dr. Who's K-9 just aren't enough, and at times, they're just annoying.

Link to comment
Share on other sites

  • 3 weeks later...
I emphatically answer YES. To be truly rational means to possess volition - but without the capacity to feel, without happiness being possible to the AI, it has no reason to choose to perform any thinking. Without emotion there will be no cognition, irrespective of what it is technically capable of.

JJM

For some reason I didn't think this thread showed up. I thought I had made some mistake in posting it originally and never tried again, so it surprised me when I found this, and surprised me even more that there were replies from so long ago.

Anyway, McVey's quote above was the gist of my idea, although I now need to think about it more since about a month has passed.

Link to comment
Share on other sites
