Objectivism Online Forum

Does This Speech Work?

Prometheus98876


Below I have written a short speech by the main character about why the AI robot she is about to 'switch on' / 'give life' needs to have rights, given that she intends her creation to exercise the rational faculty that has been built into it.

Could you please tell me if this speech works. Is it effective enough? How might it be improved? This version is an early draft that I will be revising many more times, but it will be a very important part of the early section of the book, so it is important that I get it right...

To help put it into context, the main character has been challenged. The challenger does not believe her AI robot will need rights, or that rights can ever apply to a robot.

For more background on the novel concept, see my "Future Novel Concepts" thread.

“A right is a defining moral principle that sanctions a human being's freedom of action in a given context.

My creation is too human in all the ways that I consider to be significant. Sure, it won’t bleed, it will not be able to weep, but such things do not explain why a human has rights. It is not a human's flesh that necessitates rights, it is their mind.

Rights are a consequence of a rational mind, or at least one capable of such. The necessity of rights is derived from the fact that someone must be free to act on their own rational judgement in order to live, to retain consciousness.

My robot will have to do the same if it wants to endure, to remain conscious. I have not given it an assured mode of survival: it must recharge its batteries, and it must keep itself safe, as it is vulnerable to bodily harm. In order to act in its own rational self-interest, it must have rights.

If I tried to deny that my robot needs rights in order to live free, using its mind like I intend, then I would be denying it the means to achieve my goals. I fully intend for this robot to be an approximation of human life, so I must allow it to live. Which also means that I must allow it the tools it needs in order to live.

It is really this simple, and yet I cannot overstate its importance. If you still do not believe me, then keep watching, see just how important rights are to every intelligent mind.”


Assuming that these words reflect your own ideas, and that you intend to express an accurate view of the concept of rights (and to dramatize this concept in your story), I'd say you have your work cut out for you.

Several times you either state or imply that this robot "needs rights" in order "to live" — "to act in its own rational self-interest" — to achieve what it "wants"— and even "to remain conscious."

Needless to say, these are all highly dubious concepts in a man-made, utilitarian object such as a robot. I have a sense that you know this — and, vagaries of science fiction notwithstanding, I take it as a massive Freudian slip when you say:

If I tried to deny that my robot needs rights in order to live free, using its mind like I intend, then I would be denying it the means to achieve my goals.

How can "living free" entail the fulfillment of someone else's goals? How can "rights" ever serve to further the life and aims of someone other than their possessor?

The biggest problem, of course, is your repeated statement about the robot "needing rights." Even if we grant that the concept of rights could apply to a robot — meaning that one could in fact create a robot that is alive, conscious and reasoning — why then its rights would be inherent in its nature as a rational being, and it would be quite pointless to argue about whether it should or should not have them.

Perhaps you mean to say that the robot's rights need to be recognized and respected by others. If so, then perhaps that could be valid. But if this is what you intend to argue, and to demonstrate validly in your story, then you have no choice but to endow your robot with every essential characteristic of a living, reasoning, independent human being. (Which means, of course, that such a being would no longer be a robot in any meaningful sense of the term.)

You have exactly one good sentence: "Rights are a consequence of a rational mind, or at least one capable of such." The rest is either ill-conceived, confusing, or at best extremely poorly put.

Edited by Kevin Delaney

Remember, although it is a man-made construct, it has been built possessing the equivalent of a human mind. It might be stretching the bounds of current AI theory, but remember it is science fiction, and science fiction usually features more advanced technology than modern scientists can create!

The bit you quoted is a point I have noticed that does cause a problem with the speech, and I have since changed it. It should clearly be "its" goals. Perhaps if I make the end of the sentence "its goals and also, I hope, by doing so, mine" or something like that, I can better express what I mean. That way it is clearer that the creator feels the machine's rights are not there to serve her.

Yes, perhaps I should explain a bit better. It would by its nature (if its creator's plans work, anyway) have rights, but whether or not they are recognised is more what I mean.

I disagree that it is not a robot in any meaningful sense.

If you think there are no good reasons for it being robotic, you are wrong. It is a robot and not a man for good reason: it is meant to be a work of art, to express the creator's interests and ability (as well as her wish to create a peer, a consciousness with which she can interact on her level). Also, part of why I decided to make it a robot is to demonstrate the use of 'good' AI (i.e. the fully rational robot) and its opposite, as a caution for future designers of AI systems, and as an expression of some of the goals AI should work towards.

As for being ill-conceived, no, I don't agree. Poorly worded perhaps, but not to the point of confusion. Mind you, as I was wondering, out of context it might be far more confusing to some at least.

Edited by Prometheus98876

What about if I reword it like this? Sorry if I am posting something quite similar to what I have already posted, but please bear with me; I think this is a big improvement...

“A right is a defining moral principle that sanctions a human being's freedom of action in a given context.

My creation is too human in all the ways that I consider to be significant. Sure, it won’t bleed, it will not be able to weep, but such things do not explain why a human has rights. It is not a human's flesh that necessitates rights, it is their mind.

Rights are a consequence of a rational mind, or at least one capable of such. The necessary existence of the rights of rational minds is derived from the fact that someone must be free to act on their own rational judgement at the most basic level, just to live, to retain consciousness. Their rights must be recognised so that they are able to take the actions necessary to act in their own rational self-interest.

My robot will have to have its rights recognised if it wants to endure, to remain conscious. I have not given it an assured mode of survival: it must recharge its batteries, and it must keep itself safe, as it is vulnerable to bodily harm.

If I tried to deny that my robot has rights even though it is to have a mind, then I would be trying to deny that it has the means to achieve its goal of living its life as it sees fit, and therefore attempting to deny the means of achieving my own goal. I fully intend for this robot to be an approximation of the mind, including all that comes as a consequence. Which means that I, and anyone else who considers it to have human intelligence, must, if we wish to be consistent, recognise that it has rights.

It is really this simple, and yet I cannot overstate its importance. If you still do not believe me, then keep watching, see just how important rights are to every intelligent mind. And I will prove to you and everyone else that my creation does have a rational mind, so that maybe then you will come to see why its rights should be recognised."


While I understand that all Sci-Fi involves some degree of stretching credulity (which is part of the reason I don't care for it), you can't just ignore or discard facts whenever they happen not to fit your purpose.

You say that this robot "has been built possessing the equivalent of a human mind." Well, which is it? Does it have a mind or doesn't it? Is it an actual reasoning consciousness — with everything that implies — or is it an unconscious, programmed machine?

You can't have it both ways. Yet that's just what you, along with many other proponents of Artificial Intelligence, are trying to do.

"Mind," remember, denotes a particular kind of awareness. Leaving aside the fact that consciousness is, almost by definition, an attribute of a living entity, one cannot speak intelligently about a thing possessing the capacity to reason while at the same time being a mechanical invention — i.e., a thing capable only of carrying out what its creator has deemed it able to do (i.e., a robot). This would absolutely include any intuitive or learning features the programmer has built in to the design; the robot could become highly "intelligent" (in the truly artificial sense of that term) but it could never be truly be aware; it could never initiate a process of thinking on its own; it would never face the basic choice to focus, or not to bother — to think or not to think — the primary volitional act which characterizes and defines an actual conceptual consciousness.

The basic error at the root of the more outlandish AI projections is the notion that a rational faculty is some isolated phenomenon which can be manufactured and "installed" into a device, such as a computer or robot, at will. Consciousness, like everything else that exists, has a specific identity, and it entails and necessitates a great many things. In the case of a conceptual consciousness, the complexity of the surrounding factors and implications mushrooms exponentially.

For example: If you want to propose a robot which has a rational faculty, will it then be able to experience emotions? (According to you, it will not.) Does your robot have sense organs — is it able literally to see, to hear, to apprehend the facts of reality in a firsthand way, and to identify and integrate this information via an equally firsthand process of thought? Is it able to introspect — to understand itself — to examine its own needs and desires, and to select its own goals — goals which emphatically are independent of any programming, or the will or desire or design of anyone else?

Is the robot truly, in every conceivable way, an end in itself? Or is it, once again, a manufactured object — built, designed, programmed and executed, ultimately and fundamentally, as a tool to serve the needs and purposes of others?

I almost don't want to end that last sentence with a question mark; the answer is much too glaringly obvious.

The funny thing is, you could write a story about an AI, and effectively demonstrate the meaning of mind and the importance of rights, only it would have to be exactly the opposite of what you're attempting. You'd have to show how it is not possible for a mechanical thing to reason, and why such a thing would therefore be exempt from all moral and social issues. You'd have to show your inventor failing, not succeeding, at the task you're proposing to dramatize. By implication, you'd be demonstrating why the concepts of mind and morality apply only to living, conscious human beings.

As a side note: The field of AI has been around for a long time now, and while there certainly have been some legitimate discoveries and advancements that have come out of it over the decades (almost as a byproduct of the gargantuan amount of research done in it), the output of real, usable knowledge has been astonishingly small. There have been virtually no important breakthroughs of any kind, not even modest ones. It's a good, albeit sad example of what happens when a group of very earnest scientific-minded people — highly intelligent professionals with gobs of money, virtually unlimited access to the most lavish facilities and resources, a ton of time, and a high degree of respect from their peers and the world at large — set out on a task with faulty, fundamentally flawed philosophical premises at its root. It's a really quite tragic illustration of the foundational role that philosophy plays in any scientific endeavor, and in life — and why one cannot ever ignore, evade or discard philosophical principles and expect to achieve any degree of success.

Edited by Kevin Delaney

It has what is, to all meaningful intents and purposes, a human mind. It is not the same as a human mind; it does not work quite the same, in that it has some subtle differences and its nature is clearly different. But for the purposes of the story it can be viewed as the same.

And yes, this robot does have awareness and volition; that is the point. And yes, that is perhaps where it starts to 'stretch credulity' (at least it's not a warp drive or anything like that!).

Look at Terry Goodkind's work. You might not be a fan of it, but it uses magic as a means of expressing the theme of the series and each book. Magic is totally unrealistic, yet it works in Terry's Sword of Truth series. It is a great abstraction, yes, and so is a robot with an approximation of the human mind, but that does not mean it cannot work or that it should not be attempted in a fictional piece of literary art. Abstraction is, after all, a key feature of art.

While it is a constructed entity, it is built so that, once activated, it acts to its own ends. That is key to its purpose: to act as a rational being.

While your theory of how an AI story should be done is not without merit, and is perhaps a viable alternative to my approach, I am clearly not taking it. A key part of my goal as the author is to demonstrate my abstraction of my concept of the perfect man. In this case, it is a bit like a mechanical version of John Galt.

Ultimately, as you demonstrate and say explicitly, science fiction is not something you are overly comfortable with, at least not at the high level of abstraction I am implementing. So I guess that as long as you think that, you will probably never really be a big fan of this work anyway. Still, your well-reasoned commentary is appreciated, and has helped me clarify a few points. Thank you.

