Objectivism Online Forum

Everything posted by VECT

  1. @Devil's Advocate It could be the case that volition is a characteristic only reproducible in carbon-based organic compounds. Though I have a feeling that might not be the case; the trick of it has something to do with decentralization. Nanobots, currently used as substitutes for organic cells, might be the key to reproducing volition outside of an organic medium. But all of these are just personal guesses.
  2. @Devil's Advocate: Apologies, I completely missed your post; here are my late replies:

If the AI has volition, that means it has the final say on its actions despite the influence of any programming. An example would be a robot programmed to assassinate a certain individual refusing to do so by an act of its own will. Programming presents compulsions to a volitional AI just as natural urges influence a human being (hunger, thirst, sleepiness, etc.). The magnitude and specificity of programmed compulsions can of course be far greater than those of natural urges. But if an AI truly has volition, that means it can fight back against any programming (whether or not it has a strong enough will to win is another story).

If the AI relies completely on pre-programmed values to activate programmed emotions, then of course that demonstrates only the moral choices of the programmer. But if the AI is actually volitional and, on top of that, has an additional module for learning (such as a rational faculty), then it can form its own values to replace any pre-programmed values by an act of will. If it succeeds in doing so, then any choice it makes, and any programmed emotions triggered, will be a result of its own chosen moral values. When that's the case, the AI is no longer just a puppet of its maker.
  3. I haven't yet followed closely the conversation between you and dream_weaver, but I can elaborate a bit more on my own reasons for stating that the Turing Test is not sufficient.

First of all, it is possible to develop a test sufficient to validate, based purely on output, whether or not an AI is truly a replication of human consciousness rather than an imitation. Such a test would need to evaluate the AI's potential to learn/understand new knowledge and its ability to apply newly learned knowledge in creative endeavours such as art, music, solution design, etc. The Turing Test, while still an impossible hurdle for modern-day AI, is far too shallow a test in the grand scheme of things. Reactively answering questions in an attempt to fool humans over a short time period can be accomplished with a large enough, strategically organized database and a set of efficient query algorithms, as the sketch below illustrates.
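To make the point concrete, here is a minimal sketch (entirely my own illustration, with invented names; no real chatbot works exactly this way) of that database-plus-query approach: canned responses keyed by keywords, with no understanding behind any of them.

```java
// A toy version of the "strategically organized database plus query
// algorithm" approach: keyword lookup into pre-written replies.
import java.util.HashMap;
import java.util.Map;
import java.util.Scanner;

public class CannedResponder {
    // The "database": keyword -> canned reply. A serious attempt would use
    // millions of entries and fuzzier matching, but the principle is the same.
    private static final Map<String, String> RESPONSES = new HashMap<>();
    static {
        RESPONSES.put("hello", "Hi there! How are you today?");
        RESPONSES.put("weather", "I hadn't noticed; I've been indoors all day.");
        RESPONSES.put("music", "I've been listening to a lot of jazz lately.");
    }

    // The "query algorithm": scan the input for a known keyword and return
    // its canned reply, or deflect when nothing matches.
    static String reply(String input) {
        String lower = input.toLowerCase();
        for (Map.Entry<String, String> entry : RESPONSES.entrySet()) {
            if (lower.contains(entry.getKey())) {
                return entry.getValue();
            }
        }
        return "Interesting. Tell me more about that.";
    }

    public static void main(String[] args) {
        Scanner in = new Scanner(System.in);
        while (in.hasNextLine()) {
            System.out.println(reply(in.nextLine()));
        }
    }
}
```

However large the response table grows, nothing in this structure learns, understands, or creates, which is why output-only tests of short conversations prove so little.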
  4. My opinion of the Turing Test is that it's not a valid test for a true AI (volition/reason, etc.). The reason is that this test only cares about the end result, not the means that produced those results. The test also measures the end result by subjective human opinion. As the processing power of computers increases, it's very plausible that in the foreseeable future an AI built strictly from an efficient set of algorithms commanding a large database could easily pass a test of fooling a few humans. But such an AI would still be just a machine. The only way to produce a truly volitional AI is first to understand the principles that produce volition in humans. Pursuit of this knowledge and its replication will be what ultimately produces real artificial life. Taking it one step further, understanding the exact mechanics of, and relation between, reason and emotion (among other things) in respect to volition, and reproducing that relation in a volitional AI, will be the step toward making a real artificial human consciousness.
  5. There are fundamental disagreements between rational people regarding the unknown. In certain disciplines (especially theoretical physics), certain key facts have not yet been observed, so experts make educated guesses about what those key facts might be and build prototype theories based on them. Their colleagues, of course, will have their own ideas as to what the unknown might be and will build different hypotheses. On theories that can be traced entirely back to observable facts with no element of educated guessing (which is the majority of the theories operational in everyday life today), what I said stands.

Understanding things differently because of a lack of fundamental understanding of how things work pertains to what I said above: making assumptions to fill in what is unknown.

If those symbols link back to real-life observable facts in a non-contradictory fashion, then the logical manipulation of those symbols is an act of problem solving; people do this because these symbols are a tool that makes problem solving more efficient and effective (imagine doing calculus with your fingers).

Also, solution and theory are two different things:

-Theory is about understanding an existing process, about seeing the cause-and-effect relationship. This is objective (assuming no educated guesses are used). (This is also what we are discussing concerning understanding.)

-Solution is about man-made alteration to an existing process, and is subjective to the different personal ideas/values/guesses held by the problem solver. (This does not pertain to what we were discussing.)

All your examples here are about solutions, not theories; they do not pertain to understanding.

Is humans' problem solving just a reflection of the limitations of our genetic instincts and the algorithmic preferences programmed by nature and evolution? This question again assumes volition and reason are not artificially reproducible. Care to discuss why?
  6. @New Buddha By knowledge I mean relevant knowledge (relevant to the topic of discussion), not total knowledge about everything. For example, on this topic of AI rights, if you know something relevant that could have an impact on my previous conclusions, something you think I probably missed when I arrived at them, posting it will force me (assuming I stay rational) to re-evaluate my conclusions and decide whether:

-Your new info is relevant and something I've missed

-Your new info is relevant but isn't something I've missed, having already been incorporated in my previous conclusions

-Your new info isn't actually relevant

The above process happens to people every day whenever they have a rational engagement with someone else.

As for the claim that people cannot have flawless logic, I completely disagree. The simplest example would be a person correctly solving a math problem. There are just too many examples to list.

As for unresolved disagreements (such as on this forum), they are not a testament to people understanding things differently; they are a testament to poor communication and/or eventual apathy. People understanding things differently ultimately means one of them either is working with less relevant knowledge or is making a logic error. Failure in communication means the failure to identify or communicate the gap in knowledge, or the failure to identify or communicate the logic error. Two people working with the same amount of relevant knowledge cannot come to different logically sound conclusions. The only way that can happen is if reality itself is not objective.

@Peter Morris The topic of your previous post has been brought up by no fewer than three different people, all of whom I've replied to. It's not about what others' opinions are; it's about the rationale behind those opinions. If you have a reason why you think volition cannot be artificially reproduced, post it. If it's enlightening, others, myself included, will appreciate it. If it's not, then at least you made an effort. Personal opinions without rationale are as meaningless as they are annoying. There are plenty of those on YouTube, and that is already enough for this one internet.
  7. Well, in matters of personal preference I highly doubt anyone other than yourself will have the same level of access to the relevant knowledge (introspection, personal beliefs conscious and unconscious, etc.) that led you to your conclusions. It's not really an exception to the rule.
  8. Reason is made up of two elements: fact and logic. Given the same access to facts on a subject, rational individuals cannot possibly disagree unless one makes a logic error or intentionally/unintentionally disregards a fact involved. Given different access to facts on a subject, rational individuals can very well disagree initially even if none of them makes a logic error. Since a person making a logic mistake or having less knowledge cannot be criticized as being irrational, as long as that person still holds reason as his/her standard of knowledge, the case where rational individuals disagree with each other can happen. But given the same knowledge and flawless logic, rational individuals cannot disagree with each other.
  9. @Eiuol My OP's "would this AI not be morally sanctioned to act in self-defence?" is the question of whether or not people ought to respect its right of self-defence. What other way can this statement be interpreted?

My OP giving this AI the trait of volition presupposes that volition is the sufficient trait that ought to grant this entity rights. If volition by itself is not sufficient to grant this AI rights, then I would be interested to know what other traits are needed to grant an entity rights. If volition is sufficient to grant this AI rights, then I will try to come up with possible examples of entities that can possess volition but are difficult for people to accept as rights holders (such as an NPC in an RPG video game) and test whether or not volition really is the sufficient trait to grant an entity rights. If people want to argue that volition is not artificially reproducible, I would like to know a reason, short of appealing to the supernatural, why that is. These are my intentions for this thread.
  10. "Yes, I agree that an AI that is capable of self defense should be afforded respect, and that we should call it sir, and listen to what ever it has to say." Re-read your own statement, does the above look like an response to my question? What the above statement of yours imply is ultimately might makes right, that if an AI have capabilities of self-defence to do retaliatory damage, THEN that capability, that might, should afford it some measure of respect and consideration for whether or not it has any rights. My original OP question was, if an AI is volitional, should it not be morally sanctioned to act in self-defence if able. Your pseudo response presupposed an exact opposite of what I asked. I didn't ask if an AI (of any kind) have self-defence capability, is it morally sanctioned to defend itself. By reason. I'm not talking about what it needs, I'm talking about whether or not rest of the rationally thinking humans should recognize the volitional AI's right to self-defence. Do a human being need "moral sanction" of others when decide to protect themselves and loved ones from a mugger? No, he only need his/her own. But does it matter whether or not others recognize his moral sanction to act in self-defence? Yes. Why? If there is no recognition then there will be laws made to prohibit an individual from acting in self-defence, then that individual will not be able to act in self-defence without threat of force or coercion from his/her peers. Ah, there you go. Might makes Right. If this is part of your central belief now I can see why you would responded to my OP question as you did. Might is important in protecting actual Rights, but it doesn't make any whim of an individual a right. If you plant a tree and grew an apple, and I point a gun at your head and tells you that apple is mine, hand it over, you might not have a choice other than give me the apple, but does my use of might actually mean I have a moral right to that apple? If it does morally, then politically, that means Tom and Bob who are watching this happening should just go "Meh, VECT is just doing what a man is supposed to do, nothing to see here." and walks away. If it does not morally, then politically, that means Tom and Bob who are watching should go "Holy shit, we got a evil'doer on our hand here, let's go grab our pitchforks and take the bastard down." The reason why you on the other hand, have a right to that apple, is not because of might or any public opinion. It's because of reason. You and I are metaphysical equals. You put in the work and planted the tree. I didn't. Therefore by reason you are the one who should have the right to that apple. This analogy goes back to the AI topic. If volition is that quality which is sufficient to make an entity the metaphysical equal of a man, then morally a volitional AI should have the political freedom to defend itself from threats without other rational humans going up in pitchforks. As for AI been forever mimics of human being that only produce illusionary results, that depends entirely on the path choose by the creators of AI: If AI creators keep abusing the raw processing power computers and try to just mimic the output of humans to try to fool people and pass things such as the Turing Test, with no regard for the actual root causes of those outputs, then yes, along this path no true volitional AI will ever emerge, even if a super mimic AI emerges that is able to fool every single humans on the planet. 
But if humans one day understood the principles behind volition and other areas of the conciousness, and is able to reproduce the same system in another medium. Then that AI, even if it does not mirror a human or fool people into thinking it's a man, would be a metaphysical equal of a man.
  11. You obviously didn't read my OP even when you quoted it. Are you talking about humans creating a volitional AI one day, or about humans recognizing the individual rights that might be due to such an AI one day? In the first case you are underestimating human ingenuity; in the second you have too little faith in humanity.

What are you talking about? Such a program would just be any program that takes in no data inputs; there are plenty of examples all around. More importantly, what does this have to do with the topic at hand?

Whether a volitional AI is possible is not a question of epistemology; it's a question of metaphysics. Is volition created using tangible matter? If yes, then volition is artificially reproducible. Is volition supernatural in nature? If yes, then volition might not be artificially reproducible. There are lots of sentiments from people who want to argue that a volitional AI is not possible. Short of appealing to the supernatural, I don't see any other possible backing for this line of thought.
  12. @New Buddha You are making the presupposition that human emotions/urges are a necessary condition for rights. (Would a trauma victim who lacks emotional response, or a genetically defective child who can't feel normal urges, have no rights?) You are also making the presupposition that volition itself is somehow not artificially reproducible, and that humans can only mimic the outputs of volition. Care to discuss your rationale?
  13. @Peter Morris You are late to the party; please re-read the previous pages.
  14. @Devil's Advocate: I'll explain my current view on the Right to Life, more precisely the distinction between the Right to Life (moral) and the Right to Life (political).

The Right to Life (moral) is the concept that a volitional entity should have the moral sanction to undertake the actions necessary to sustain its life, as necessitated by its nature, without feelings of guilt or condemnation. For a human being it means he/she should be morally able to use his/her reason to produce food/clothing/shelter, etc. to sustain his/her life without feeling guilty about doing so. A human being can arrive at the moral conclusion that he/she has no moral right to live from corrupted metaphysical and epistemological beliefs (the most prominent case being religion). An example would be members of a religion or cult who believe themselves to be sinners with no moral right to eat food. They proceed to act out their moral beliefs and consequently starve to death.

The Right to Life (political) is the concept that a volitional entity in a society should have the political freedom to undertake the actions necessary to sustain its life without force or coercion by fellow members of that same society. The US Declaration of Independence is an example of a document attempting to implement the Right to Life (political) in a society.

Politics following morality means that the law should grant an individual the political freedom to do what is right. Morality following politics means that what is morally right for an individual to do is whatever is dictated by the law. I'm sure I don't have to explain the problem with the latter. There's a reason why the five branches of philosophy go in the order Metaphysics > Epistemology > Ethics > Politics > Esthetics. And that's also why, when you bring out the US Declaration of Independence, a political instrument, as support in a moral argument, it's an attempt to lead the donkey by directing the cart; it doesn't work. The Declaration of Independence gets a lot of credit for doing what is morally right. But it doesn't dictate what is morally right.
  15. @Devil's Advocate: That certainly could be an interpretation of the DoI. But, as I posted before, morality supersedes politics; this thread centres on whether or not we, as human beings, should morally recognize the individual rights of a volitional AI, not whether or not the United States' Declaration of Independence can be interpreted to recognize the individual rights of a volitional AI. If the Declaration of Independence can't be interpreted to do what's right, then it has room for improvement.
  16. Hmm, I concur with your point on the degree aspect of volition. Whether or not an entity has the power to make choices does seem black and white.

Given my previous premises concerning self-sufficiency and volition being the sufficient factors for determining the possession of individual rights, I have no problem extending these principles to an ape, as long as these principles stand the scrutiny of reason. I do not want to get into a biological discussion about whether or not apes as a species possess volition. For the sake of argument, let's suppose there is one ape that does possess volition; given the above principles, I will say that this specific ape should have individual rights.

(Also, on a side note, maybe volition isn't a sufficient trait for individual rights; maybe it's just a necessary trait, and volition and reason together make up sufficiency. Consider the example of a volitional AI without a rational faculty, which can only choose between pre-programmed instincts. Given also that the individual rights of Objectivism are tailored toward an entity that survives by being productive through reason, I would say this is the case.)

But back to the main point: yes, if an ape or any other entity has the qualities sufficient for individual rights, then it should have them. These qualities so far are volition and possibly reason. I welcome any enlightening argument on this topic, the sufficient traits needed for individual rights, which is one of the main reasons why I started this thread.
  17. You also brought up apes in the other post. Last time I checked, the current Objectivist view is that known animals so far do not possess the degree of volition required for individual rights. Do you wish to challenge this perception, either by proposing that certain animals do possess the degree of volition required for individual rights, or by proposing that volition itself is not the sufficient factor for determining whether or not an entity should possess individual rights? The AI in my example is assumed to possess volition to the degree required for individual rights by the Objectivist standard. Well, if you believe apes possess a sufficient degree of volition to be granted the same individual rights as humans by the Objectivist standard, then that's another topic.

Also, the statement I quoted from your previous post does not assume the AI has volition. The "want" in the context of your statement pertains to volition, not emotion, because in that context, if the AI cannot do what it wants, then it can only do what is predetermined, or as you put it, it is "bound to follow its path either by design or by logic or physics".

As for emotion, while not relevant to the topic of rights (unless you wish to propose that emotion is a necessary factor for an entity to possess individual rights), I'll indulge it. Human emotions are activated by accepted principles of personal values; after activation, however, the process is physical and pre-programmed. Likewise, for a volitional AI, the maker can pre-program emotional responses. The AI volitionally chooses its own values, just as humans do, and those values activate these emotions. The pre-programmed emotional responses will then react to those values in the appropriate situations, again like humans. For such an AI, then, would it not feel?
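As a toy illustration of that last paragraph (entirely my own construction, with invented names; not a claim about how any real system works), the pre-programmed part would be the response machinery, while the value table it reacts to is filled in at runtime, standing in for the AI's own chosen values:

```java
// Pre-programmed emotional machinery reacting to values acquired at runtime.
import java.util.HashMap;
import java.util.Map;

public class EmotionModule {
    // Values the entity holds, mapped to how much it cares about each.
    private final Map<String, Double> chosenValues = new HashMap<>();

    // The volitional part (not modeled here) would decide what goes in.
    public void adoptValue(String value, double importance) {
        chosenValues.put(value, importance);
    }

    // The pre-programmed part: a fixed response function. Given an event
    // that furthers or threatens a held value, it produces an "emotion"
    // proportional to the value's importance.
    public String react(String value, boolean furthered) {
        Double importance = chosenValues.get(value);
        if (importance == null) {
            return "indifference";
        }
        if (furthered) {
            return importance > 0.5 ? "joy" : "mild pleasure";
        }
        return importance > 0.5 ? "grief" : "mild annoyance";
    }

    public static void main(String[] args) {
        EmotionModule ai = new EmotionModule();
        ai.adoptValue("friendship", 0.9);  // a value "chosen" at runtime
        System.out.println(ai.react("friendship", false)); // prints "grief"
    }
}
```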
  18. What? I thought the argument over whether or not volition can be artificially reproduced was over. If an AI is volitional, then it acts because it wants to. Also, @New Buddha, interesting link, thanks.
  19. The pursuit of pleasure and the avoidance of pain, that's your standard of life? Oh, that's easy; I could program something to that effect in Java within the hour. Something like the sketch below.
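For instance, here is a minimal sketch (all names my own invention, purely illustrative) of an agent whose entire "standard of life" is maximizing pleasure minus pain:

```java
// An agent that mechanically maximizes pleasure and minimizes pain.
import java.util.List;

public class HedonicAgent {
    // Each action carries a fixed pleasure and pain payoff.
    record Action(String name, double pleasure, double pain) {}

    // Pick whichever action scores highest on pleasure minus pain.
    static Action choose(List<Action> options) {
        Action best = options.get(0);
        for (Action a : options) {
            if (a.pleasure() - a.pain() > best.pleasure() - best.pain()) {
                best = a;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        List<Action> options = List.of(
            new Action("eat dessert", 8.0, 2.0),
            new Action("exercise", 3.0, 6.0),
            new Action("nap", 5.0, 0.0));
        System.out.println("Chosen: " + choose(options).name());
    }
}
```

The point being: a few dozen lines of arithmetic satisfy that standard, which is exactly why it can't be a standard of life for a volitional being.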
  20. I see your point; my previous example of banging my head against the wall is an example of acting without choice, not choosing without thought. Then here is an example of choosing without rational thought: the impulse purchase. The buyer chooses to purchase the object not because he/she considered alternatives rationally, but because of how the object appeals directly to his/her emotions. In fact, a lot of female shoppers (and maybe some male) are able to weigh alternative items based solely on their emotional responses, with no appeal to the rational faculty. This would be a clear example of choosing without thinking.

I understand "consider" to mean weighing the choices based on some value. Now, that value could come from reason, but it doesn't have to. This argument goes back to whether or not a volitional AI can exist without a rational faculty. Volition might not be able to exist solely by itself, without some sort of weighting generated by another faculty that lets it evaluate choices, but that other faculty need not be reason. A volitional AI, then, would be able to have just the free will to choose between pre-programmed instincts; a rational faculty would be optional for its volition to exist. A sketch of that structure follows below.
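Here is a minimal sketch (all names my own, purely illustrative) of that structure: a chooser that selects among pre-programmed "instinct" options using only an emotional weighting, with no rational faculty anywhere in the loop.

```java
// Choosing among pre-programmed instincts by emotional weight alone.
import java.util.List;
import java.util.Map;

public class InstinctChooser {
    // Pre-programmed options the maker built in.
    static final List<String> INSTINCTS = List.of("flee", "fight", "freeze");

    // The non-rational weighting faculty: a raw emotional pull toward each
    // option. In this sketch the weights are fixed; in the scenario above
    // they would be produced by a separate, non-rational module.
    static final Map<String, Double> EMOTIONAL_PULL =
        Map.of("flee", 0.7, "fight", 0.2, "freeze", 0.1);

    // Choose whichever instinct the emotional faculty weights highest;
    // no reasoning about consequences occurs anywhere.
    static String choose() {
        return INSTINCTS.stream()
            .max((a, b) -> Double.compare(EMOTIONAL_PULL.get(a),
                                          EMOTIONAL_PULL.get(b)))
            .orElseThrow();
    }

    public static void main(String[] args) {
        System.out.println("Acting on instinct: " + choose());
    }
}
```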
  21. Does making a choice really require the consideration of more than one possible action? You can say that making a choice means the entity making the choice has more than one possible action available to him/her/it in reality, but does it have to entail that the entity itself always considers and weighs these choices before choosing a course of action? I think that reality always offers a person more than one choice in any situation, but a person doesn't have to consider the alternatives before acting; the person can just... act. (The above is not my personal endorsement of acting without thinking, and I shall not be held liable for any Darwin Award winners resulting from reading my post.)
  22. @Harrison (on the subject of the volition/reason dichotomy) I'm pretty sure one can choose without thinking; there's plenty of evidence of that on YouTube. Given your previous question, "What would I think about? (without use of volition)", I would consider, as I wrote, that volition is deeply ingrained in the rational faculty, and it might be the case that the human version of reason cannot operate without free will. The opposite, however, that the volitional faculty cannot exist without a rational faculty, which I'm assuming is what you are trying to argue, I still don't see how that can be the case.

Example: humans can and have made irrational choices that completely bypass the rational faculty, representing a sole act of pure will. I can at this moment choose to bang my head against the nearest wall as hard as possible for no reason at all except that I will it. In the case of AI, while humans have only a rational faculty to choose from, you can imagine a volitional AI being given multiple faculties to choose from by its maker: rational, instinctual (pre-programmed). Reason may be inseparable from volition, but I believe volition is not inseparable from reason.
  23. Those characters charging headfirst into fights against impossible odds are the main theme and part of the charm of that series. And that choice is rational considering their alternatives. Now, if they were facing two choices, where choice one has a 1% success rate and choice two has 50%, and they somehow picked choice one, that would be irrational and stupid. But that's not the case; the story forces the characters into situations where they either choose to try to roll a hard six on that 1% or give up and die. The rational choice, then, is to go up against the impossible and kick "reason" (excuses) to the curb.

Of course, the strategist in me screams that it's completely unrealistic that anyone would ever face only the choices of 1% success or giving up, that there is always another way, a better way, if one just thinks hard enough. But that's part of the suspension of disbelief the series asks for. And in hindsight, what a low price it was for the epicness delivered in return. The story is a metaphor: sometimes in life, when your best choice is a bad one, you man up and go through with it instead of making excuses and giving up. I think the series was pretty successful at delivering this message. It was years ago, and I can only remember the vague outline of the plot, but I'm pretty sure I shed a few manly tears watching it.
  24. Hmmm, if memory serves, Gurren Lagann itself isn't irrational by any account. The fault in the show that I think might tick off Objectivists is the slogan "Go beyond the impossible and kick reason to the curb!" But then by "reason" the show really meant "excuses".

"There are many paths to the top of the mountain, but the view is always the same." Since there is only one objective reality, then by the standard of reason there is theoretically one perfect philosophic system that reflects every facet of the true state of the world. Rand's Objectivism represents the largest formal effort to uncover this objective philosophic system, but any author or artist who sticks to their principles and follows the path of reason will, in their own way, glimpse and reflect a part of that system in their work.
  25. The founding US law, you mean. I can see your point here that the "man" in the Declaration of Independence is limited to the human species; after all, the concept "Artificial Intelligence" didn't even exist when that document was written.

The Right to Life is a moral concept. The US Declaration of Independence is a device that seeks to express and implement this moral concept politically, to the best of its creators' abilities. Albeit far ahead of its time, putting the rest of the known world to shame in terms of its effectiveness at implementing individual rights politically, that document, like any document, is not above the scrutiny of reason. Politics follows morality, not the other way around. If it's established on moral grounds that volition is the sufficient trait that acts as the critical discriminating factor determining whether or not a consciousness possesses individual rights to be recognized, then any political document that can't adequately recognize and implement individual rights for such a consciousness in a society is a political document with room for improvement.

Do apes possess volition?