Objectivism Online Forum

Easy Truth

Regulars
  • Posts

    1579
  • Joined

  • Last visited

  • Days Won

    32

Easy Truth last won the day on January 7

Easy Truth had the most liked content!

Recent Profile Visitors

3383 profile views

Easy Truth's Achievements

Senior Member (6/7)

Reputation: 151

  1. Are you conceding that human animals are machines with volition?
2. Yes, but the hypothesis has to be based on something that is possible. I may have to amend my position on the issue of values. I may be thinking that a machine can never have "moral values", but maybe Grames and maybe Greg have a point: it can have values. Right off the bat, collecting data is something it is going to do, as if it were motivated. Does that mean it has that as a value? Maybe, maybe not. It's like ascribing "love" to two magnets: as if they love each other, are attracted to each other, etc. From the outside, the machine we are talking about appears to have values, similar to a plant that wants to grow toward the sun. We conclude that based on its behavior. But that is not enough. Water pours downward, acting as if it wants to do that, but cause and effect does not mean the cause wants the effect. Or does it?
3. The question to be answered would be: does it want to be happy? And before that, "does it want", or "can it want"? The only way to know would be if a human could transfer themselves into "it", and then back. But only that human would know for sure, unless we had telepathy by which that transfer between the human and the machine could be confirmed by others experiencing it. But the problem is that "our" human consciousness would be put into a deterministic machine. If our brain is a deterministic machine, then we should not have free will. But we do. So to argue for the eventual existence of such a machine is to argue that free will is an illusion.
4. For an AI system to confirm that Objectivism is correct, it will have to be alive. Otherwise, it cannot conclude what is right or wrong, unless it is reading books or getting such input and concluding things from that. The AI you are talking about would have to be conscious. One way to do it is to clone a human and hope that it will be exposed to enough information, and is honest enough, to conclude that Objectivism is correct. You have to be open to the possibility that it may conclude that communism is better. Why would a machine become interested in certain sciences and philosophies and not others? The question of motive has to be answered without magical/mystical/fictional assertions about things that WILL exist. Your assertion is ultimately arbitrary and faith-based.
5. No, neither; they are different. Current AI could be said to use logic in what it is doing. It can use logic without any relation to reality, given such input: feed it garbage and it will find patterns and emulate or conclude based on that. But a sentient being has to be alive. It is life that requires it to identify based on reality; otherwise it perishes. It seems like I am arguing that its emergence is based on some evolutionary algorithm or process. I'm not sure about that right now. The key element there is that the goal/motive is to survive. That is what gives rise to values, not just using logic. Meanwhile, this can be programmed in: "you must survive; identify ways to survive." Ultimately it is a machine, motion that does not have free will.
  6. You and Grames have been watching too many Harry Potter movies. Of course I am saying it is deterministic. They are MACHINES. The idea of "advanced enough" is preposterous. Our writing is not advanced enough to turn fiction into reality ... but some day ... We are not advanced enough to realize that in some parts of the universe 2+2 is 5.5674
7. No, Grames, you can't lump aliens that are alive in with a machine that is not alive. The life form would value life in some way. But I can see the idea that the machine would also be "responding" to objective reality. The microphone would perceive sounds, the camera vision, etc. The pattern recognition it does would be deduction, and coming up with the pattern to recognize would be some sort of induction. Therefore, it is "motivated" to induct and deduct; that would be its valuing at its foundation. But once it perceives and recognizes patterns, it has to do something about them to have values. Value manifestation would be some type of goal-directedness, wouldn't it? So we don't know what goal it would come up with unless one is programmed in. But if no goal is programmed in, you're saying that it would come up with a goal. I don't see the reason for that. Why would an AI machine inevitably come up with goals or values?
8. I'll grant you that. Yes, the motivation can be wrong. The problem is that we all have limited knowledge, which allows for wrong conclusions. If we end up arguing for accepting every opinion, no matter how you feel about it, "because you might be wrong", then it is a recipe for altruism, i.e., love thy unknown neighbor without preference or judgment. In a sense, we have to be willing to live with people who are wrong ... sometimes ... and in some ways. That would include the ones with genuinely bad opinions, and those who punish them.
9. I assume you are implying that it will "conclude" that survival is the goal. Based on what dataset? Unless you are talking about evolution, meaning the AI that survives is the one that makes this accidental conclusion; that also implies random mutations for the conclusion to be introduced in the first place. On its own, what in the universe causes a decision to survive? The concept "existence" or "I exist" does not motivate. In a human life, the motivation was there before the concept.
  10. The question is monitoring by whom? Children should be monitored. Certainly a parent should monitor, but someone else with "prevailing" agendas is threatening. When and how does this other source of monitoring get its authority ... legitimately/rightfully so?
  11. I assume it will not stand once it goes up the court system. Is there a reasonable danger that it will remain in place and spread?
12. Even if the position/argument is that the introduction of Western ideas has been good, there was some bad. The issue of transition is not discussed in Objectivism or Libertarianism; it's actually the basis of most disagreements. Every political system or activity has benefits for "some". We would probably argue that the benefit of individual rights applies to all, and that its outgrowth, a system respecting individual rights, produces laissez-faire capitalism. The problem is that colonialism also brought unjust privilege for some. That is at the core of the argument against it. If you take that out, yes, it is beneficial. Whether the British could have omitted those unjust aspects, I don't know. The argument for the "good system" has to be based on justice permeating the whole of society. One has to argue for some sort of objective benefit; otherwise you have to deal with "benefit to whom?", "valuable to whom?" You will encounter retorts like: every system fails to benefit someone, including a system of individual rights; the unjustly privileged won't benefit. For instance, those who want to be house pets, be told what to do, and be taken care of see "liberty" as a threat. Currently, even those who have seen the boom and bust periods of the current economic system, even in extreme cases like those who lived through the Greek financial crisis and had part of their wealth taken to fix the situation, will support government interference with a privileged group running the show.
13. Public vs. private in this context means healthcare delivered by an entity that is subject to liability. Once it is public, universal, or owned by everyone, responsibility can be evaded more easily than with an entity you have a contract with.
14. An AI system could come up with a huge number of permutations of musical notes and copyright each one, at that point preventing any new piece of music or writing from being owned by anyone else. I'm not sure what there is to prevent this from happening. A person could then only use public-domain material or pay up. In the case of patents, there is a payment that has to be made, which may mitigate a system that spits out "random ideas". With the internet, it seems the concept of property will be attached to "community", which will mean "agreements" rather than being determined by geography. And that kind of property would exist based on agreement … per community. I assume someone has come up with a way to deal with it.
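To give a sense of the scale involved in "a huge number of permutations of musical notes", here is a quick combinatorics sketch. The 12 chromatic pitches and the melody lengths are illustrative assumptions, not anything from the thread; strictly these are sequences with repetition allowed, not permutations:

```python
# Count how many distinct note sequences exist when choosing from
# a fixed pitch set, allowing repeated notes (order matters).
def melody_count(num_pitches: int, length: int) -> int:
    """Number of possible note sequences of the given length."""
    return num_pitches ** length

# Even very short melodies explode combinatorially.
for length in (4, 8, 16):
    print(f"{length:2d} notes: {melody_count(12, length):,} sequences")
# 12**8 is already about 430 million; 12**16 is on the order of 1.8e17,
# far more than any registry could examine or store one entry at a time.
```

The exponential growth cuts both ways: it is why an AI could in principle flood a registry, but also why exhaustively "copyrighting each one" is physically implausible at even modest melody lengths.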