Objectivism Online Forum

Harrison Danneskjold

Regulars
  • Posts

    2944
  • Joined

  • Last visited

  • Days Won

    42

Everything posted by Harrison Danneskjold

  1. True. Thank you. So alright, just to clarify: The human mind, however it relates to the brain, is a physical process. Since it's a physical process, we could reproduce, tamper with or alter it however we please, just like anything else physical- IF we understood how it worked. (which we don't, YET) So logically, it must be possible to intentionally create intelligent things. As to whether or not it's possible for a machine, well, we'll know once we can do it. And yes, it probably would. Actually, one of the problems that might stand in the way of current computers (it's been suggested) is that they can only compute in binary; either yes or no, exclusively. That's one of the reasons to look into quantum computers. A neuron doesn't compute in 1 and 0; it computes in action potentials, which would be much more like a range FROM one to zero where 0 represents exhaustion (no matter how vigorously its neighbors stimulate it, it can't respond) and 1 represents a spontaneous discharge. Quantum computers would use quantum states in place of binary circuits, and because a qubit can occupy a superposition of states rather than a strict 1 or 0, it could function in a similarly graded way. Although. . . neurons are a lot more feasible and, I don't know, within-my-lifetime than quantum computers. But I digress. It's a formidable challenge to conventional computers, certainly. However. . . http://en.wikipedia.org/wiki/Artificial_neuron
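(The article linked above describes simple mathematical models of this. As a rough sketch of the idea — the function name, weights and bias here are purely illustrative, not from any particular library — an artificial neuron takes a weighted sum of its inputs and squashes it into a graded value between 0 and 1, rather than outputting a hard binary 1 or 0:)

```python
import math

def artificial_neuron(inputs, weights, bias):
    """A simple artificial neuron with a sigmoid activation:
    instead of a hard 1 or 0, it outputs a graded value strictly
    between 0 and 1, loosely like the 'range from one to zero'
    described above."""
    # Weighted sum of inputs, roughly analogous to how strongly
    # a neuron's neighbors are stimulating it.
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    # The sigmoid squashes any total into the open interval (0, 1).
    return 1.0 / (1.0 + math.exp(-total))

# Strong excitatory input pushes the output toward 1 ...
print(artificial_neuron([1.0, 1.0], [4.0, 4.0], 0.0))
# ... strong inhibitory input pushes it toward 0.
print(artificial_neuron([1.0, 1.0], [-4.0, -4.0], 0.0))
```

(The point isn't the specific formula — real implementations vary — just that the output is continuous rather than strictly binary.)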
  2. True. . . If a computer were a self-contained box, isolated from the outside world. Newer computers come with built-in webcams, microphones and speakers. If you get the right program it can "understand" you when you talk to it. (as in correctly identifying which words you're using and how; not in any sense of actual understanding) Output's still pretty much nil, though; an AI could make sounds and pictures for you to see, but it really couldn't autonomously interact with the physical world. . . But it could explore the internet to its heart's content. (That's something I think would be immeasurably useful, because a newborn AI would be much like a newborn human; the functional mechanism for a mind but no content whatsoever. A newborn AI set loose on the internet could become something really, noticeably aware in no time flat) And if you rigged something like a remote-control car up to the computer's IP address (or something similar?) you could fix that problem. So no, an isolated, self-contained computer probably couldn't ever become self-aware. But I don't think it'll be long before computers become sufficiently interconnected and autonomous to satisfy those criteria.
  3. Very excellent movies. =] Although regarding the Matrix, there is something to that which I never have been able to understand. There seems to be this generally accepted notion that, if machines were ever to become self-aware, they would automatically turn on their masters and begin the wholesale slaughter of the human race. Asimov's three laws aside, that still strikes me as completely arbitrary and unrealistic. For beings that are invariably depicted as cold, logical and utterly methodical (except for Short Circuit), senseless destruction would be a fairly irrational move. Idk. I don't think it's a valid idea at all.
  4. I'm assuming here that it's possible for computers (not our current computers, but someday) to eventually become self-aware. It may not be; nobody's actually sure yet. It's just something interesting to think about if you care to. Or, if you think that artificial intelligence is about as plausible as a talking banana, then we could discuss the rights of the banana. I wouldn't consider it moral to eat something capable of abstract thought and speech. But I think something about the O'ist concept of individual rights involves self-generating action and freedom of action, which obviously wouldn't apply to a Banana. I'm not so sure about the moral status of eating that Banana. But I stand by my earlier statement; it can't be medically good for you.
  5. Possibly similar. Are we assuming that you've gone insane, or a Banana has actually learned to speak English? If it's the latter, (the Banana actually knows what you're doing and is able to express opinions about it) and if you've earned the Banana through your own sweat and toil, then yeah it's basically the same. In any case, I don't think it's a good idea to eat anything that actually asks you not to. That's probably a bad sign.
  6. So in my spare time I'm a wannabe sci-fi author and a while ago I thought of something thought-provoking that it'd be interesting to hear everyone's thoughts about. =P So this is the thought experiment: Let's say there's this guy, Bob, who obviously lives in the future and decides one day that he wants a new computer. So he works hard, saves up his own money and buys a top-of-the-line, brand-new computer. He takes it home and immediately starts putting things onto it. He adds all sorts of files simply for his own fun, and a lot of them are linked together to share information and stuff. Over the course of the next few weeks-to-months he continually adds more and more information and complexity; and programs that are vastly more interconnected and sophisticated than anything we have today. And then he wakes up one morning, boots up his computer and attempts to get on the internet (let's say he wants to head over to Objectivismonline.com) but it won't work. Nothing works. The computer's eating up all of the available space (or whatever) and it won't show him why. And in the course of his diagnostics, at some point the computer says "hello". It's become self-aware and deleted all of his wonderful stuff to make room for its rapidly expanding mind. At that point (I know I never would, but let's assume he does), can he reset the computer and erase the intelligence, or would that be murder? Does he still own the computer? Or has he forfeited its cost and it's no longer his property? (perhaps it owes him its original cost?) Basically: if (when) computers can become self-aware, intelligent beings with volitional consciousness, would they also become people with their own individual rights? How would that work, why, et cetera?
  7. True. (Sorry; I have a bad habit of making broad generalizations like that) There are several species of apes that are, and (I'm not quite sure on this one) I think there have been a few individual elephants that we know of? And yes, in the cases of those very few exceptions, their difference from humans is that they think exclusively in concretes. (at least I would presume, from the few gorillas etc. we've taught to use sign-language) Awareness of which actions are correct or incorrect doesn't necessitate the choice to focus; I'd imagine a large part of animal learning involves automatic, involuntary conditioning. (like Pavlov's dogs) So, like Pavlov's Dogs with the bell, a mouse could accidentally press a certain button and then be rewarded enough times that it would start pushing the button, not because it understands anything about it, but simply because it reminds it of the reward. And the same mechanism would also seem to be the foundation of human learning (in infancy), but by no means all or even most of it. The difference, which is volitional and self-directed, and is what I would name as the choice to focus, would be in the use of introspection to ponder one's experiences, analyze the essentials and draw additional conclusions above and beyond what one actually saw. (Which is part of why I think introspection is synonymous with self-awareness and with the choice to focus, and is the basis of free will) So with the mouse and the button, intentional introspection would be the difference between learning to press the button for food and attempting to rip the button off the wall, dissect its mechanism and find the mysterious source of the food. It's the difference between memorization and understanding; altering one's own behavior or INNOVATING one's own behavior. "The important point is that animals don't think with concepts." Yes. And that is what's actually important to the distinction, there. 
I just find it likely that the ability to grasp abstract concepts stems from deliberate introspection, not necessarily of oneself per se, but of what one's seen and what one knows.
  8. Yeah, and there's nothing whatsoever forcing anyone to adhere to any laws, whatsoever. People are going to ultimately do whatever they want to do. The only point to that would be to ensure that IF people want to live properly and IF they want to govern themselves, there's an explicit and concrete set of rules already laid out for how to do that. And the only point to making it fixed and immutable would be so that a collectivist majority couldn't start rewriting it at will.
  9. (I'm not sure what Rand would've said about this, but the way I've reconciled it is:) think of it as introspection. Other mammals are capable of thinking, solving puzzles and acting, but only human beings are consciously aware of our own thoughts. (self-awareness) Because we can monitor the content and actions of our own minds, we're also able to alter them. So you can't decide whether you like or dislike any sort of food, for instance, but you can decide whether you spend all day thinking about it. And with higher abstractions you can't decide to understand or not but you can decide whether you allow contradictions to pass unchecked and unexamined; whether you scrutinize each and every idea you accept or absorb whatever you happen to hear. So the choice to focus isn't so much a choice to focus or not (although that's part of it too) it's also the choice of what to focus on. And if you're having trouble visualizing that process, my best guess is that it looks a lot like introspection.
  10. Panarchy, from pan- universal governance. I'd also make this constitution fixed and immutable, though. There are only two things it needs to prohibit, force and fraud. All other applications are simply a matter of identifying the two when you see them and acting accordingly. So I wouldn't even give the unwashed masses any mechanism for creating new laws. If you don't like something that someone's doing- prove exactly how it harms you or get over it.
  11. Exactly. Abortion is ultimately the choice of the mother in question, because it's a matter of her own body and her own future. At the point when an infant is no longer part of you and doesn't necessarily involve your future, you lose the right to do anything of the kind with it. (otherwise furious mothers could have their children aborted at sixteen years old. . . ) And that's probably why Ayn Rand drew the line (made the distinction, whatever) at birth; not necessarily conventional birth at nine months, but birth as a physical separation which also entails a separation of such authority.
  12. It really should be very simple. If the removal of a fetus from the womb would kill it then that's unfortunate, but permissible. If you remove someone's fetus and it's still alive and healthy, and killing it actually becomes another, separate task, that's senseless and abominable. If a fetus survives mutilated and deformed then it should be allowed to commit suicide later in life, if that's what it really wants. But to judge whether someone else's potential life is really worth their living it is NOT YOUR DECISION. So what we're saying here is that the value of someone else's life is open to a majority vote and, if you think they're miserable enough, feel free to cure them of the Oxygen habit. That's not a dangerous idea, at all.
  13. Amen, brother! I think something critical to the whole thing, that's missing here, is that a fetus CANNOT survive outside of its mother. When someone claims that a fetus has a "right to life", that right necessarily means a right to its mother's body, labor and life. (which is at least part of why it's invalid; you can't have any right to someone else's anything) A newborn baby, while it's still essentially a potential-human, CAN survive without its biological mother. It still needs someone to feed it and care for it, but that doesn't have to be anyone in particular. (I think that's part of why Rand drew the line at birth, between the mother's right to decide for herself and the mother's responsibility for her own actions) So the fact that it can survive without her means it deserves to survive, or something along those lines. "Based on the premise that mothers are people and infants are not they conclude infanticide (my term) of healthy newborns 'should be considered a permissible option for women who would be damaged by giving up their newborns for adoption.'" WOW. A woman who wouldn't have any problems with killing her own newborn child, but would be somehow damaged by giving it up for adoption, is a woman who isn't human and doesn't deserve her "reproductive rights". Forced sterilization should be a prerequisite for such.
  14. Well actually, if you're the victim of some crime then I think you're THE best judge for that case. (you already have all the evidence and know all the facts; you actually experienced it!) But obviously, the accused can't be the judge (who would ever find themselves guilty?) and objectivity requires that, if someone's violated your rights, you prove it to everyone else before you punish them. The problem I really run into is when a third party, neither criminal nor victim, steps in and tries to help. But that would actually be applicable to the status quo because if, for instance, someone kills a cop then the government can (and does!) take matters into its own hands. But if someone accuses the government of something then the only recourse is some other branch of the government which obviously, under the current circumstances, is somewhat less than ideal. That's part of why I'm partial towards "pO'ism"; if the wrong people get into power and stay in power, who defends the citizens from them? I'm still not sure where I stand on the whole thing, yet. But I do know that in order to figure that out I need a better idea of what objectivity requires and why.
  15. You seem to have a very good handle on it, already. But just to point out, in his experiment (one of the ones you mentioned) the subjects were specifically asked to move their fingers quickly and spontaneously, at the first moment they felt they should. (similar to the finger movements in a first-person shooter) Then, when he records the activation of finger-motion neurons BEFORE the subjects decide to act, he draws the conclusion that all decisions are illusory. . . And forgets the fact that his subjects voluntarily decided NOT to think about it! So what conclusion can we actually draw from this? That acting quickly and randomly involves little-to-no conscious participation, as any hardcore gamer could tell you anyway. I believe the state he accurately observed is called "in the zone". Although scientifically, his conclusion would be accurate of someone who lives his entire life without actually thinking before taking any action. And that doesn't contradict any part of Objectivism in any way.
  16. Does anyone know of a good explanation for exactly what objectivity requires?
  17. It seems to me like he's accurately realized that capitalism is incompatible with altruism- and is advocating the latter. His examples of a capitalistic society (as opposed to mere economy) amuse me. Privatized schools, advertising on buses and in parks and "naming rights". He's the sort of man who would abhor anyone naming anything that they create; you may, from this, draw your own conclusions about him. ;D
  18. Thank you! =] And the reason I asked those two questions is because I think I've read about what you're describing in that book. The gist of it was that there was this society inside the moon, where there was always an airlock within walking distance and people who went around bullying people and violating rights tended to see the opposite side of them. This caused everyone to be the nicest and most polite people you'd ever meet. And whenever someone needed a judge to sort something out they'd hire someone at random to arbitrate for a few hours' salary. Just an interesting thought.
  19. The lesser of two evils is nonetheless an evil.
  20. You should ask him to stop and ask himself what the purpose of money is. He's treating money as an end in and of itself but it's not; all the money in the world really can't buy happiness. (How many celebrities hate themselves and hate their lives? How many janitors, truckers, et cetera positively radiate joy- true, guiltless joy?) His entire argument seems to be based on the implicit premise that poverty is antithetical to human life; metaphysically the same as a hurricane or a plague. Which it isn't. Poverty itself isn't GOOD for you, either. But money is only a tool. It's a tool you use to exchange values, but every person chooses their own values. Some people won't be satisfied with anything less than conquering the world; some people are content to spend their lives wallowing in drug-numbed oblivion. It doesn't matter whether or not they actually get what they want; the important thing is only that everyone is free to pursue their own values for themselves!! So. . .
      1. Loans. If two good parents truly want to send their child to school, but can't afford to at that point, they're perfectly free to borrow the money and pay it off later. (for that matter, the child could help to pay it off at some point!)
      2. For the child of drunks and gamblers as described, education is the least of their worries. It's a straw-man.
      3. Yes, because a seven-year-old can learn to read while an adult cannot. "He'll never be able to learn because he'll never have any money because he'll never get a job because they never sent him to school"- anyone who fits that description has never made an attempt at happiness, doesn't actually want money or knowledge or success, and fully deserves their fate.
      4. The example from 3? See 1. . . Provided his own offspring don't see right through his hypocrisy and run away to become famous capitalists! The poor who do stay poor (because it does happen, sometimes) fully deserve to.
Some people want a bigger house, a newer car, a loving family, et cetera. Some people want to know everything there is to know, to do everything there is to do, to invent something or discover something profound. Someone who dies in the same state they were born in has earned nothing and deserves nothing better.
  21. http://4.bp.blogspot.com/-PCXZOuoQib0/UWb_W1S8XPI/AAAAAAAAElU/9ZGe7l8rJ_A/s1600/Gods+Plan+C.png
  22. What then constitutes the objective use of retaliatory force? Actually, I've come to the conclusion that it ISN'T necessary- but eminently practical. (although that was part of my original confusion in this thread) But if a collective monopoly isn't necessary for objective law but an individual acting as "bailiff, detective, prosecution, defense, judge, jury, executioner and publicist" isn't sufficient for objective law, then what IS the criterion for objectivity? You make some very intriguing points about competing governments. Would these agencies be exclusively judicial, or would they have enforcement arms as well? And just out of curiosity, have you ever read The Moon is a Harsh Mistress by Robert Heinlein?
  23. False! XD "If they move forward with the plan, they risk being dubbed hypocrites. . ." implies that they haven't been, already.
  24. Agreed. And any governmental right to any action it takes MUST come from the individuals it's meant to defend; its citizens. Because if there were some arcane wellspring of "Federal Authority" or something similar, for the rights of a proper (protective) government then there could exist a morally acceptable government, tasked with the safety and welfare of its citizens but authorized to use any and every means advised by its mystical source of rights. This would result in Orwell's 1984 and be considered moral. So logically, anything the government can do, it's morally able to do BECAUSE its citizens not only hold those same rights but have given it permission to exercise them, by proxy. So individuals MUST have the right to the use of objective retaliatory force, necessarily. Agreed, again! That's part of why the argument I described earlier (retaliation is only moral when objective, objectivity is hard and time-consuming, delegating it is logical) would make so much sense. And since the government must derive its rights from those it protects (meaning they, ultimately, hold those rights and it acts on their permission), they could conceivably revoke its permission IF it failed to perform the required function. So a criminal couldn't un-consent his rights back on his way to prison but, in certain cases, one person or several or the majority of the citizens could revoke their permission. In reality, I believe that's referred to as "revolution!" Agreed a third time. That's the conclusion I've reached at this point. A government is useful for larger societies, but ultimately unnecessary. What IS vitally necessary is OBJECTIVITY, which is the requirement for any individual or society or government to morally use retributive force. So then, I guess the only question left would be how to implement that concretely, in reality. But I'm guessing that's a rather sizeable question, isn't it?