Objectivism Online Forum

nanite1018

Everything posted by nanite1018

  1. Now, maybe I'm missing something. But I believe there was an essay (I think by Rand) where the issue of the trail was addressed, and that it would be perfectly fine for me to go in and put a toll on it, because nobody owned it. Perhaps I am mistaken, and I can't remember where it was. Maybe you remember something about it? The individual Na'vi did not create that Tree; all of them did, without ever really meaning to. As it stands, they had no real right to it. Now, if you wanted to make the argument that the Tree wasn't the Na'vi's but Eywa's, or whatever the planet-tree-spirit-thing was called, you might have more of an argument. However, I don't know if property rights apply to an entity which is made up of other, independent entities (the trees, presumably, can survive independently). I'm not sure how to even begin to tackle that problem, and I certainly can't say whether the entity, whatever it was, was a conscious rational being with full rights of its own. I doubt it, but that's really sort of up in the air. I agree that we should give individuals their property. But I don't see how the idea applies in this case, because the Na'vi didn't create the tree; it was just there, and any modifications happened a long time ago, through the actions of thousands of them, probably without much planning, and certainly without any belief that anyone owned any of it (so ownership would now be impossible to determine). They cultivated no land, and they had no real homes. It seemed, from what I saw, that the tree was inconsequential, and that everything in it could easily have been moved without any consequences (if they'd listened). I don't see how you own a cave just because you're in it. You didn't make it anything; you're just sitting there, maybe with your rucksack unpacked on the floor.
Well, if I want to turn the cave into a mine, I can tell you to move, because I'm mining here, and a legitimate government gave me permission to in this ungoverned place (i.e., the rights-respecting one on Earth; similar to those of the Europeans who came to America). If you don't move, I'll start to drill while you're still inside, your choice, but I'm still going to be mining here whether you like it or not. And if you shoot at me, then I can shoot back (just as the humans did). I see no reason why you just sitting somewhere somehow makes that area yours. You didn't make it, no one gave it to ya, so it ain't yours.
  2. You need to give an argument for why an illegitimate government has a right to anything. Or why my analysis of a trail created by many over decades is either (a) fundamentally flawed as an analogy, or (b) flawed in some way as to its conclusion: that the trail is no one's property. Now, the Na'vi's tools and personal possessions were theirs, certainly. But the tree itself was no one's, at least to the best of my ability to determine. It may not be their property, but since the Na'vi had no concept of property, and since the Home Tree was not any of theirs and collective property through voluntary pooling of justly acquired property does not apply here, and since their government was illegitimate, the Na'vi have no claim to it either. The Na'vi civilization can survive (and thrive, apparently, from the other examples seen in the movie) without the Home Tree; the humans will not be so lucky without unobtainium. The Na'vi were warned. Perhaps the method was a little violent, but the idea of moving in to mine is not problematic. Now, to my knowledge, the movie does not detail any "treaty" of any kind with the Na'vi. I disagree with the very concept of making a treaty with an illegitimate government. But if you do so, then you are claiming it to be legitimate, and so you are not free to ignore such a treaty when you wish. At best, you can declare an actual war with said "legitimate" government, or however else people actually break treaties (I imagine wars, normally). But to simply ignore it because you retroactively decide that the government you made it with is illegitimate is simply wrong. As for your allusion to Saudi Arabia: you do not have the right to slaughter everyone in the country (they have rights nevertheless). But they have no legitimate government, and as such it would not, on principle, be wrong for us to invade and take over.
We could, for example, fix their laws so that everyone is equal before the law, get rid of its barbaric punishments, support free enterprise (denationalize the oil industry, for example), and end the bans on behaviors (no more burkas, no more bans on alcohol, drugs, etc.). Life in Saudi Arabia would actually be far better if we were to do so. Now, I do not think it would necessarily be worth the cost to the United States to do this, but it would not be wrong on principle, since there is no legitimate government, and we could make it a rights-respecting part of the world. Whether it is moral or immoral depends on whether the benefit to America is worth the cost (probably not). I don't think that process/analogy in particular applies to the Na'vi, but I thought I should respond anyway.
  3. I've been out of this thread for a while, but looking over the last few pages, it seems that Jake Ellison is on the right track, in general. The Na'vi were tribal in nature. Their culture forced people to do things like risk being ripped apart by giant bird-creatures in order to become an adult and be given all that comes with that status. They accepted a collectivist morality and political system, and their civilization was (as evidenced by the above) at least partly "barbaric" in nature. Heck, they even called themselves "the People"; how can that not be a collectivist system? A collectivist system does not respect rights, by its very nature. The principles which allow rights to even exist are not recognized in any form. So, such a system has no moral authority whatsoever, and no right to anything at all. So, in my opinion, in many places on this planet, there is no such thing as a government, merely mob rule (both "mob" as in mobs of people and "the Mob" as in a criminal organization). Venezuela, obviously Somalia, N. Korea, Iran, Saudi Arabia, etc. are examples. From that, the People had no claim, as a collective, to that tree. Individually, they did not recognize the concept of private property, so I don't think one can claim they have it. But more importantly, even assuming this is so, the Home Tree was at best an example of something which had been "produced" as a result of the work of generations of Na'vi without a conception of private property. It is, roughly, akin to a trail created by hundreds of people walking that way over and over, day after day, for weeks, months, years. Now, perhaps I'm wrong, but all of those people together do not have a right to that trail. Nor do any of them individually have any right to the trail. So, in a similar way, the people in the Home Tree have no right to it, because it was produced by none of them individually, and they have no individual claim on any part of it.
So, I'm not really violating their individual rights by telling them to leave. One other important point, is that the back story for this movie (which Cameron has stated in interviews but never bothered to say in the movie) is that humanity's interstellar travel abilities are largely dependent on unobtainium, and the supremely dominant power supply for their civilization is based on unobtainium as well. So, for the humans, getting a supply of unobtainium isn't just about profit, it's about the survival of large numbers of people, and the maintenance of their civilization. (Roughly akin to oil, if it was localized, say, only in Saudi Arabia, but the whole world used it and got 80% of their power from it in some way or another). And the Na'vi could easily have left Home Tree, since the movie depicts thousands of other Na'vi living in all sorts of other places on Pandora, and there were, as the humans said, lots of trees around in any case. So the humans were in a near lifeboat situation, the Na'vi didn't have a right to the Home Tree in the first place, and their leaving would not have meant their destruction. In those conditions, I see nothing particularly wrong (especially since the Na'vi were going to kill them if they'd tried any other way).
  4. The only problem I see in this is the problem of how to deal with secession (is it okay, when is it right, how can you do it, etc.), and in a similar vein, the creation of a government in the first place. For example, not everyone said the Constitution was good with them back when it was ratified, so did they have a right to impose it on those who wanted to stay under the Articles of Confederation? After all, the nature of the state governments changed when transitioning from one to the other, and so can it really be said that the people in the states prior to ratification really "consented" to the actions of the state when it switched? The problem, and possible contradiction, that I see is that the only way I can think of to resolve this is to say "well, if you don't like it, you can move." Well, that's what socialists say. I mean, do you really mean to imply that as long as a nation's borders are open to all goers (and comers) you can change the government and it's fine? If so, then why can't the government undertake other activities too, like welfare (let's assume they modified the Constitution to actually make such an action legal), since if you don't like it, you can leave (and therefore, you are, by staying, agreeing to pay the taxes, and so are really agreeing to a sort of contract)? If you say that that is not right, then how do you defend anything like the switch from the Articles of Confederation to the Constitution (or any other switch of government over any region, no matter how small, such as in secession or treaty)? I don't really have an answer to that question; I was hoping someone here might be able to shed some light on it. It was brought up during a discussion I was having with a so-called anarcho-capitalist, and I said I wasn't sure how I would be able to respond.
  5. Just a quick point of correction: Apollo cost about $150 billion in current dollars ($20-25 billion back then). But I agree with you that it is likely that someone would have gone to the moon (probably more cheaply), maybe not in the 60s, but perhaps in the 70s or 80s. Haha, Google would probably be able to fund such a mission now through advertising dollars alone (how much would Coca-Cola or General Electric pay to have a commercial filmed by the first people to land on the moon in 40+ years!). Just a thought; do you hear me, Eric Schmidt? Setting up the first extraplanetary data center sounds like a good move to me, haha.
  6. So it is not rational to use data concerning the growth of computing power to make predictions about what computers will be capable of in the future and base my actions on that? Video game companies oftentimes make their money on exactly that. Ray Kurzweil has made a lot of money doing it in some of his business ventures. If you have statistics about the growth of computing power, and you look forward and see nothing to block that growth (which takes some knowledge of why it is growing), then you can predict with some confidence that in x years computers will be able to do y calculations per second and have z bytes of memory. Basing your actions on "guesses" based on statistics and trends (provided you understand the reasons underlying those trends) is not irrational. Obviously, saying "you had zero husbands yesterday, and one husband today, so by the end of the month you will have several dozen husbands" (xkcd reference) or "By 2014, Gillette will have either a razor with 14 blades, or a razor with several million blades, depending on whether this is a linear or exponential curve" is absolutely ridiculous (though amusing). Blind extrapolation isn't a good idea. But extrapolation based on trends, coupled with an understanding of what gives rise to those trends, is not irrational; it can help keep you ahead of the curve. Otherwise, I assume you would argue that having money in a 401k is totally stupid, since just because the stock market has historically grown at x%, extrapolating that into the future with any confidence at all would be wrong and irrational.
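The kind of trend-based projection described above can be sketched in a few lines. This is a hedged illustration only: the doubling time, starting value, and horizon are assumptions chosen for the example, not measured data.

```python
# A minimal sketch of trend extrapolation: project a quantity forward
# assuming a fixed doubling time (the "exponential growth" in the post).
# All numbers here are illustrative assumptions, not measurements.

def extrapolate(base, doubling_time_years, years):
    """Project a quantity that doubles every `doubling_time_years` years."""
    return base * 2 ** (years / doubling_time_years)

# Assume (illustratively) performance doubles every 2 years.
# Starting from 1.0 unit of compute, ten years out is 5 doublings:
projection = extrapolate(1.0, 2.0, 10)
print(projection)  # 32.0 (i.e. 2**5 units of compute)
```

The formula is the whole argument: if you know the doubling time and have reason to believe the underlying causes will persist, the projection follows mechanically; if you don't understand the causes, it is exactly the blind extrapolation the post warns against.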
  7. If you can emulate the human brain, you can create a human mind in a computer system (with proper routines for modeling speech, movement, and our various senses, you can actually interact with it in a virtual environment, or in the real world through robotics). So you will have created a human consciousness inside a computer system. You didn't really design the intelligence, just copied it from nature, but you did change the substrate in that case. And since it obviously is not a human (it doesn't have human DNA or a human body in the world that we can interact with directly), we would need to figure out how to interact with this computer-based mind. The rough estimate, a liberal one, is 10^28 calculations per second, which is roughly the capacity it would take to emulate the flow of materials through the cell membranes of the neurons, the interactions at the synapses, etc. Most neuroscientists (based on surveys) do not think we will have to model down to the level of individual molecules, protein folding, or even quantum effects in order to capture the behavior of the brain at the macroscopic level. I trust their opinion. Some suggest it could be significantly lower than 10^28 cps; that's a pretty high estimate. For perspective, that's around 10 trillion times more powerful than the most powerful supercomputer on the planet at this time. My point 4 was evidence for the possibility of uplifting animals, i.e. making animals with a human level of intelligence. We are working to create new intelligent life forms, and we will continue to work on it, likely until we either succeed or we go extinct, whichever comes first.
That is because it would arguably be the greatest achievement in human history, and would quite possibly allow us to do things that are highly desirable and otherwise likely impossible, like eliminating involuntary death (by, for example, scanning someone's brain to sufficient detail and inputting that into our brain emulation system, if such scanning is possible). But pure curiosity is likely to continue to drive people to attempt it until someone succeeds.
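The back-of-envelope arithmetic behind the figures above can be made explicit. The 10^28 cps brain-emulation estimate comes from the post itself; the 10^15 cps "current supercomputer" figure and the 18-month doubling time are assumptions added here for illustration only.

```python
import math

# Hedged back-of-envelope: how long until computers reach a 1e28 cps
# brain-emulation capacity? The 1e15 cps starting point (~petascale
# machine) and the 1.5-year doubling time are illustrative assumptions.

current_cps = 1e15           # assumed present-day supercomputer capacity
target_cps = 1e28            # liberal brain-emulation estimate from the post
doubling_years = 1.5         # assumed doubling time for compute

ratio = target_cps / current_cps        # 1e13: the "10 trillion times" gap
doublings = math.log2(ratio)            # ~43 doublings needed
years = doublings * doubling_years      # ~65 years, i.e. within the century

print(f"gap: {ratio:.0e}, doublings: {doublings:.1f}, years: {years:.0f}")
```

Under these assumptions the gap closes in roughly 65 years, which is consistent with the post's claim that the capacity would be reached "in this century"; change the doubling time and the answer shifts proportionally.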
  8. I have read a number of novels by Stephen Baxter which discuss radically different life forms, such as a species so far in advance of humans that we understand virtually nothing about them or their technology (called the Xeelee), among others. We don't interact diplomatically with them, and in fact go to war against them, but that is a combination of their seeing no reason to interact with us and our irrational aggression against something we don't understand (we lose in the long run, by the way). I'll read your suggestion eventually (I have the "Culture" series by Iain M. Banks to read first). To Jake Ellison: Your criticism seems to be based on your assessment that volitional AI, uplifted animals, etc. are all fictional and there is no reason to think they will ever exist. This is where I disagree. I understand Objectivism is not meant to be science fiction. My position is that Objectivism is applicable even in a "science fiction" context, or any discussion about things like volitional AI (which I see no reason to think is impossible, and many reasons to think will be created in this century). The basic principles do not change. I have stated it has no importance to human life at this point in time, but if we are ever going to have a conversation about such things (whether you think we should or not), Objectivism can still apply, because its basic structure remains applicable in the context I have already described. Objectivists can go on using "man" as their concept for the subject of the philosophy, because it does not change its application in human life, for now. You challenged me to give you reasons to think that such things will occur, keeping in mind the Objectivist definition of reason. Okay: 1. Computing power has been growing exponentially for going on 40 years, and there are numerous technologies being developed to allow it to continue to do so beyond the limits of silicon as a computing substrate. 2.
Models of the brain are getting better and better. 3. Methods of observing the action of the brain are getting exponentially better, and have been doing so for years (helping reason 2). For example, a relatively crude model of one half of a mouse brain in 2008 exhibited patterns of behavior similar to those observed in actual mouse brains. 4. Biotechnology is advancing rapidly, projects for creating truly artificial life are advancing rapidly, and our studies of animals are making significant progress as well (for example, we just learned that there is good reason to think that dolphins are second in intelligence only to humans on this planet, as opposed to apes being in second place, as was previously thought). Those are all identifications of trends in reality, and their underlying causes (funding, number of researchers, advances in technology driven by market demand) are going to continue to exist indefinitely, at least based on current information. I observed them, identified them, and produced a conclusion from them; that is, I used reason to reach the conclusion that the development of volitional AI or uplifted animals is likely to occur some time in the next century (liberal estimates for the computing capacity necessary to emulate the human brain, coupled with modest estimates of computing power advances, place our achieving that capacity in this century). So, there are reasons to think they will exist, and so discussions about them are not fruitless or purposeless.
  9. The idea behind all of this is, I think, that for the purposes of any discussion about future technological advancements, or future events in human history (such as detecting another civilization somewhere in the Universe), we can use the concept I discussed as "person" to describe the sphere of all possible entities that the philosophy of Objectivism applies to (and thereby serve to inform our discussions of the possible consequences of such things, much as it did in part in the "Avatar" thread). It serves no purpose other than that for the time being. It does not affect the content of Objectivism, its principles or virtues or ideas, at all. It changes nothing, in fact, except identifying the class of possible entities which would by their nature have Objectivism as their proper philosophy for living, precisely so that we can hold such discussions. Now, a criticism of this seems to be that you shouldn't talk about the future except if you are discussing things based entirely on the past. So, you can talk about, perhaps, the possibility of major earthquakes in LA (just made that up as an example), but not the creation of a new intelligent, self-aware life form by man. It does not matter, apparently, that there is reason to think such an event will happen eventually, and possibly in some of our lifetimes (I am only 19, after all), and that it would be a world-changing event and would cause all sorts of problems if we had not considered it before. I am being slightly sarcastic in the above, but it's because the position just seems silly to me. Discussion about the future is important, even discussion about speculative things, because if they are possible (not merely "conceivable" but possible as in "we have reason to think they may very well occur"), then we should consider such things. They would obviously affect our lives, and perhaps someone might want to try to bring such an event about as a major goal of theirs.
Simply ignoring them because they haven't happened yet is not a wise choice. At worst, it provides a stimulating conversation, making you exercise your philosophical muscles in a new possible context. At best, it can prepare you for dealing with what will likely be an astonishing future, perhaps even providing an impetus to try to make your speculative future a reality because you think it would be a desirable outcome. Does being an Objectivist require that you never consider anything speculative at all, ever? That is an honest question (not sarcastic), as that has been my interpretation of a number of comments I've read in this thread and others around the board. I understand you should be serious about life, but never imagining anything beyond a plan for getting some practical thing done seems limiting at best, and boring, soul-crushing, and mind-numbing at worst.
  10. Well, your first example is apparently wrong. I looked into praying mantises, since that whole deal never made sense to me. As it turns out, there is strong evidence that that behavior is a result of intrusive study by human beings, not something they usually do in the wild. Plus, I doubt any species of rational beings would be so skittish as to rip off its mate's head when someone walks in the room. Oh, and by the way, these intelligent mantises would by rights be engaging in copulation by consent, so the killing of one member would not even be an initiation of force in that instance (though that proposal still seems absurd at best). As for the "parasitic pods", you could use cows, apes, etc. for such purposes. Or they would use artificial "pods", since any alien race capable of traveling that distance would also easily be able to build such systems. You don't get to kill or initiate force against other rational beings because you need them to reproduce. You do not have an unconditional right to reproduce; you have a right to reproduce only so long as you have the consent of everyone involved. So, sorry, try again, I suppose.
  11. I don't need them to be similar to man, only that they are individual volitional rational creatures/consciousnesses/life-forms. Those are the things which result in the Objectivist ethics and politics. They will have radically different values (say, arsenic, silicon, and electrical power, what have you), and they will pursue radically different goals, but they will be acting in their own rational self-interest and must not, by their nature, initiate force if they wish to live as the entity they are. The virtues in Objectivist ethics are the same; the application is quite radically different. All that is necessary for Objectivism to apply (the principles, not applications to certain individuals) is that the entity: 1. Is alive, i.e. must continue to perform self-sustained, self-generated action in order to maintain its capacity to do so (life must continue living in order to live/be life). 2. Has a conceptual faculty, i.e. it integrates its percepts of reality using reason through the application of logic. 3. Has a volitional capacity, though I think any life form with a human-level capacity for conception and reason will be volitional (the two probably go together, if only because the conceptual faculty creates the concept "I", and volition seems dependent on that; it seems logical to me that it is in fact a sufficient condition for at least limited volitional capacity, the degree of volition increasing with the acuity of the conceptual faculty). These things are not necessarily applicable only to humans. The above describes anything we would ever call a "person" in the sense of a thing with a mind, a "soul" if you will. The above necessitates all of the Objectivist ethical and political principles. It gives you the whole shebang, so to speak. And so any broadly defined "life-form" with a volitional rational faculty has its ethics and politics described by Objectivism. That is an important insight.
It doesn't weaken Objectivism to say that it can automatically handle future technological developments such as machine consciousness or uplifted animals, etc. So long as the above conditions are met, Objectivism holds. That is a statement of the power of Objectivism, as mrocktor said in his last post.
  12. It is a big project, but the exponential growth in computing probably isn't going to be slowing any time soon. We'll get there eventually. Well, I suppose I understand the logical structure of Objectivism as independent of the nature of "man", except for the qualities captured in my concept of "person." If you are a rational, self-aware, volitional life form, it applies to you. So Objectivism, while based on "man" as in "human", is based on the certain qualities man has that make him a person (obviously a concept derived from the concept "man"). Perhaps the best way to explain it is that my understanding of Objectivism makes the ethics and politics simply a deductive consequence of the fact that it is discussing a rational, self-aware, volitional life form. Humans are such life forms (in fact, the model by which we created the concept of such life forms in the first place, but humans do not exhaust all the possibilities which fit under the concept), so Objectivism obviously applies. I don't see anything in the nature of man which is not contained in the above description that influences Objectivist ethics or politics in terms of principles. Their application will be very different, of course, since the context is different, but the virtues of Objectivism and the prohibition of force are a logical requirement arising from the qualities in my idea of "person."
  13. If it is an independent rational entity capable of self-generated, self-sustaining action, then it must necessarily follow a rational egoist ethics, because those are the characteristics of man which lead to that ethics in the philosophy of Objectivism. Now, you can claim that it is impossible for that to occur, but that would mean it would be impossible for us in principle to reproduce by design what nature produced through evolution, and any such claim seems dubious at best. Just because man created it does not mean that it cannot attain independent existence. We do it with children all the time, haha. The claims you put forward were not based on any actual projections of current technological trends, but on pure imagination, and/or were dependent on massive sustained governmental support (something no one has any reason to believe will occur for much of anything). This is not a merely "possible", i.e. conceivable, context, but an imminent and foreseeable one. There are very few situations where this applies, because most extensions of definitions or formations of new concepts occur from an increase in knowledge, and that increase is inherently unpredictable. In this case, we can foresee a high likelihood of such things coming to pass in the future. And, I think, it is more meaningful to talk of "persons" and "rational life forms" than of "man" and "human" when it comes to basic principles of ethics and politics. It even helps clear up categories that are odd cases among humans, such as the mentally retarded, fetuses, and the brain dead. It may (though probably does not, in my opinion) muddy up matters for dolphins and the "higher" apes. But since it is a concept created so as to serve as a category for all entities of any sort which deserve legal rights of any sort (surely an important thing), I think it serves such a role better than an ambiguous and limited concept such as "man" in ethics.
We can easily foresee "man" needing to be dumped for a better, broader concept to define ethical and political principles with. Why not go ahead and do it? Worst case we simply emulate all of those things you just discussed in a computer system. They are physical systems, and with enough computing power it is at least possible to form such an emulation. Best case we really come to understand those systems and create new systems with similar or identical properties for our purposes. It seems that many place certain things beyond human understanding, such as how consciousness arises, or how intelligence arises, etc., even on this board. It is profoundly surprising, since Objectivism is arguably a philosophy which gives man the tools to bring literally everything under his understanding, influence, and control. Betting against human ingenuity, intelligence, and creativity is never a good idea. Especially for an Objectivist. It seems obvious that we will one day do what I described in my previous post (since it does not violate the laws of nature), and given that, why would you not define the terms in your philosophy to be applicable forever, since that is just what the philosophy is supposed to be capable of? The term that best encompasses the nature of man that Objectivism as a philosophy addresses is "person" as I described it or "rational being" as mrocktor described it. "Human" is not what the philosophy is about. That is what its application is about, for humans. But the philosophy applies equally well to any independent rational "living" (in the Objectivist, not biological, definition) entity. The metaphysics is always the same. Regardless of the type of senses you have, the epistemology is the same. The ethics and therefore the politics will be the same (in principle, if not in all their applications, but this is no different than for various people). Objectivism does not change for any entity fitting the above description that may ever exist, including humans. 
So "human" when defined as identical to "homo sapiens sapiens", independent of any future creations or discoveries, is not a good concept to use in defining the philosophy.
  14. While it is legitimate to say that all rational animals are homo sapiens sapiens, and all "men" or "persons" are homo sapiens sapiens and rational animals, this likely will not stay this way for more than another couple of decades. We will likely develop artificial intelligences as sophisticated as humans, or genetically engineer some great ape specimens with significantly larger brains (perhaps even human-esque vocal cords granting them proper speech abilities), etc. This isn't idle speculation; there is every reason to believe this will occur, and in the relatively near future. Considering what ethical and political ramifications this would have is something that has some importance. Now, "rational animal" is a perfectly good definition of "man." But "man" is not necessarily the ideal concept to use in the context of defining a philosophy that will suffice for the foreseeable future. Intelligent beings based on electronics, for example, will not be included in "man" but would, it would seem, be beings with all of the characteristics necessary to require all of the Objectivist virtues for their continued life, a morality of rational egoism, and a politics which simply bans the initiation of force. Now, their values are likely to be very different from ours, but that does not change the principles of the ethics. Transhumanists (who argue that we should use technology to improve ourselves, i.e. lengthening life, boosting intelligence, eliminating disease, and improving the conditions of life) have generally adopted a "personhood" definition of rights, stating that any living entity which is self-aware and has the capacity to be rational is granted all of the rights a "man" in the Objectivist sense has (they don't reference Objectivism, obviously, but that is the essential meaning). I generally use the term "person" to identify mrocktor's concept of "rational being" or my preferred "rational entity". For now, "person" is identical to "man" because it refers to the same thing.
But it is specifically constructed so as to be applicable to all possible beings that by their nature would need a rational egoist morality and have "human" rights. It avoids confusion when discussing such things as fetuses or the brain dead, "uplifted" animals or conscious machines, or interactions with alien species. The term "person" makes clear things that "human" or "man" do not: fetuses are not people, but might be thought of as "human" since they have human genetic material (a distinction useful only for someone unclear on the philosophy, but as most people are, it helps); the brain dead are not people, as they cannot be said to have a "person" in there, there is no conscious entity any longer, so they no longer have rights; an "uplifted" animal (an animal like a chimp or dolphin engineered to have human-level intelligence and rational capacities) can be said to be a person, as they are self-aware and can communicate, have personalities, pursue values rationally, etc.; an intelligent machine is a person for much the same reason; and an alien species along the same lines. "Person" isolates what one might call the "soul" from the physical form from which it springs and in which it forms, resides, etc. It doesn't muddy any philosophical waters, but instead makes clear that non-homo sapiens sapiens who have the capacity to reason (which will certainly come into existence in this century) have rights as well.
  16. itsjames, in regards to your question about the laws of physics, the way I've come to "reconcile" the apparent contradiction (not an actual one, obviously, since I reconciled it, but it seems to be one on the face of it) is this: a man and the collection of particles making him up are not the same. Physics deals with particles and systems of particles. Philosophy, in regards to men, is about entities called "men" and their minds. "Mind" is one way of looking at the collection of particles we call a human brain (and perhaps the rest of the nervous system, etc.; we don't really know yet, but you get the idea). Another way is "particle system." Physics describes how the system of particles will evolve in time. But that isn't minds. A mind may be (and is, actually) indeterministic, i.e. the action someone takes cannot be predicted by knowing the contents of their mind (ideas are something like patterns in the physical system; patterns change radically without any deterministic component, while the physical system does not). So a "person", i.e. the mind, is indeterministic, and the only explanation of their behavior is volition. The physical system does not have volition and behaves deterministically. But philosophy isn't about collections of particles. It is about people. When I am typing this, yeah, the laws of physics are playing themselves out. But that has nothing to do with the fact that "I" am "talking" with "you". I exist, I am actually real, not a figment of my own imagination (you may one day be able to point to "me" as a pattern in the brain). People live their lives on the level of entities and minds, not the level of particles and fields. To try to live on the level of particles is impossible, as "you" would cease to exist, because you are an abstraction from those particles and fields, a mind. Physics and philosophy talk about different ways of viewing the world. 
We may be able one day to create minds in computers, using physics, but the "mind" and the computer it exists in aren't the same: the mind will have volition, the computer will not, just as a man has volition but the brain does not. Perhaps I didn't explain that very well, but I think it gets my idea across anyway.
  17. Kainscalla, I understand your position, but I thought it was very well done. There was only that one scene in the lab/house/boatyard that does not fit the "climax"-type description. The fight scene at the gambling ring has support from the stories (he referenced having done that sort of thing before, and he was an expert boxer, etc.). Even the fighting was done in a way which I thought fit very well with the character; he used a great knowledge of anatomy and fighting tactics to predict his opponents' moves and find the most efficient way to disable them, with an accompanying estimation of the damage done (a nice touch, I thought, and something Holmes would almost certainly know how to calculate). Not only was the fighting often fairly cerebral, but there were many instances in the story where Holmes showed off his "deduction" chops, including the final climax of the story, where he puts everything together. As a depiction of Sherlock Holmes, the character, I thought it was quite good, and the minor modifications I thought were justified and even good in the context of a movie (as opposed to a short story). As a depiction of a rational person pursuing their passion and justice, it was great. So, overall, it was a very good movie.
  18. Without legal constructions of property rights, property rights debates are going to be vague and almost impossible to settle satisfactorily. More importantly, as you pointed out, the Na'vi aren't humans. They are fundamentally different from us. So property rights ideas may not even be applicable to them. If all of our assumptions and ideas about rights are totally worthless in this situation, then what are we arguing about? If we don't have compatible rights-systems, interaction on any intellectual, rights-based level is going to be impossible. And then the only interaction possible is on the level of force. What ethical principles apply to trans-species interactions? I argue that if they are like humans (have to live by reason), then we treat them as humans. But do the Na'vi have to live by reason? They seem to act more based on instinct and collective communication. So the idea that they must be able to act according to their individual rational decisions is in question. Since that is in doubt, so is the prohibition of the use of force. And so are property rights, since it is unclear that they need property in order to survive. So your argument about possession, homesteading, etc. is invalid as well, since they aren't humans and may not need any conception of property to survive (and without that, they don't need the Tree). Well, as I argue, if they didn't have property rights to the tree, or if "rights" in any human sense are invalid with them, then it isn't theft and thus is not fascist.
  19. No, actually, it isn't a rhetorical gambit. The statement you quoted was based on my analysis of their property rights to the Home Tree. In my analysis, they said it was somehow "theirs" and thus we could not take it. Well, if they don't actually have property rights, then they don't own it, and it is analogous to a guy living off in the middle of nowhere in a hut on property which is not his. In that situation, he has no property rights to the land, and so I can push him off if I wish (provided I gain legal right to it). I certainly can if I've offered him millions of dollars and anything else he can name to incentivize him to leave, and I've told him when I'm going to come in to destroy the hut. At that point, his fate is no longer my responsibility; I gave him every warning, and he had no right to the property in the first place. So, no, that wasn't intentionally argumentative, I was stating my actual position. As to your argument that they have rights to the tree, I'm not so sure. If no one can be said to own something, then no one owns it, period. So no one has any rights to it. So I can do anything I like with it. Either some entity owns the Tree, or I can do anything I like; it cannot be something in between. Now, since it is clear that no Na'vi individually owns the Tree, and there is no system of collective ownership over the tree that we've seen (shares and the like), then either the chieftain can be said to own the tree, or no one owns the tree at all. I question the authority of any chieftain whatsoever. They likely do not obey objective laws, and they have arbitrary hereditary transfers of power. Laws are not codified and written down in any form to be shown to people, etc. That is not a legitimate government, and any claims it makes only have as much weight as the so-called "government" has guns to back its claims up, because without rights and reason on their side, the only thing left is force. 
And so, since no individual Na'vi owns the Tree, since the tribe collectively cannot own it (since it is not a legitimate government by any stretch of the imagination), and since the humans had a legitimate government and that government gave them mining rights to the whole of Pandora, I think there is a strong argument that the relocation of the Na'vi from the Tree was legitimate. Now, the method was probably rougher than necessary, but considering the fact that the Na'vi would likely have killed the humans if they had done anything less forceful, it is possibly justified, though I am not certain of that by any stretch. Agreed, though they can be regarded as "flea-bitten savages," as the leader of the human operation said, because they have no government, no industry, no legitimate law, etc. The humans were not under a fascist regime. It was a corporation. And the humans had no intention of instituting a fascist regime over the Na'vi, or any sort of regime over them. They simply wanted them to leave the Tree, to keep away from them, and not attack them. Sure, the Na'vi weren't pragmatists, but even so, they were wrong. Adopting human civilization would be better for them than living in forests and trees. They could get all the benefits of their network of trees if they studied it and figured out how to reproduce it in a more condensed manner through technology. This movie is an allegory for the case of the Native Americans. European civilization was assuredly better than their "civilization," and life would have been better if they had adopted European ways of life and left their superstitious stuff in the past. Same with the Na'vi. The fact that they refused caused them many problems and a lot of strife.
  20. This is all wrong. The humans on Pandora were attacked first, and defended themselves. It's why they have all those marines, and why they need all those guns: the natives are hostile, and the fauna wants to rip you to shreds. More importantly, their mining activities, up until the destruction of the Home Tree or whatever it was called, did not displace the Na'vi. They just destroyed some plants (and I don't care if they are part of a global network of life; so what? I've got minerals worth 20 million dollars a kilo to mine, a few plants can get killed. Any decent "network" has backups anyway). Now, I will grant you that the destruction of the Home Tree by the people might have been a step too far (I am unsure of the provocation that came before), but it seems the Na'vi don't have property rights, and they have no formal government (a tribal leader certainly doesn't count). So, they don't own that tree. And they were given fair warning, and offered all the stuff in the world that humans could possibly provide. If I am willing to pay you a million dollars to leave that hut you're in in the middle of the forest in some unclaimed territory, and tell you exactly when I am going to come in with a bulldozer, then if you don't leave it is not my fault that harm may come to you when I come through. You were given fair warning. And the attack on the Tree of Souls was perfectly justified. The Na'vi were, according to them, at war with the humans. The humans weren't going to touch them any more (they'd gotten what they wanted), but the Na'vi were going to launch a full attack anyway. In a war for survival, I can do anything I need to in order to win. If the Tree of Souls had been destroyed, the humans would have been safe. So that attack was fine. The only questionable part of the movie, on the part of the humans, was the attack on the Home Tree, and I think it is at least understandable and partially justified.
  21. Minor spoilers, not much more than reading the rest of this thread plus reviews and seeing the trailer. Well, I thought this movie was a fantastically well-executed idea, but the message it sent was one I dislike intensely. The effects lived up to the hype in my opinion (I saw some problems, but I think that was the 3D, not the CGI). The acting was good. The writing was good, except for a couple of instances where the message seemed to get too heavy-handed (i.e. "Shock and Awe," "fight terror with terror"). But overall the writing was really good. And the world of Pandora was beautiful and interesting. Not sure how it would evolve, but the world seemed well thought out. As for the "how is this substance so valuable it can finance all this" criticism: it is stated in the movie that the material, unobtainium (obviously a tongue-in-cheek name, even in the world of the movie), is worth 20 million dollars a kilogram. A couple thousand tons of it would equal the entire gross domestic product of the Earth in 2009. So, it can definitely finance all these operations. Now, the idea/message itself: not so good. I can give it some excuse because the natives' religion had some basis (apparently all the trees on Pandora are connected, like a giant computer network or global brain, and the people and animals can tap into it and connect with each other, etc.). So there is some basis for their "nature is awesome" attitude. But that does not mean it justifies the plot. Humans offer the Na'vi, the aliens, anything they want to move away from their tree-city thing, and they decline (the settlement lies right on top of the biggest deposit of unobtainium for 200 km, perhaps as far as they've surveyed, though they don't say that; I presume that going further would be very dangerous, as the local life is quite hostile). 
Okay, and I understand the humans deciding to just kick them out of the area; after all, they don't have property and are a bunch of collectivists anyway, so whatever. However, they don't give diplomacy much of a chance, and I think their attack is needlessly violent. Anyway, the humans succeed in their endeavor. And then the human-traitor, the main character, decides to lead the locals into battle (totally ridiculous and pointless; the humans got what they wanted, so they won't bother the locals any more). The humans decide to blow up this sacred tree thing (really important to the natives). How do they do this? A simple nuke? Nope. A gravity bomb, just a big rock dropped from orbit? Nope. They fly a giant-ass pile of high explosives in, and release it in a ridiculously stupid way. Anyway, the humans whoop up on the natives and just smash them to bits, but then the planet's consciousness or whatever makes all the animals attack, and the human warriors die. Then the rest pack up and leave. End of story. Now, this doesn't make any sense. I'm sorry, but humans won't just stop. They just know that they'll have to go bigger in the future. More guns, more bullets, more bombs; let's just drop a nuke from orbit and take out that tree. Use bio-weapons to wipe out all life within a hundred kilometers of our base. At 20 million a kilo, humans won't stop. A big war could easily be financed. And we'd almost certainly win. Oh, I forgot something. The human-traitor main character says at one point "their world has no green, they killed their mother," i.e. nature/forests/animals/etc. This is apparently a bad thing, even though humans now can travel between star systems and have super-advanced technology. So this movie's message was: "Nature is awesome and sacred, man is evil, civilizations slaughter all the natives, corporations are evil, money is evil, let's all go live in the forest, f*** technology." And that is absolutely terrible. 
At least I have the consolation that in the sequel in my head, the locals get destroyed and humans win. And then human civilization spreads, and the locals get to enjoy the benefits of technology (including the fruits of experiments with their neural connection ability). Yay civilization.
  22. No, they aren't. Philosophy is not science. Metaphysics and epistemology are not sciences. Perhaps I should be more clear (though it seemed obvious): more attention should be paid to the philosophy of natural science (i.e. biology, chemistry, physics, etc. taken together). When I said "all-encompassing," I meant as providing the foundation for the "philosophies" of the various particular subjects (science being one of them). I see very few discussions on this board on topics in the philosophy of science as a special subject, and very few articles/lectures on ARI-related sites. There are also few lectures available for purchase on subjects in the philosophy of science. So to say that "Objectivism/Objectivists," taken as a whole, do not seem to put much emphasis on it as a topic of discussion, research, and interest does not seem to be a mischaracterization. And it seems legitimate to wish to see what others, particularly major Objectivist thinkers, have to say on a subject. If this is stupid or wrong, then there is no reason for anyone to buy any books on philosophy at all, because you should "simply do your own thinking on the subject." Since it is possible to do that, what conceivable purpose can talking with others or reading/hearing what others have to say possibly serve? *sarcasm*
  23. While I do not know the specifics of what the OP is trying with his proposed theory of scientific explanation, the topic itself is vitally important. Objectivism is a philosophy which is supposed to be all-encompassing. Therefore, Objectivism must provide a framework for a philosophical description of the nature of scientific explanation. Since science is of vital importance to rationality, and explanation is (depending on your philosophy of science) of great importance to science, if not the whole point of the enterprise, scientific explanation is of great importance to rationality (which is key in Objectivism). I am at least happy to see a discussion of the philosophy of science here. I find it amazing that Objectivism (Objectivists, ARI, etc.) seems to almost totally ignore the subject, even though it is extremely important. Science is the basis of our entire civilization and is key to producing values in our world, and as such seems to deserve far more attention than it gets in Objectivist circles.
  24. The story of Hypatia always stood out to me, ever since I read it in Carl Sagan's "Cosmos." I am interested to see the movie, though I am leery of the love-angle they inserted into it. Perhaps that'll be okay. Have you seen it? And is it available in the US?
  25. I cannot predict who. Though I am unconvinced that any such thing would come to pass. Anyone rising to gain essentially absolute power in the US would, I believe, be deposed in a coup d'état by the military. The military has been drilled that it is subservient to civilian governmental control, but its first and foremost duty is the preservation of our republic. If America is being destroyed, if the government is being dismantled and replaced by the autocratic authority of one person, I do not believe the military (or at least some number of generals) would follow along. Any such dictatorship wouldn't last long, and would be replaced by a military provisional government in short order. What this group does then is an interesting question, which I honestly cannot predict. On the other hand, I am not certain that a dictatorship is a bad thing. We often connect dictatorships with fascists and statists, but it is possible that the Supreme Leader, or whatever they wish to be called, will set up a firm system of laws based on individual rights, and work to enforce this objective system of laws with the full power of the military and police forces, against civilian unrest. This happened in some former Soviet republics, where the dictator simply declared that the market would be free and enforced it. Obviously these countries haven't done great thus far, but it gives the basic idea of what I am talking about. I haven't ever seen anything about elections or democracy which makes them so much better than oligarchy or a limited-suffrage republic or some form of dictatorship, as a matter of principle. In fact, I dare say this country was much better off when suffrage was limited to those with property (I think this is the major reason why the US has been failing over the last hundred to hundred and fifty years, not because the vote was extended to all races and both sexes; those are merely coincidence). 
The "every person gets a vote" principle, in my view, greatly endangers any constitution which has the capacity to be amended, even if the majority required is inordinately large. Much better to limit suffrage to those with property, intelligence, and education, so that they are less likely to be swayed by envious sentiments and blatantly illogical worldviews.