Objectivism Online Forum

EthanTexas

Regulars
  • Posts

    15
  • Joined

  • Last visited

Previous Fields

  • Relationship status
    No Answer
  • State (US/Canadian)
    Not Specified
  • Country
    Not Specified
  • Copyright
    Copyrighted

EthanTexas's Achievements

Novice (2/7)

0

Reputation

  1. I want more than "Just trust Objectivism". Far be it from me to ask questions and try to get a better understanding of the logistics of Objectivism... in the forum called "Questions about Objectivism". I'd like to thank everyone who assisted me in gaining a better understanding. I am asking no more questions in this thread, because no more need be asked... and I will probably continue to mostly just lurk about this forum.
  2. I'm not using it to indict nuclear energy; I'm just saying that it would seem that the more technologically advanced humans become, the more potential there is for us to destroy ourselves. The fact is that Chernobyl was a nuclear incident. Precipitated by human idiocy, yes. But do you think there won't be idiots in an Objectivist society? Now, in what way did CFCs not cause a hole in the ozone layer? Maybe you should cite some quality information and enlighten a poor fool such as me. It would be appreciated.
  3. If the idea that "if technology creates a problem, then throw more technology at the problem" holds true, then can we expect an answer to the rapidly increasing incidence of antibiotic resistance in bacteria, even though, by nature, bacteria will continually grow resistant to whatever antibiotic we throw at them? I'm sure inoculation could do something about this, but antibiotics will still need to be used at some time or another... There doesn't seem to be much of a solution. How is non-GM food more dangerous than GM food? I'm not presuming that the state knows better, or that it could or should control the market. I just want to understand how certain issues won't be made worse by giving more freedom to explore them. Well, they're engineering mobile wheat now. Just kidding. What are you on about? People harm themselves and others all the time for profit. Someone who's only interested in building a personal fortune to last one lifetime, for example, might have no qualms about harming the environment on a massive scale. What about time are you referring to? I wouldn't automatically assume that GM = superior to nature. While a GM plant may have some nice properties for humans to enjoy that its natural counterpart doesn't, it may also have some dangerous, unforeseen properties, brought about by the genetic engineering, that are harmful to humans or other forms of life... Wouldn't the population be sort of a massive experiment for these sorts of GM products? I'm not necessarily saying that this is a bad thing, but experimentation with unproven products on a massive scale hasn't gone very well in the past, and has sometimes resulted in a lot of deaths.
  4. There's quite a lot of debate over GMOs. There have been cases of allergic reactions due to the insertion of a gene from something one is allergic to into another organism that one is not allergic to in its non-GMO form. If your market base consists of people who are interested in buying non-GMO, and GMO starts getting into your fields, you have a problem. It's a question of whether or not humans are likely to destroy themselves. It's not extremely specific; I am merely using one specific example of a genetically modified plant that could pose serious problems if allowed into the wild. The question is whether or not humanity can be trusted not to destroy itself if given a potentially unlimited (is it safe for me to say "unlimited"?) capacity to engineer organisms. Nuclear power gave us Chernobyl; CFCs gave us a hole in the ozone layer. The LHC hasn't done enough of anything to be a real threat, and it's pretty well established that it won't do anything. Nanotechnology's a big question mark. CO2 has been implicated (probably falsely, but that's a different story) in global warming. While none of these things are world-ending, it's true that if we had twenty Chernobyls or several more holes in the ozone layer, things would be terrible, and massive release of carbon dioxide probably isn't very good either. The whole "evil corporation" myth, at least as far as corporations secretly doing things, for the sake of profit, that aren't in the best interest of the consumer and may really be harmful to the public at large, isn't really a myth. Refer to Enron and Monsanto. Indeed, all GM crops are meant to be better than their natural counterparts... but does that mean that they actually are, in every way, better? Take drought-resistant wheat, as you said. Suppose it gets out, and it does replace every other species of wheat in the world. Then any number of things could happen.
What if there's a problem with the genetic modification that causes the wheat to produce some chemical toxic to a certain keystone animal, and that animal dies out, or dies in mass numbers, causing ecological damage? There's also the possibility that, if there's no variety, a single plague could be catastrophic. And then there'd be hardly any wheat. I would imagine that it would be very, very difficult to engineer an organism so that it's beneficial to humans but also does not present any potential for damage to the environment. What would we end up having? A world where only a few genetically modified, human-friendly versions of various plants and animals exist where hundreds of years ago there were thousands upon thousands? I'm wondering if humans genetically engineering whatever suits us best won't eventually have catastrophic effects on the planet. It's a safety concern.
  5. Let's assume that it is an issue. I wouldn't necessarily say that it would be a problem if a genetically modified plant with some hazardous effect on humans would only become widespread if the trait also conferred a survival advantage on the plant... but all that is necessary for a mutant strain to proliferate is that it not harm the survival of the plant. It might not be rapid, but eventually the mutation would spread. And genetic engineering in the sense of selective breeding (à la Mendel) is quite different from modern gene splicing and recombination. I'm not suggesting that the government control the economy and scientific research. I just want to know how this would be addressed in an LFC society, because it is a potentially very large problem. I said anything goes as far as business and products of the mind, which is more or less a definition of laissez-faire capitalism. That's not a literal, anarchistic "anything goes". The deterrent is capitalism, but once the damage is done, it's done. By the time a suit is filed, the dangerous gene may be out there... after all, a plant's function is to grow and multiply, and it would be extremely difficult, if not impossible, to fully ensure that the "genetic vandalism" could be righted. Well, in the US at least, the issue of GMO crops crossing over into non-GMO, organic farms is a big deal. Farmers have sued, and it hasn't gone well. They should be able to sue, but they can't. Monsanto and the like are a big force here. It's neither, and your snarkiness is unnecessary. I thought it would have been understood, when I was specifying minarchism and Objectivism and laissez-faire capitalism, that literally anything goes was not what was intended. If literally anything goes, it's not really a society, is it?
  6. The question is fairly simple... How, in a minarchist, Objectivist LFC society in which anything goes as far as business practices and products of the mind, could we be assured that, in the course of the evolution of genetic engineering and the like, the world wouldn't become irreparably damaged by a mistake? For example, the contraceptive corn that has been developed. Whether or not anyone buys it, it is possible that, like other genetically modified crops, once it's growing it will eventually end up widespread. There have been many incidents in which organic, non-GMO farmers farming near GMO farmers have had GMO crops appear in their fields, one way or another. The problem of genetic pollution and undesirable gene flow is a real issue. Once genes are out there, they can't be reeled back in. The contraceptive corn is just one example of something that could be done with the best intentions, considered rationally, but with which one small mistake could pose a huge existential threat to humanity.
  7. On the opposite end of the spectrum, what about transhumanism? When genetic modification for people becomes available, in a free-market situation, wouldn't that result in large numbers of people having to get genetic modifications in order to stay competitive, and wouldn't this destroy market competition in a way that would not be conducive to an economically (or, perhaps, physically) healthy society?
  8. Think of those fish that lost their eyes because they never used them. Even evolutionarily, if you don't use something, you start to lose it, slowly. Yeah, I guess you're right in saying that I'm approaching it circularly, but I'm not arguing with circular logic. Humans, as is, are pretty reliant on technology, and the use of more technology will only make this more so. I'm just taking into consideration the long-term effects of an increasingly industrialized society. That is all. If humans were to become genetically degenerate from long-term reliance on machines to do work for them, I think that'd be a cause for concern for the populace. I'm wondering if LFC would have any sort of inherent safeguard against this, as it does against monopolies, where the economic theory suggests it simply won't happen. For example, I reckon that physically strong humans will always have a certain breeding advantage, even if only for aesthetics, and it may be the case that under LFC, weaker humans get bred out in the course of competition. If anyone disagrees with this, I'd be curious as to why. It doesn't have to do with politics so much as with the future of the human race. Don't put words in my mouth; I'm certainly not saying that man should be forced out of his technology. That's absurd. What seems anti-reason and anti-life is not questioning and not trying to form an educated opinion. I question because I want to know what everyone's thoughts are about these sorts of things. I have my own thoughts, and I could do all my own thinking, but it's always good to get outside perspectives. None of this line of questioning, if you want to call it that, has been meant as a criticism of Objectivism... just questions from a newbie to the forums.
  9. I'm talking about genetic drift. If humans are not under selection pressure to be able-bodied, eventually we will experience degeneration. There wouldn't necessarily be any differential reproduction between those genetically predisposed to athleticism and those who are not, as long as pharmaceuticals can cure diseases or keep them at bay long enough for people to reproduce. This is an issue in an LFC society in which technology and efficiency become increasingly important in the lives of people, and increasingly varied innovation results in increasingly varied fields in which people are ever more reliant on technology for their needs and less so on their own bodies. No shortage of energy from the Sun, true; but I wouldn't say that there are enough minerals available on Earth for everyone to have perfect nutrition if they so pleased. It would be very interesting if we started drawing minerals from space... In case anyone other than me is interested, here are the mineral compositions of Mars and Earth's moon. Depending on how you define overpopulation, we may already be overpopulated, at least in some areas. There's a definite correlation between the population density of urban areas and psychoses, at least in the United States, though I'm well aware that correlation does not equal causation. I could definitely see how living in a densely populated area would have a negative effect on one's psychological health, and it would make sense from the standpoint of evolutionary psychology, though I'm aware that a lot of people here don't subscribe to evolutionary psychology. Also, I'd like to ask what the Objectivist take is on the effect of the ELF radiation emitted by things like cellphones. Would a free market be likely to reduce the output of radiation, or increase it? I suppose public awareness/opinion would have something to do with it. Or would the radiation end up being not as much of a problem because of medical advances? That could be interesting.
  10. I don't know about you, but I'd consider it a problem that, as the human body's strength and athleticism are depended on less and less, they'll decline, and may even be selected against. I'd much rather live in a world where humans are becoming physically stronger with the help of technology than a world in which they're becoming physically weaker because of technology.
  11. I meant the world population. The world population is projected to increase by several billion before 2050. Some people say that's a problem, especially since there are regions with terrible food shortages. I'm sure that's largely due to socialist regimes keeping out a free market that would encourage better distribution of food, but when those populations get into the free market, will there be enough food for everyone? And will it be quality food? And what will happen to human evolution? Will it be guided primarily by the mind, with the body just sort of neglected? Is genetic engineering a viable answer to this?
  12. Interesting. What do you suppose the Objectivist answer to overpopulation would be? That between a free market with a gold-standard currency, no welfare programs, and the availability of Green Revolution technology and medicine, etc., the population would eventually stabilize at a manageable number? Thanks to everyone for the help.
  13. No, I'm just asking the question. That's what I was looking for. Thank you. I'm not. I'm just asking a question. The good people here are better versed in economics than I am. I don't think it's a zero-sum game, nor that the way to become rich is to wrong others. I've read Atlas Shrugged and I've been looking into Objectivism. What about monopolies, though? Some people would say they're a problem. I don't actually think they would be, but I'd rather understand why that is, especially when they're typically reviled.
  14. Say a company is particularly advanced in its medical technology, and no other company is very close to catching up with it. Rather than release the newest and best cure it has available for a given disease (bearing in mind that if it were to do so, the cost of older, inferior cures would drop), it rationally chooses instead to withhold the new cure and continue to sell the inferior one, because there's more money in it. I don't presume that executives would be immune to a disease, though I think it's reasonable to say that if they were to contract the disease for which they're withholding the cure or treatment, they'd have it administered to themselves in secrecy. I wouldn't presume, either, that people would be satisfied with shorter lifespans and inferior medicine; just that it wouldn't be very hard for a company to withhold its findings. It should be noted that I'm not arguing against a laissez-faire capitalist system, or claiming that it isn't the best. Rather, I'm just learning about why it's the best, and trying to figure out whether there's anything that could be wrong with it, and how certain things would work out. I like this idea.
  15. How, in a laissez-faire capitalist society, would one be motivated to discover, for example, a cure for cancer, when it would be much easier just to continue producing less-than-perfect anti-cancer drugs that, while preserving to some degree the life and quality of life of those who take them, would reel in much larger profits? My thought would be that one who makes such a discovery could, before administering it, require as a term of its administration that the recipient sign a long-term payment contract by which they would pay the curer, in installments perhaps, for as long as they remain cancer-free. How do you suppose things like this would be dealt with? I refer specifically to a cure for cancer, but I'd also like to address any situation in which it would seem that an imperfect job, which would provide job security, would be more profitable. Would the contract/installments idea be viable in order to combat the "job security" mindset with regard to things like this? Thanks. By the way, you can call me Ethan. And I am not a troll. ;o