Objectivism Online Forum

Bowzer

Everything posted by Bowzer

  1. Quick reply, Isaac: I am making an inductive argument (not a deductive syllogism), and as my evidence I am offering all of our (valid) knowledge about the brain and how it works. I would love to spend hours writing about all of this knowledge (and perhaps some day I shall) but I won't do that here. Simply hit a used bookstore and find a good textbook on neurobiology. Or are you saying that you disagree with fields like neurobiology as I do with cognitive science? That you reject all of the evidence I am alluding to? I am saying that the burden of proof is on you to show how consciousness can exist apart from living organisms--a conclusion that contradicts (is not merely unsupported by) everything that we know about consciousness.
  2. Causality is the law of identity applied to action. I agree that the brain has the causal powers that give rise to consciousness. Let me state my disagreement this way: there is no evidence to suggest that we can re-create the causal powers of the brain that give rise to consciousness without also re-creating the living processes that go along with them. The identity of the brain includes its nature as a living organ, and you cannot exclude this fact from discussions of the brain's causal powers. Brains are organs of living organisms. They are an integrated part of the bodies of living things. As such, there is a slew of processes that must be maintained in order to sustain the brain and its efficacy: blood must be circulated, neurons must be kept in a bath of special chemicals, hormones need to be produced and sent to other parts of the organism, etc. At our current stage of knowledge, there isn't one iota of evidence to suggest that we can extract just those processes that give rise to consciousness (assuming that we even knew exactly what they are) from the rest of the processes involved in maintaining and regulating a nervous system. To suggest that we can do this is to drop entire fields of knowledge (e.g., neurology, biology, etc.). Some of you say that my conclusion is premature, but I disagree. I believe that there is enough evidence right now to be certain of this. I do not accept current "research" from fields like artificial intelligence and cognitive science as evidence for anything. Such fields are patently corrupt due to the philosophies that guide them. So far--short of intuitions and impossible hypotheticals--this is all that has been offered in arguments against my position.
  3. I am arguing for something even stronger than the relationship of consciousness to the brain. I am arguing for its inextricable link to life as such. (I am not arguing that this is part of Objectivism, but I definitely consider it to be compatible with Objectivism.) Consciousness is a sub-process of life (like digestion or respiration). It exists due to its evolutionary value for those organisms possessing it. Its sole purpose is value-satisfaction. I still fail to see how you could ever tie the faculty of consciousness to something that is not alive. A conscious machine contradicts what I am saying is the basic nature of consciousness as the mover of living organisms. Ayn Rand states this pretty much explicitly in "Basic Principles of Literature" (The Objectivist, July 1968): Machines can have no purpose, and I believe that this means that they will never be conscious. The complex robots that make the "conscious machine" scenarios seem so believable would not be conscious. They would just be really interesting machines.
  4. Reason's Ember, that question has been abused to the extreme by philosophers. It has led to a number of nasty and philosophically-loaded terms like "phenomenology," "qualia," "what it is like to be," etc. It has its roots in Kant's noumenal/phenomenal distinction and it attempts to invalidate all forms of consciousness. Ayn Rand writes: What is the question, "what is it like to be a bat?" asking us to be aware of? An impossibility; a contentless state of consciousness. Awareness is a causal relationship between some fact of reality and a mind (sense organs, to be exact). There can be no causal interaction between two forms of perception. This does not negate our ability to know that bats have a form of awareness. More importantly, this does not mean that some things in reality are inherently unknowable. On the contrary, to even speak of such a thing is a contradiction (to speak of them you must at least first know of them). In one sense (the most important one), you can know what it's like to be a bat. If you live in an area where they are flying about (mid/late evening), you can go outside and watch them dive for insects. If you see the insects as the bats are locating them for food, you know what it is like to be a bat, i.e., you both have awareness of the same aspect of reality (just in different forms). That is all that you can validly ask in comparing your awareness to a bat's. Of course, philosophers aren't interested in affirmations of the efficacy of man's mind, so they will deny that what I have said is a response to their "question." If you would like to learn more about Objectivism's validation of the forms of awareness, I highly recommend reading (or re-reading) Chapter 2 of OPAR. AshRyan, I knew what you meant but, strictly speaking, a bat's awareness is neither "subjective" nor "objective." Those terms are only applicable to a volitional consciousness. Think of a bat's awareness as the metaphysically given.
  5. For the benefit of those who haven't had the chance to read the article yet, I will briefly summarize it here. He spends the first half of the article discussing the Turing Test and its errors. He then spends two pages discussing Turing's materialism: (emphasis in original) He then introduces the concept of a "discrete machine." This concept is fundamental to the functionalist view of consciousness. A discrete machine is a machine that can only be in one of a limited number of possible states. The article then goes on to show that the functionalist argument presupposes materialism. He concludes: I have no interest in having discussions that I am gaining no value from, and I do not reply directly to smear posts. As I am new here, this is one case of learning who is worth my time and who isn't. P.S.--The full title of the article is "Mindless Intelligence: Machine Thinking and Contemporary Philosophers' Rejection of the Mind," implying a very broad application.
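     For readers unfamiliar with the term: a "discrete machine" in this sense corresponds to what computer science calls a finite-state machine. The sketch below is purely illustrative (the state names, events, and transition table are invented, and Python is used only for convenience); it shows a machine that at every moment occupies exactly one of a small, fixed set of states and changes state only according to a fixed rule.

         # A minimal sketch of a "discrete machine": a device that is always in
         # exactly one of a finite set of states and moves between them according
         # to a fixed transition table. States and events here are hypothetical.
         TRANSITIONS = {
             ("idle", "start"): "running",
             ("running", "pause"): "paused",
             ("paused", "start"): "running",
             ("running", "stop"): "idle",
             ("paused", "stop"): "idle",
         }

         def step(state: str, event: str) -> str:
             """Return the next state; an unknown event leaves the state unchanged."""
             return TRANSITIONS.get((state, event), state)

         state = "idle"
         for event in ["start", "pause", "start", "stop"]:
             state = step(state, event)
             print(event, "->", state)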
  6. Didn't you express a distaste for armchair philosophy, Isaac? Nothing is more armchair-ridden than a thought experiment (i.e., a hypothetical). I'm a software engineer by trade and I am quite aware of how a computer works. I disagree, however, that advanced knowledge of computer engineering is required in order to see the point in discussion here. I think this point can be fully understood by a typical sixth grader. I know of these examples, and what I am prepared to say is that the more neurons that are replaced in a man's brain, the greater the chance that he will die. By the time you got down to one biological neuron in a man's brain, he would have been dead for quite some time (and since I feel that I have to point this out: this means that he would no longer be conscious either). There is no evidence to show that an entire brain can be replaced by circuitry. Yes, there is evidence that we can interact with a living brain through electrical currents, but there is nothing surprising about that. We know quite a bit about action potentials and how to stimulate them in a neuron. This does not equate to creating consciousness by means of electrodes. All of these experiments have been dependent on living cells, which lends support to my argument. Again, everything that we know about consciousness points to the fact that it requires a living entity.
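     To make concrete what "stimulating an action potential" means in a model, here is a minimal sketch of a leaky integrate-and-fire neuron, a standard textbook simplification of spiking (the parameter values are illustrative, not measured physiology, and Python is again used only for convenience). Note what the sketch actually is: arithmetic on a single voltage variable. Reproducing a spike train this way is exactly the kind of thing that does not amount to creating awareness.

         # Leaky integrate-and-fire sketch: the membrane potential v decays toward
         # a resting value and is pushed up by an injected drive; when v crosses a
         # threshold, a "spike" is recorded and v resets. Values are illustrative.
         V_REST, V_THRESH, V_RESET = -70.0, -55.0, -75.0   # millivolts
         TAU, DT = 10.0, 0.1                               # time constant and step, in ms

         def simulate(drive, steps=1000):
             """Return the times (in ms) at which the model neuron fires."""
             v, spikes = V_REST, []
             for i in range(steps):
                 # Euler step: leak toward rest plus the injected drive (lumped, in mV).
                 v += DT * (-(v - V_REST) + drive) / TAU
                 if v >= V_THRESH:
                     spikes.append(i * DT)
                     v = V_RESET
             return spikes

         print(simulate(drive=20.0)[:5])   # first few spike times for a steady drive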
  7. Coincidentally, the latest issue of The Intellectual Activist has an article relevant to this discussion. Christian Beenfeldt shows why, if you are of the "machines can or will be able to think" camp, you have to be a materialist.
  8. If you want evidence that consciousness requires life, there is only the entire history of biology and neurology. Everything that we know about consciousness (this of course excludes the thought experiments popular in philosophy of mind, which tell us absolutely nothing about consciousness) points to that fact. I figured that this was understood. The burden of proof is on those who claim that consciousness can exist apart from life.
  9. So where is the distinction between animate and inanimate things, source? If all materials wear out and this constitutes the end of the thing that wore out, then everything or nothing is alive. Which is it? It is emphatically not a survival mechanism. Recharging a robot is in no way similar to us eating food. We are alive; robots are inanimate. But apparently, being alive is a superficial characteristic that has no place in my distinction. A rational mind is constantly working with fundamentals. You can also break the universe down to fundamentals. Does that mean that we can program that into existence too? I thought that your posting a question on a public BBS meant that you were asking for other people's views. I was just trying to show that I thought I had grasped the context in which your question arose, but thanks for making your indifference so apparent to me.
  10. I'm a radical optimist when it comes to computers; I believe firmly that the technological revolution will enhance our lives to a greater extent than even the industrial revolution did. I am ceaselessly taken aback by what computers do for us every day. That said, I also know that computers will never be conscious. I know where you're coming from, source. Given the state of academic fields today--fields like cognitive science, artificial intelligence and (god save us) philosophy of mind--it is understandable that a question like this would arise. Computers are anthropomorphized and ascribed characteristics of consciousness (e.g., "information processing," "learning," "memory," etc.) in pretty much every theory out there. Even basic computer textbooks make this mistake. If you know the term from Objectivism, stolen concepts are found in abundance. We know that a program will never make a machine conscious because of what we know about the nature of consciousness. Consciousness is a teleological function of living organisms. Consciousness--at its most fundamental level--is a survival mechanism. This is just as true if you are talking about a rat as it is if you are talking about a human. On the other hand, it will never be true of computers, which will have no need to act. You cannot instill the fundamental alternative of life or death into a machine, no matter how complex the program. I suggested some readings in this thread.
  11. Objectivism: The Philosophy of Ayn Rand is barely over $10 new.
  12. I have a degree in Cognitive Science, Isaac, and I have to say that you have stated its position very well. I completely reject that view and it doesn't just come down to "it HAS to be alive." If you are interested in why Objectivism rejects such a view, I refer you to Ayn Rand's "The Objectivist Ethics" and Harry Binswanger's October 1986 article "The Goal-Directedness of Living Action" (The Objectivist Forum). P.S.--I forgot an important section from OPAR ("Objectivism: The Philosophy of Ayn Rand" by Leonard Peikoff). See "Life as the Essential Root of 'Value'" pp. 207-213 hard cover ed.
  13. You can't divorce the process of concept-formation from consciousness and you can't divorce consciousness from life. Computers are neither conscious nor alive.