Can computers engage in concept-formation?


I was not aware that pointing out an error in your statements constitutes a smear.  In the future I'll refrain from contradicting you if it upsets you so.

I also have no desire to discuss whether the article does or does not contain the arguments you ascribe to it, since it clearly does not. Yes, it has a broader scope than Turing's ideas alone, but only in that it refers to other forms of materialism. It does not declare that the idea of AI itself is inherently materialistic. I invite those reading this to seek it out themselves, as it is a very insightful article.

Amagi, Bowzer's statements were in reference to Isaac's posts, not yours. And in my judgment, he is absolutely correct in his assessment of Isaac, as is Stephen Speicher, whose post is a good summary of some of his (Isaac's) problems.

To make it even more clear, allow me to point out just a few of his more blatant contradictions. Isaac brings up the issue of the necessity of specialized knowledge in this context:

The objection that an artifact can never do what it was not programmed specifically to do belies a critical lack of knowledge. Those who haven't studied computer engineering should not make strong claims about what can and cannot be done with information processing devices. That's armchair philosophy at its worst.
Then he claims that such knowledge doesn't matter, and that Bowzer's statement that he has some knowledge of the field is irrelevant:

You may be a software engineer. But you haven't demonstrated any great philosophical skill that I can see. And being a "software engineer" just means that you can program computers - not that you have the foggiest clue about this topic. In fact, citing that as if it matters is really a pretty arrogant non sequitur, a form of argument from intimidation.

He also claims that it is an argument from intimidation, but Isaac's own post is much closer to such an argument than anything Bowzer said.

He also denounces "armchair philosophers," then demonstrates himself to be one. So far, he has not demonstrated that he has the foggiest clue on this topic--only gone around telling everyone with whom he disagrees that they don't.

This is a warning, Isaac: this type of behavior will not be tolerated on this board. If you persist in it, you will be asked to leave.



AshRyan says:

Amagi, Bowzer's statements were in reference to Isaac's posts, not yours. And in my judgment, he is absolutely correct in his assessment of Isaac, as is Stephen Speicher, whose post is a good summary of some of his (Isaac's) problems.
Ah, then I offer my sincere apologies to Bowzer. The rest of Bowzer's post dealt entirely with my comments on the TIA article, so I assumed the smear remark was directed at me as well. I suppose I might be overly sensitive to this sort of thing, as this would not have been the first time I have been accused here of making personal attacks after merely contesting a point of logic.

And for the record, when I endorsed Isaac's statements earlier, it was only in reference to his "artificial neuron" example, which was all I had read at that point, and which I think is valid. I agree that he made unacceptable personal accusations, and that some of his other arguments are logically unsound.


Isaac,

Your thought experiment is just a complicated way of saying what you've been saying all along: that you don't think there's a fundamental difference between biological organisms and inanimate objects. But we've given reasons why there IS a fundamental difference. Given this, there can be no justification for simply stating, in effect, that you have an "intuition" that a brain composed of mechanical neurons would be the same as an organic one.

Fact is, we don't know much about the relationship of consciousness to the brain. As such, we simply don't know what would happen if we change the brain in novel ways.


I am arguing for something even stronger than the relationship of consciousness to the brain. I am arguing for its inextricable link to life as such. (I am not arguing that this is part of Objectivism but I definitely consider it to be compatible with Objectivism.)

Consciousness is a subprocess of life (like digestion or respiration). It exists due to its evolutionary value for those organisms possessing it. Its sole purpose is value-satisfaction. I still fail to see how you could ever tie the faculty of consciousness to something that is not alive.

A conscious machine contradicts what I am saying about the basic nature of consciousness as the mover of living organisms. Ayn Rand pretty much states this explicitly in "Basic Principles of Literature", The Objectivist, July 1968:

Life is a process of action. The entire content of man's consciousness—thought, knowledge, ideas, values—has only one ultimate form of expression: in his actions; and only one ultimate purpose: to guide his actions.
Machines can have no purpose, and I believe this means that they will never be conscious. The complex robots that make the "conscious machine" scenarios seem so believable would not be conscious. They would just be really interesting machines.

About the artificial neuron issue,

Bowzer writes:

[What] I am prepared to say is that the more neurons that are replaced in a man's brain, the more chance there is that he will die.

...

There is no evidence to show that an entire brain can be replaced by circuitry.

I think this misunderstands the issue. The hypothesized artificial neurons would not consist of mere circuitry. Circuitry suggests the sort of electronic computer devices we are presently capable of constructing. But, whereas any electrical circuits we might produce in the foreseeable future could only crudely mimic specific attributes of a cell, the point of the idea is that the artificial neurons would perform all, or nearly all, of the functions of a nerve cell.

It is theoretically possible to create such a "neuron machine" (given awesome scientific advancement) because every cellular function is the result of mechanistic activity in the cell.

The cell is a machine. That this machine happens to include parts composed of organic molecules like DNA and proteins, or that it functions through "wetware" components--liquid solutions--does not change its status as a machine.

Whether or not consciousness could arise in a computer based only on circuitry is a separate issue, and in this case the answer is irrelevant--the point is that some sort of device could be constructed that carries out the specific processes in living creatures that create consciousness, even though that device might not be a biological organism.

That is why it is irrational to dismiss the possibility of AI simply because, as Bowzer says:

everything that we know about consciousness points to the fact that it requires a living entity
This ignores the fact that it is not just "life" as such that gives rise to consciousness, but some particular physical process in the brains of living organisms. Bowzer's view treats consciousness as if there were some metaphysical rule that only living things can achieve it, while ignoring the question of how, scientifically, consciousness actually happens in those living creatures. And if we consider that question, we have to conclude that consciousness is not just metaphysically granted to organisms, but that it is the result of a definable process in the brain.

There is no reason to assume that that process, or a process with the same result, cannot exist without the peripheral properties of life like growth and reproduction.

Bowzer, consciousness could only have evolved as a result of life's need for value-satisfaction--there is no reason we cannot now design the same process without the evolutionary imperatives. (Besides, it's ridiculous to assert that a conscious computer could have no interests merely because it might not be able to naturally "die"--it could still be destroyed.)


Flame wars are boring, and I'm not king of this castle. I'll play by the rules. I promise.

I know I'm not particularly popular here. But I'm not running for office. I'm under the impression that people come to the M&E forums to debate. If I'm mistaken, and this is supposed to be light-hearted chat about Ayn Rand's philosophy, or Q-and-A where the correctness of Objectivism is not supposed to be challenged, then please let me know, and I apologize for cluttering up your board.

If anyone does indeed spot any flaws in my reasoning, or illogical arguments, I'd very much appreciate it if you could give me an example. It seems that there are a few who routinely comment on "Isaac's illogical arguments," but for the life of me, I haven't spotted any errors.

Sorry for the many posts in a row. There was a lot to respond to, and I figured this was better than one big novel.

Isaac


to work out a sequence of operations to be performed by (a mechanism)
It is incorrect to state that all computers are only capable of performing those actions specifically spelled out for them in advance by human creators. Some computers are programmed (in advance by human creators, of course) with a methodology for creating new instruction sets in response to future information, unforeseen by the programmer.

The objection can be raised that such a device still only follows the methodology that is built into it. The same argument can be made about the human brain - that it only can do the things that genes give it the ability to do. It's as invalid for the first case as it is for the second. If an entity is developing new information as a result of interaction with some outside environment, then surely we can't say that the new information is part of its "programming."
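To make that concrete, here is a minimal sketch of a perceptron-style learner (my own illustration, not any particular system). The programmer writes only the update rule; the classifier the program ends up with is determined by training data the programmer never saw:

```python
# Minimal sketch of a learning program (illustrative only): the author
# writes the *update rule*, not the final behavior.

def train(samples, epochs=20):
    """Perceptron learning: fit weights to (features, label) pairs, label in {+1, -1}."""
    w = [0.0] * len(samples[0][0])
    b = 0.0
    for _ in range(epochs):
        for x, label in samples:
            activation = sum(wi * xi for wi, xi in zip(w, x)) + b
            prediction = 1 if activation > 0 else -1
            if prediction != label:
                # Misclassified: nudge the decision boundary toward this example.
                w = [wi + label * xi for wi, xi in zip(w, x)]
                b += label
    return w, b

# The resulting classifier (w, b) encodes information that came from the
# environment (the samples), not from any instruction written in advance.
data = [([2.0, 1.0], 1), ([1.5, -0.5], 1), ([-1.0, -2.0], -1), ([-2.0, 0.5], -1)]
print(train(data))
```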

This doesn't prove much. But it does prove that "Machines can't ever do what they weren't specifically told to do" is bogus. There may be other objections to GOFAI, but that ain't gonna do it. It's just "Yeah, but it STILL wouldn't REALLY be a person!" with a bit less of a foot-stomp.


At this point in time, there are a lot of unanswered questions about the functioning of a human brain. The current state of epistemology and philosophy of mind is even worse, leaving plenty of huge unanswered questions regarding the functioning of a human mind. As a direct result, the best artificial intelligence models that exist are (at best) prototypes of things to come.

A few clarifications.

"Arm chair philosophy of the worst sort" = making claims about a scientific field while making assumptions that are disproven (or at least, not at all obvious) within the context of that field. The use of a hypothetical situation to prove a point or raise questions does not show that one is engaging in invalid armchair philosophy. It is a sound and useful philosophical tool.

I'm using "possible" below to imply advances in technology, advances that I believe are likely to occur, eventually.

The form of this hypothetical is:

P = A human's brain may be replaced with a collection of artificial neurons one by one without their body dying.

Q = A person exists with an artificial brain.

R = A human's brain may be put in control of an artificial body that keeps it alive.

S = A person exists that is a human brain in an artificial body.

T = A wholly artificial person is possible in principle.

p1 If (hypothetically) P, then Q.

p2 P is possible in principle.

.: c1 Q is possible.

p3 If (hypothetically) R, then S.

p4 R is possible in principle.

.: c2 S is possible.

p5 If Q and S are possible, then T.

.: c3 T.
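For anyone who prefers the skeleton in symbols, here is one way to render it (the diamond is my shorthand for "possible in principle"; strictly, p1 and p3 must be read as holding within the hypothesized scenarios for the possibility to carry across):

```latex
\[
\begin{array}{ll}
p_1:\; P \rightarrow Q & p_3:\; R \rightarrow S \\
p_2:\; \Diamond P      & p_4:\; \Diamond R \\ \hline
c_1:\; \Diamond Q      & c_2:\; \Diamond S
\end{array}
\]
\[
p_5:\; (\Diamond Q \wedge \Diamond S) \rightarrow T
\qquad \therefore\; c_3:\; T
\]
```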

You can rebut this argument by making the following claims (and backing them up with evidence or by reduction to other commonly held propositions).

!p1: Such an entity may continue to breathe and eat, but it would no longer be a person.

Big question raised:

If the replica neurons do every job that was performed by the original neurons, then it appears that you're saying there is something to "being a person" that is not dependent on the actions of the brain. What are those other qualifications? What do we do that makes us people, which is not some function of a human neural network? (If you really mean to say that the replacement itself is impossible, scroll down a bit. That's !p2.)

(If the rebuttal takes the form of the "automaton" argument--that the entity would continue to act and talk like a human, maybe even well enough to fool those around it, get and keep a job, etc., but it wouldn't really be a person...)

How could an entity possibly get by "in disguise" as a human, living in a human society, without information-processing skills similar to a human's? Driving a car, having a conversation, getting food--these are very non-trivial tasks. If it had a conceptual epistemology (or even a conceptual "pseudo-epistemology"), doesn't it stand to reason that it is a person? If not, then how could it effectively survive in our society? (I hold a very low opinion of the automaton argument. It gravely underestimates just how difficult those tasks are, and thus the requirements for performing them. !p2 is a much stronger argument.)

!p2

It is not possible, and never will be, to replace a neuron (or many) successfully with an artificial replica.

Big question raised:

They said that about the artificial heart, too. As we learn more about the human body, we become more and more able to build parts that can function as replacement organs. We're already well on the way to artificial replacement eyes. As technology gets smaller, and research on the human nervous system progresses, it stands to reason that we may eventually be able to replicate a neuron. Yet you say we can't. Why not?

!p3

If a person's brain was put in control of an artificial body, then the resulting entity would not be a person.

Big Question:

Slippery slope. I would still be me if you were to replace my hand, Luke Skywalker-style. Replacing the entire body (except for the brain) is only different in degree. Artificial hearts exist, and other organs can be replaced by machinery successfully. In experiments with lampreys, the resulting cyborg behaved as a newborn lamprey (went towards light by instinct, given the means to do so). What reason is there to conclude that if the same could be done with a human brain, the resulting entity would no longer think and act like a human? (We can talk about how his experience might change, and it certainly would be shocking, I'm sure, but that's another discussion.)

!p4

It is not possible, nor will it ever be, to successfully put a human brain directly in control of an artificial body.

Empirical evidence doesn't back this one up, I'm afraid.

http://www.sciam.com/article.cfm?chanID=sa...0FB809EC5880000

http://www.smpp.nwu.edu/~smpp_pub/RegerEtA...e6p.307-324.pdf

http://www.businessweek.com/2000/00_12/b3673025.htm

http://news.bbc.co.uk/1/hi/health/3254636.stm

http://www.azom.com/details.asp?ArticleID=1544

The technology's got a long way to go. But it's clearly on its way, and so far, the results have been encouraging.

!p5

One can be a person with a human body or a human brain, but with neither, it is no longer a person.

What is there to "being a person" which is not some facet of the body and the things it does? By asserting Q, you're saying that a person may lack a biological brain. By asserting S, you're saying that a person may lack a biological body. What jobs are performed by the brain which can equally be performed by the rest of the body? Why are these essential to personhood?

Slippery slope: How many biological cells must one have before one is no longer a human? Why doesn't it matter what these cells are? Is skin enough? Teeth and toenails?

!c1, !c2, !c3

I believe that my logical structure is valid. If you disagree, please be so kind as to point out my error. (At least someone reading this will thank you for making me look stupid, I'm sure. :lol: ) If you agree that my structure is valid, and you accept my propositions, but not my conclusion, then I really have nothing to discuss with you.

Last but not least,

"You're hypothesizing too many things which I'm not sure will ever happen. It's too wacky. I'm unconvinced. Try again."

Fair enough. Strike all that I said from the record, and I'll work on another angle.


The neurons implanted could be, and have been, simply cells which have been immortalized: capable of reproducing themselves outside of the body and without the same provisions against dividing. Neurons can be transplanted in a way very similar to stem cells, basically as surplus tissue which can be made to cooperate in neural operations. In the past, however, there have been attempts to add or transplant neural tissue--much like swapping a hard drive between two computers--but this has been mainly unsuccessful at instilling knowledge or memory in different organisms. There does not seem to be any reason why a pastiche thinking organism could not be created, given enough neural tissue directed in the proper way.


Your thought experiment is just a complicated way of saying what you've been saying all along: that you don't think there's a fundamental difference between biological organisms and inanimate objects.
It is theoretically possible to create such a "neuron machine" (given awesome scientific advancement) because every cellular function is the result of mechanistic activity in the cell.
Matt, amagi's statement here is extremely relevant to your objection. I'm not, and I don't believe that I have been all along, saying that there is no fundamental difference between inanimate objects and biological organisms. In fact, I've been making every effort to point out that an artificial person would be much more akin to biological organisms than to inanimate objects, in all of the ways that inanimate objects are fundamentally different from biological organisms.

Perhaps you would be so kind as to show off a little, and point out those differences so that we may examine them in further detail. (Since you seem to believe that I'm making an error on this point, I assume that my explanation of the principles involved would be suspect to you, yes?) If, as you imply, the sort of thing that I've hypothesized would be properly classified as an inanimate object, then I'm making an error, and I'd like to correct it. In any event, it's clearly not obvious that an artificial person would be an inanimate object. (If it were obvious, we'd be in agreement on this point.)

This objection resolves to !p2, in my post above.

Isaac


Then why bring it up? The issue is goal-directedness, and its philosophical and scientific meaning in regard to computers.

Because,

"Machines are programmed to do only what we tell them to do. Therefore, they're fundamentally incapable of goal-directedness."

None of the people who designed and programmed Deep Blue could have beaten Kasparov in a game of chess.

Even though it may not be goal-directed or capable of self-generated action, it is an example of a machine whose behavior is radically outside the realm of things that the programmers could have foreseen. So "machines only ever do what their programmers intend" is not a true proposition, and the "argument from human intent" is unsound.

Perhaps "radically" is a bit extreme, since the programmers certainly foresaw that it would play chess really well, but if Kasparov couldn't do it, then they most likely could not have forseen Deep Blue's moves.


So where is the distinction between animate and inanimate things, source? If all materials wear out and this constitutes the end of the thing that wore out, everything or nothing is alive. Which is it?

Alive is that which tries to prevent that of which it is constituted from degrading.

It is emphatically not a survival mechanism. Recharging a robot is in no way similar to us eating food. We are alive; robots are inanimate. But apparently, being alive is a superficial characteristic that has no place in my distinction.

I'm not talking about you recharging a robot, I'm talking about a robot recharging itself. Robots are inanimate - now. But now I'm asking you that if a robot was programmed to draw conclusions from information you feed it (through senses/sensors), to mimic emotions, to care for itself, to say "ow" when it gets damaged, to grow tired, to laugh at jokes, etc. would you be able to make a distinction between that robot and a living being? How? Just because when you open it or damage it you'd see non-organic parts? Or would it be because it was "produced," and not born?

A rational mind is constantly working with fundamentals. You can also break the universe down to fundamentals. Does that mean that we can program that into existence too?
There are very good simulations of parts of reality which are based on today's knowledge, which is but a rough approximation of reality. As this approximation becomes finer, better simulations can be made. They'd require better processors, larger hard drives, enormous memories, but in the end it may be possible to make an artificial mini-universe. I think that to program a universe as big and as detailed as the real one, you'd need a computer the size of the real universe. But approximation is a good way to make a good image of the universe.

I thought that your posting a question on a public BBS meant that you were asking for other people's views. I was just trying to show that I thought I had grasped the context in which your question arose but thanks for making your indifference so apparent to me.

You were remarking that you knew where I was from. That doesn't sound like a view to me. And I'm repeating my question, "So what?"


Isaac,

To me, saying that a person cannot be programmed on some sort of computer, that there can be no artificial personality, is like introducing the whole mind-body dichotomy again. It is like saying that humans have souls which are "trapped" in the material body which enables them to interact with the universe and can exist without their physical representation.

There is a combination of matter in my mind and my body which makes it possible for me to be aware of the things that surround me, of my own body, even of the thoughts I myself think. Why could that combination of matter not be reproduced elsewhere in an artificial way and without copying (cloning)?

Computer programs are based on "true" and "false". There is tangible hardware and intangible software. In regard to humans, there is tangible body and intangible soul. Saying that computers can't have such software as to be able to at least mimic, if not fully reproduce, human behaviour is saying that the human soul functions on something else, something other than "true" and "false". Well? What else is there?
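And "true" and "false" are not a restrictive vocabulary: a single boolean primitive already suffices to build every other boolean operation. A minimal sketch of my own, purely illustrative:

```python
# Every boolean function can be composed from NAND alone, so building on
# "true" and "false" loses nothing in principle. (Illustration only.)
def nand(a, b):
    return not (a and b)

def not_(a):
    return nand(a, a)

def and_(a, b):
    return not_(nand(a, b))

def or_(a, b):
    return nand(not_(a), not_(b))

assert and_(True, True) and or_(False, True) and not not_(True)
```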


Causality is the law of identity applied to action. I agree that the brain has the causal powers that give rise to consciousness. Let me state my disagreement this way: there is no evidence to suggest that we can re-create the causal powers of the brain that give rise to consciousness without also re-creating the living processes that go along with them. The identity of the brain includes its nature as a life organ and you cannot exclude this fact from discussions about the brain's causal powers.

Brains are organs of living organisms. They are an integrated part of the bodies of living things. As such, there are a slew of processes that must be maintained in order to sustain the brain and its efficacy: blood must be circulated, neurons must be kept in a bath of special chemicals, hormones need to be produced and sent to other parts of the organism, etc. At our current stage of knowledge, there isn’t one iota of evidence that suggests that we can extract just those processes that give rise to consciousness (assuming that we even knew exactly what they are) from the rest of the processes involved in maintaining and regulating a nervous system. To suggest that we can do this is to drop entire fields of knowledge (e.g., neurology, biology, etc.). Some of you say that my conclusion is premature but I disagree. I believe that there is enough evidence right now to be certain of this.

I do not accept current “research” from fields like artificial intelligence and cognitive science as evidence for anything. Such fields are patently corrupt due to the philosophies that guide them. So far--short of intuitions and impossible hypotheticals--this is all that has been offered in arguments against my position.


Bowzer,

It sounds like this is your argument, in a nutshell. Correct me if I'm missing something.

1. Brains are biological.

2. It is impossible to do what a brain does without being biological.

.: A (non-biological) artificial brain is impossible.

You say that you believe that there is evidence available right now to confirm this. I believe that your second proposition is specious and untrue. If you could provide some of the "available evidence," or at least a strong argument for this based on mutually held propositions, I'm sure we'd all appreciate it.

I do not accept current “research” from fields like artificial intelligence and cognitive science as evidence for anything.  Such fields are patently corrupt due to the philosophies that guide them.
It's easy to say that there's a problem. It's much harder to point out exactly what that problem is, and even harder to come up with a solution.

Personally, I believe that you're right, to an extent: a lot of work in cognitive science and artificial intelligence has been short-circuited by bad philosophy. (please excuse the pun!) However, that does not necessarily mean that all the research done in these fields is bogus or useless. I'll grant that a lot of time and energy has been wasted by people who really don't understand what a mind is, which could have been put to more productive use. But you're going to have to show what those errors are, and that they are a fundamental problem with the topic as such, if you're going to make a case for writing off the entire field.

Here's a similar example, where the error is more apparent:

Many or most doctors are altruists.

Research done by those accepting a philosophical error is suspect.

Altruism is a philosophical error.

.: We can't trust anything doctors tell us about medicine.

Altruism may be a philosophical error, and it may be the case that many or most doctors are altruists. However, it's not clear that altruism will throw off the conclusions that one draws in the field of medicine. Furthermore, even if the error in question does affect the conclusions, it's not clear that ALL doctors are altruists--therefore, we've only shown that one must investigate claims made by doctors carefully, with our philosophical radar on active ping.

Isaac Schlueter

http://isaac.beigetower.org


Also, I would like to point out that some of the research in cognitive science and artificial intelligence has already produced useful technology. That doesn't prove that the theories are valid - but it does strengthen them considerably.


Bowzer:

I agree that the brain has the causal powers that give rise to consciousness. Let me state my disagreement this way: there is no evidence to suggest that we can re-create the causal powers of the brain that give rise to consciousness without also re-creating the living processes that go along with them.
Correlation doesn't equal causality. You're assuming that it does.

Even if one accepted your premise--that there's no evidence to suggest what you describe (and I don't accept it)--it's wildly illogical to then conclude "therefore it can't happen." All you can say is that we can't be sure that it can. You continually state that you're certain that consciousness can't exist without "life." But the only argument you've offered is that there is a correlation between life and consciousness. Therefore your "certainty" is baseless.


I believe Bowzer is correct. Saying that one can create consciousness without life is like saying one can create gravity without matter. Granted, one might be able to generate a force (like magnetism) that behaves like gravity, i.e., a force that is inversely proportional to the square of the distance, directly proportional to mass, etc., but what you then have is not gravity, but a mimic of gravity.

Developing a mimic of consciousness is going to be many, many orders of magnitude more difficult -- and in the end it will still be just a mimic.


I do not accept current “research” from fields like artificial intelligence and cognitive science as evidence for anything. Such fields are patently corrupt due to the philosophies that guide them.

A really interesting exercise is to identify the specific connections between the "harder" scientific journals that publish in this area -- such as Brain and Cognition, Neural Networks, Consciousness and Cognition, Proceedings of the National Academy of Sciences, Brain Research Reviews, etc. -- and the more explicitly philosophical ones -- such as Philosophical Perspectives, Social Studies of Science, The British Journal for the Philosophy of Science, The Philosophical Quarterly, etc.

I regularly read these journals and, if you were to look at them side by side, you literally can connect the absurd philosophical notions with the absurd science. There is a one-to-one correspondence between the two. This is a real shame, but not only for the obvious reasons. There actually are a few good researchers in the field -- I know this personally, first-hand -- and out of context there is some decent work that is done. But, even so, I completely agree with Bowzer; the field is so philosophically corrupt that even out-of-context knowledge loses its value among the overwhelming din of pure nonsense.

So far--short of intuitions and impossible hypotheticals--this is all that has been offered in arguments against my position.

I agree. And, I might add, these sort of distorted arguments are heard quite often from those who dabble in this field. Properly knitting together philosophy and science requires an in-depth knowledge of both, and a proper epistemology does not consist of simply deducing the latter from the former. Pontificating about ideas is not the same as experimental science, and experimental science without proper ideas is of no more value than is pontification.


Can someone please define the word "consciousness" for the purpose of this discussion?

It seems to me that consciousness is a sort of information processing. The question is: is it logically possible that the same sort of information processing can be done by something other than a biological organism? Why or why not?

First, though, we must define what we're talking about, or we're wasting electrons on this.


Quick reply, Isaac, I am making an inductive argument (not a deductive syllogism) and as my evidence I am offering up all of our (valid) knowledge about the brain and how it works. I would love to spend hours writing about all of this knowledge (and perhaps some day I shall) but I won't do that here. Simply hit a used bookstore and find a good textbook on neurobiology. Or are you saying that you disagree with fields like neurobiology as I do with cognitive science? That you reject all of the evidence that I am alluding to?

I am saying that the burden of proof is on you to show how consciousness can exist apart from living organisms--a conclusion that contradicts (is not merely unsupported by) everything that we know about consciousness.


Can someone please define the word "consciousness" for the purpose of this discussion?

Isaac, you either have not read the basic Objectivist material or you completely disagree with it. If it is the former, then I would be happy to provide some references. If it is the latter, then please understand that this is an Objectivist discussion group and most of us come here to debate with others who 1) have a basic understanding of the philosophy and/or 2) are making an attempt to understand the philosophy.

I am not an admin and I'm not trying to get you in trouble but you must have read the forum rules before you registered to post here. Did you at least check into the Objectivist literature before posting your question? Do you have any interest in actually learning about Objectivism in posting here? Please be sincere.


...So the "machines only ever do what their programmers intend" is not a true proposition, and the "argument from human intent" is unsound.

Fine. The programmer's exact intentions are not crucial here. Change it to "machines only ever do what they are programmed to do" and we still have an argument that you have not refuted.


Cells only ever do what they are programmed to do.  Programmed by DNA--a "digital" code every bit as deterministic as computer code.  Yet cellular life gave rise to consciousness.

Can someone explain that one to me?

It happened the same way that non-living chemicals combined to form living entities. Life is an emergent property of some non-living chemicals arranged in a particular way.

Likewise, consciousness is an emergent property of some living things.

Likewise, volitional consciousness is an emergent property of some conscious living things.

