Objectivism Online Forum

Can computers engage in concept-formation?


Possible on what grounds?  How would YOU build the machine?

Here is a question: by what standard can you say whether something is “possible”? Clearly, some things are metaphysically impossible (contradictions such as centaurs and Pegasus), but what proof is required to declare something possible? We know that if something already exists, it is possible, but what if it's still in the design or planning stages, such as VTOL flying cars and 10GHz desktop computer chips? What if it's only an idea? What if I’ve just thought of an algorithm for a self-evolving computer program that I think will achieve AI?
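
(For clarity, by "self-evolving program" I mean something in the family of genetic algorithms. Here is a toy Python sketch of that general technique; it is illustrative only, and not the actual idea I have in mind:)

```python
# Toy genetic algorithm: evolves bit-strings toward all-ones.
# A sketch of the general "self-evolving program" technique only.
import random

def fitness(genome):
    """Toy fitness: count the 1-bits; evolution should push toward all-ones."""
    return sum(genome)

def evolve(pop_size=50, length=20, generations=100, mutation_rate=0.01):
    # Random initial population of bit-strings.
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]          # selection: keep the fitter half
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, length)     # single-point crossover
            child = a[:cut] + b[cut:]
            child = [bit ^ (random.random() < mutation_rate)  # rare bit-flip mutation
                     for bit in child]
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best), "out of 20")
```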


Here is a question: by what standard can you say whether something is “possible”? Clearly, some things are metaphysically impossible (contradictions such as centaurs and Pegasus), but what proof is required to declare something possible? We know that if something already exists, it is possible, but what if it's still in the design or planning stages, such as VTOL flying cars and 10GHz desktop computer chips? What if it's only an idea? What if I’ve just thought of an algorithm for a self-evolving computer program that I think will achieve AI?

If you have evidence that a particular algorithm can achieve AI, I'd love to see it. And yes, if you have such evidence, you should consider it possible. I doubt you do; I've certainly heard of nothing of the sort.

VTOL flying cars and 10GHz chips are just extensions of current technology. True AI -- not something that behaves intelligently, but something that actually IS intelligent -- would be a radically new kind of technology. It's not a fair comparison.


VTOL flying cars and 10GHz chips are just extensions of current technology. True AI -- not something that behaves intelligently, but something that actually IS intelligent -- would be a radically new kind of technology. It's not a fair comparison.

I am not saying it is a fair comparison - rather, I am asking for a standard.

If you have evidence that a particular algorithm can achieve AI, I'd love to see it.

No, but I have a vague idea for one. I wrote it down three years ago, before I knew anything about programming or concept-formation, so it’s rather primitive… but I’ve advanced slightly since then. I still have a ways to go before I get down to the coding...


How about this:

I say it is possible because I do not see how it is a contradiction.

The only things which are impossible are contradictions. If you can show me that AI is a contradiction, then you have shown that it is impossible. The claim that AI is impossible (or that I have no grounds for suspecting its possibility), simply because I cannot yet provide you with a way to make it, is absurd.


GC: Well, I can only go by what's on the link you provided, and it's not too impressive. As you said, you clearly didn't really understand concepts at that point. Your AI would be comparing symbols without any grasp of what they actually are: to it, "Cats are mammals" is the equivalent of "Grodzunks are plingafs." It'd be the ultimate rationalist. ;-)
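
To make the point concrete, here is a toy Python sketch of a pure symbol-shuffler; it is my illustration, not GC's algorithm. It chains "is-a" links without ever interpreting the tokens, so it handles nonsense exactly as confidently as sense:

```python
# A symbol-shuffler has syntax without semantics: the tokens are opaque
# strings, so meaningful and meaningless "facts" are processed identically.

facts = {("cat", "mammal"), ("mammal", "animal"),
         ("grodzunk", "plingaf"), ("plingaf", "quopple")}

def is_a(x, y):
    """Chain is-a links transitively; nothing here 'knows' what a cat is."""
    if (x, y) in facts:
        return True
    return any(is_a(mid, y) for (a, mid) in facts if a == x)

print(is_a("cat", "animal"))        # True
print(is_a("grodzunk", "quopple"))  # also True -- same machinery, no meaning
```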

Halley: I'm not claiming that AI is impossible on the basis of lack of evidence. Without evidence, one should not make any claim about possibility: one should not say it's possible, nor that it's impossible, but rather should refrain from taking a position.

That said, I do lean toward thinking it's impossible, but for other reasons -- of the sort that I mentioned above, regarding the connection between biology and consciousness.


I say it is possible because I do not see how it is a contradiction. 

The only things which are impossible are contradictions.

This is not the proper epistemological procedure to arrive at possibility. To consider something as being possible is to identify at least some evidence for it. To show that there exist no facts which contradict the proposition is not sufficient to say that something is possible.

As Peikoff points out in OPAR (p. 176):

"For an idea to qualify as 'possible,' there must be a certain amount of evidence that actually supports it. If there is no such evidence, the idea falls under a different concept: not 'possible,' but 'arbitrary'."


Consciousness--at its most fundamental level--is a survival mechanism.

Which would mean that the conscious robot, if such is possible, would need to find a way to stop its body parts from degrading. All materials eventually wear out. If the robot's mechanical parts break, it would have to find a way to fix them; it would also have to recharge from time to time (similar to us eating food) or suffer power failure. Is that not a survival mechanism?

By the way, breaking consciousness down into fundamentals, as you just did, can only help a programmer make a conscious computer. So, if you are correct, we are a step closer to doing it.

I know where you're coming from, source.
So? :confused:

Anyway, isn’t there already a thread on AI?

Yes, there is, but I intended to focus this thread on the question of whether an AI can be conscious and, if so, whether it should be granted individual rights.


However, you've radically over-simplified the problem. A conscious artifact would be as dissimilar as can be from computers as we know them today. So, "if you could program a computer to be conscious"--stop right there; you can't. That's like saying, "if you could ride a tricycle naked up Mt. Everest, would you be able to see my house from up there?"

I didn't over-simplify the problem; I merely simplified the question. By computer I didn't mean this machine that you're staring at; I meant, generally, a machine made of electronic parts combined with mechanical parts.

Maybe I should have asked this question: "Can you make a machine into which you could program consciousness, and if yes, do you think that individual rights should apply to it?"


Correct, Stephen...

I should not call it possible...

Not even metaphysically possible--metaphysically being the qualifier which removes Peikoff's point from the discussion, and the qualifier which I forgot to include in my original statement.

My main point, however, was that there is no evidence to establish it as impossible either.

That point remains valid.


My main point, however, was that there is no evidence to establish it as impossible either.

I do not think that that is relevant, since there is no evidence to consider it possible. However, if one were to focus on the issue of impossibility in this context, then instead of blithely pontificating about consciousness with "digital algorithms," one would directly address the evidence presented in the Objectivist and scientific literature which ties the actions of consciousness distinctively to life. (Some of these references were given previously.)

No one has to agree with that evidence, but a serious approach would directly address it instead of speculating about "digital algorithms." I for one have no real interest in debating this issue with those here, but I think there are a few who would do so if someone actually stepped up to the plate and addressed the evidence.


I do not think that that is relevant, since there is no evidence to consider it possible.

It is relevant because some people were claiming that AI is impossible (in the sense that there is no way to invent such a thing, that it is metaphysically impossible). I agree that this discussion is entirely a waste of time until someone offers evidence one way or another.


Which would mean that the conscious robot, if such is possible, would need to find a way to stop its body parts from degrading. All materials eventually wear out.

So where is the distinction between animate and inanimate things, source? If all materials wear out and this constitutes the end of the thing that wore out, everything or nothing is alive. Which is it?

If the robot's mechanical parts break, it would have to find a way to fix them; it would also have to recharge from time to time (similar to us eating food) or suffer power failure. Is that not a survival mechanism?

It is emphatically not a survival mechanism. Recharging a robot is in no way similar to us eating food. We are alive; robots are inanimate. But apparently, being alive is a superficial characteristic that has no place in my distinction.

By the way, breaking consciousness down into fundamentals, as you just did, can only help a programmer make a conscious computer.

A rational mind is constantly working with fundamentals. You can also break the universe down to fundamentals. Does that mean that we can program that into existence too?

So? :rolleyes:

I thought that your posting a question on a public BBS meant that you were asking for other people's views. I was just trying to show that I thought I had grasped the context in which your question arose, but thanks for making your indifference so apparent to me.


I agree that this discussion is entirely a waste of time until someone offers evidence one way or another.

If you want evidence that consciousness requires life, there is only the entire history of biology and neurology. Everything that we know about consciousness (this of course excludes the thought experiments popular in the philosophy of mind, which tell us absolutely nothing about consciousness) points to that fact. I figured that this was understood.

The burden of proof is on those who claim that consciousness can exist apart from life.


If a computer could be programmed to be conscious, could it be called alive? If it could choose its values and achieve them, if it could own property, earn it, and make other conscious computers, should we call it alive and grant it individual rights?

What would the computer/machine be "conscious" of? Reality? Or that which it has been programmed to perceive?

And how would it comprehend the evidence of its "senses"? By reason? Or by the guidelines of its program?

Then, supposing it can form concepts, how will it determine its "values"? By observing reality and using judgment? Or by operating within the rules set by its programmer?

If by "consciousness" you mean "human consciousness", then I believe that programming a human-type consciousness is a metaphysical contradiction. In order to create such a thing, you would have to not program it.

The epistemological validity of a human consciousness is due, in part, to the fact that it is not determined or programmed by a creator. An artificially programmed consciousness could never truly be certain of anything, not even within a context. For its context would not be its own; it would possess the context set by its programmer. Thus, the "knowledge" of the machine would always be dependent upon the knowledge of the programmer.

That is not a human consciousness, in my book. Nor would I say it is "alive". Rather, I think it would be a very complex robot.

Now, if we could somehow artificially create a consciousness that is not programmed and not dependent upon our own knowledge, that would be something different entirely. But I don't even know if that is possible.


Post edited by Isaac because people found this passage offensive.

The idea that biological makeup is a logical requisite for personhood can be dismissed quite easily with a hypothetical.

Imagine that there were a way to replicate a human neuron extremely well with a very tiny device. (Note: this cannot be done with today's technology. But there is no logical reason to believe that it cannot be done in principle. And we're talking principles here.) In fact, the replicas are so good that these artificial neurons can be inserted into a human brain, where they will start interacting with biological neurons and become part of the network.
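
(For concreteness: the standard computational idealization behind "artificial neuron" talk is something like the leaky integrate-and-fire model. Here is a toy Python sketch of that model, with made-up constants; it illustrates the modeling idea only, and is no claim about how a real implantable replica would work.)

```python
# Toy leaky integrate-and-fire neuron -- the textbook computational
# idealization of a spiking neuron. Constants are invented for the demo.

def simulate_lif(input_current, dt=0.001, tau=0.02,
                 v_rest=-0.065, v_thresh=-0.050, v_reset=-0.065):
    """Integrate an input trace; return spike times (in seconds)."""
    v = v_rest
    spikes = []
    for step, drive in enumerate(input_current):
        v += dt * ((v_rest - v) / tau + drive)  # leak toward rest + input drive
        if v >= v_thresh:                       # threshold crossed: emit a spike
            spikes.append(step * dt)
            v = v_reset                         # reset after firing
    return spikes

# 200 ms of constant drive yields a regular spike train.
print(simulate_lif([1.2] * 200))
```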

Artificial neurons could be used in cases where a patient lacks the ability to create new neurons, or when parts of their brain have been damaged and cannot be repaired effectively by the body's normal processes. (For example, stroke and Alzheimer's patients, or victims of traumatic head injury.) That this would be a huge advance in medical science is without doubt - but it would also raise the interesting question:

What if ALL the neurons in a brain were artificial? Say, for example, a patient loses the ability to grow new neurons due to damage to their brain stem, and they're fitted with a device that deposits artificial neurons into their brain at a steady rate, much like the brain itself does with biological neurons. Eventually, they won't have any natural neurons left.

Will this person stop thinking the day the last natural neuron dies?

I think that's a rather silly conclusion. Are the proponents of the "mind requires biology" thesis prepared to say that it does? How many biological cells are required in order to be a person? One? A simple majority? This slippery slope strengthens the functional account.

Another question (and I apologize that this is also a rather sci-fi-ish hypothetical, but the technology to do these things, while being worked on in various forms and theoretically possible, does not yet exist): What if you took a human brain out of someone's head and put it into a robotic machine that could respond appropriately to the electrical and chemical signals that the nerves send, and feed stimuli from its various mechanical parts back to the brain? So, in essence, the person would have a robotic body, but a fully human brain. They would be able to see, think, act, etc. Would this be a person? Would it have a mind?

Surely, there would be differences between their experience and ours. But that's no reason to conclude that they wouldn't be a person with rights in the fullest sense. They would most likely even have memories of the transition, and they might even be able to describe it somewhat intelligibly to a fully biological person.

(Again, this is a device that would have huge advantages for medicine. Imagine that a person is near-death because almost everything south of their neck has been destroyed in a horrible car accident. Their body won't make it. But their brain might, if given a new home.)

So, what if you put the artificial brain in the artificial body?

Clearly the resulting artifact would be radically different from any computer we've ever seen. In fact, it would be FAR more similar to a biological human than to a desktop PC. It might not even make sense to call it a "computer". It would be capable of making choices and taking actions that its designers never even imagined. It would have emergent properties whose nature could only be determined by inference from its behavior - like a human.

But it would also clearly not be biological.

I'm not saying that any computer that exists today can "think" in any meaningful sense. I'm also not saying that any von Neumann machine will ever be capable of personhood, no matter how powerful it is. Yes, the hardware matters, as Searle would say. But there is no logical reason to say that the hardware must be biological. If an entity can perform the same essential functions as a human--can process information in roughly the same way, can determine the same sorts of things about its environment, etc.--then it is a person, no matter what it's made of. (This is sometimes referred to as "functionalism," or the "quacks like a duck" argument, from the old proverb, "Looks like a duck, walks like a duck, quacks like a duck: it's a duck.")
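
Programmers will recognize the "quacks like a duck" argument as duck typing: classify a thing by what it does, not by what it is made of. A toy Python illustration (my own, not from any of the authors mentioned):

```python
# Duck typing: an object's kind is settled by its observable behavior,
# not by its class -- a loose programming analogue of functionalism.

class BiologicalDuck:
    def quack(self): return "quack"
    def walk(self): return "waddle"

class RoboticDuck:                  # different "hardware", same functions
    def quack(self): return "quack"
    def walk(self): return "waddle"

def acts_like_a_duck(thing):
    # Deliberately no isinstance() check: only behavior matters here.
    return thing.quack() == "quack" and thing.walk() == "waddle"

for duck in (BiologicalDuck(), RoboticDuck()):
    print(type(duck).__name__, acts_like_a_duck(duck))
```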

It is those who would object to this thesis who tend towards mysticism of muscle. Only by viewing Man fundamentally as a pile of meat can one conclude that without the meat (or, more accurately, with a different kind of meat that does the same job) personhood is impossible.

Isaac

http://isaac.beigetower.org


One more thing:

If a computer could be programmed to be conscious, could it be called alive?
If you think that this statement has even a tiny shred of meaningfulness, you shouldn't discuss this topic in public. I'm sorry. It's wrong on so many levels. The meaning of the terms "computer," "programmed," "conscious," and "alive" are not at all obvious in this context.

I reject that question on the grounds that it is arbitrary.

There. Now I think I've probably alienated everyone involved in this thread... hehe <_<

Isaac

http://isaac.beigetower.org


Bowzer writes:

Coincidentally, the latest issue of The Intellectual Activist has an article relevant to this discussion. Christian Beenfeldt shows why, if you are of the "machines can or will be able to think" camp, you have to be a materialist.
Christian Beenfeldt does no such thing, nor does he attempt to. You are enormously wrong.

The article shows very specifically that ideas about artificial intelligence based on Alan Turing's philosophy of consciousness are based on materialism, and are therefore false. Nowhere does the author state or imply that artificial intelligence as such is impossible. You have completely misread the article.

And Isaac:

Well said.


The idea that biological makeup is a logical requisite for personhood can be dismissed quite easily with a hypothetical.

Didn't you express a distaste for armchair philosophy, Isaac? Nothing is more armchair-ridden than a thought experiment (i.e., a hypothetical).

I'm a software engineer by trade and I am quite aware of how a computer works. I disagree, however, that advanced knowledge of computer engineering is required in order to see the point under discussion here. I think this point can be fully understood by a typical sixth grader.

Will this person stop thinking the day the last natural neuron dies? 

I think that's a rather silly conclusion.  Are the proponents of the "mind requires biology" thesis prepared to say that it does?  How many biological cells are required in order to be a person?  One?  A simple majority?  This slippery slope strengthens the functional account.

I know of these examples, and what I am prepared to say is that the more neurons that are replaced in a man's brain, the greater the chance that he will die. By the time you got down to one biological neuron in a man's brain, he would have been dead for quite some time (and since I feel that I have to point this out: this means that he would no longer be conscious either). There is no evidence to show that an entire brain can be replaced by circuitry. Yes, there is evidence that we can interact with a living brain through electrical currents, but there is nothing surprising about that. We know quite a bit about action potentials and how to stimulate them in a neuron. This does not equate to creating consciousness by means of electrodes. All of these experiments have been dependent on living cells, which lends support to my argument.

Again, everything that we know about consciousness points to the fact that it requires a living entity.


Bowzer,

There's a difference between a thought experiment and "armchair philosophy of the worst sort." A valid thought experiment proposes an "if", and then examines the consequences. Your objection is invalid. You throw out the beginning of the hypothetical, without any justification for doing so.

If you'd like to reject my conclusion, you can show:

1. An artificial neuron that can join a biological neural network is impossible in principle.

or

2. That my conclusions from the hypothetical situation are incorrect.

or

3. That it is not relevant to the question at hand. (IOW, even if it is possible in principle, and my conclusions correct, it doesn't prove the point I'm trying to make.)

What you can't do is say, "Replacing the brain with circuitry kills you."

Well, to be precise, you can say that, and you did. But it proves nothing. It's as relevant as saying, "No, you're wrong, because the sky is blue."

[Edited by Isaac to remove content that some people found offensive.]

Isaac

http://isaac.beigetower.org


For the benefit of those who haven't had the chance to read the article yet, I will briefly summarize here. He spends the first half of the article discussing the Turing Test and its errors. He then spends two pages discussing Turing's materialism:

According to Turing the human mind is mechanical through and through.
(emphasis in original)

He then introduces the concept of a "discrete machine." This concept is fundamental to the functionalist view of consciousness. A discrete machine is a machine that can only be in one of a limited number of possible states. The article then goes on to show that the functionalist argument presupposes materialism. He concludes:

It is...a profoundly mistaken view. It is the attempt to reduce consciousness out of existence, claiming that man's ability to think is nothing but the deterministic movement of a cogwheel, like the "clicks" of a mechanical system. As such, it is simply an expression of the materialism of a 'mystic of muscle'--cloaked in the scientific-sounding terminology of computer theory.
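
(An aside for those without the article: a "discrete machine" in Turing's sense is what computer science calls a finite-state machine: finitely many states, with each input deterministically "clicking" the machine into its next state. A minimal sketch of my own, not from the article:)

```python
# A discrete (finite-state) machine: finitely many states, and each
# input deterministically "clicks" the machine to its next state.

TRANSITIONS = {
    ("off", "press"): "on",
    ("on",  "press"): "off",
}

def run(state, inputs):
    for symbol in inputs:
        state = TRANSITIONS[(state, symbol)]  # the deterministic click
    return state

print(run("off", ["press", "press", "press"]))  # -> on
```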

I have no interest in having discussions that I am gaining no value from and I do not reply directly to smear posts. As I am new here, this is one case of learning who is worth my time and who isn't.

P.S.--The full title of the article is "Mindless Intelligence: Machine Thinking and Contemporary Philosophers' Rejection of the Mind" implying a very broad application.


I was not aware that pointing out an error in your statements constitutes a smear. In the future I'll refrain from contradicting you if it upsets you so.

I also have no desire to discuss whether the article does or does not contain the argument you ascribe to it, since it clearly does not. Yes, it has a broader scope than Turing's ideas alone, but only in that it refers to other forms of materialism. It does not declare that the idea of AI itself is inherently materialistic. I invite those reading this to seek it out themselves, as it is a very insightful article.


Bowzer,

... You may be a software engineer.  But you haven't demonstrated any great philosophical skill that I can see.  And being a "software engineer" just means that you can program computers - not that you have the foggiest clue about this topic.

First, in an earlier reply to you Bowzer noted that he had a degree in "Cognitive Science." In and of itself that is no guarantee of correct knowledge, but at least it does go to his standing in this subject, beyond that of the work of a "software engineer."

Second, since it seems you have reduced this discussion to personal accusations: for whatever it is worth, I would say that in his few posts Bowzer has exhibited a far superior understanding of philosophy, and a greater clarity of mind, than is evident in the somewhat muddled formulations I have seen in several of your postings.

Personally, I do not think that such a personal issue matters as to the veracity of the arguments made, but since you brought up a personal assessment I thought I would acknowledge it with one of my own.
