Objectivism Online Forum

Can man mechanically recreate consciousness?

LadyAttis

What is your point?

Physicists who write those books often mention how quantum mechanics and classical mechanics contradict each other. She wanted references on where I heard that statement in particular said by a physicist, so I gave her some, the rest of the actual content of the book notwithstanding.

Only in the classical physics models do we get the ultraviolet catastrophe. Max Planck normalized the issue by only allowing light to come in specific frequencies and specific amounts of energy. :P

That's because Classical Mechanics is not built on one of the things QM is: discrete units. In Classical Mechanics, spacetime and energy can come in arbitrary amounts, and even interactions can vary in the amount of spacetime and energy they use, which is absurd. If system A has the same particles as system B, their behaviors should follow a similar path. But under Classical Mechanics, their behaviors can be totally random! This is the error of Classical Mechanics, not the error of Quantum Mechanics. You're sort of taking Einstein's position on QM here, and that hasn't ever been resolved. :(

That's because of what I said before: Classical Mechanics' measures and so on are arbitrary.

String Theory doesn't really even fit the term theory; it's more of a hypothesis. It takes already proven premises and then uses something unverified [extra dimensions and entities called strings] to make new predictions [I think they predicted new particles, but I'm not sure, it's been a while]. But to verify String Theory [and M-Theory] would take a particle accelerator with more power than we can currently generate.  :)

But luckily Loop Quantum Gravity is going through another test, after failing a previous one. Who knows, maybe LQG gets the prize of being the theory of everything.  :pimp:

-- Bridget

How long DO you have to practice evading the issue at hand in order to make a post like THAT and say absolutely nothing about the issue that was discussed? I don't need a lecture in physics, thank you very much.

I'm going to quote myself here in order to turn this post to the problem that I was explaining in it:

Clearly, to describe the universe as a whole, either one of the theories is correct, or neither is. It can't be both, because they are contradictory. Many physicists have said so, including my physics professor.

Wrong again. Only if you use the current system of logic that is used on computers. Even that logic is incomplete. There is nothing in Rand's statements that puts consciousness equal to reality and identity. Not one damned statement of hers. If you say consciousness is equal to reality and identity, then you're a transcendentalist, not a supporter of objective reality.

The current AI with If-Then-Else statements is incomplete, yes, but with quantum computers and the new sets of languages we're using on them, an AI can and will eventually be able to develop to handle ideas like context and so forth.

Also, not one of you has shown where humans become volitional. Are we or are we not animals? We gestate and are born. We evolved from other species as a species. There is nothing special about our origins.

If you claim that we:

1. Are not animals,

2. Didn't evolve, and

3. Don't develop over time,

you'll pretty much disregard every bit of scientific data from the last 100 years that says so. I really want your views clarified. ^___^ This isn't exactly what I call rational thought if you don't accept what is known in science. :)

Current system of logic? What, the system changes over time? What was true yesterday will no longer be true tomorrow? Is that the kind of "logic" that you think computers should use in the future? All logic is based on the basic logical principles, which are represented in computers by TRUE and FALSE, IF-THEN-ELSE, etc. Using these, you can get anything - even many-valued logic or fuzzy logic - if only you apply the basic principles correctly. I don't see what novelties a quantum computer should bring in that respect.
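To make the point concrete, here is a small sketch (my own illustration, not anything from the thread): the common min/max fuzzy-logic connectives are built out of nothing but ordinary comparisons and subtraction, i.e. out of operations any classical computer already performs.

```python
# Illustrative sketch only: min/max fuzzy-logic connectives built from
# ordinary arithmetic and comparisons. Truth degrees are floats in
# [0, 1] instead of the two values TRUE and FALSE.

def fuzzy_and(a, b):
    # min-based conjunction
    return min(a, b)

def fuzzy_or(a, b):
    # max-based disjunction
    return max(a, b)

def fuzzy_not(a):
    # standard negation
    return 1.0 - a

# "Mostly true" AND "somewhat true" comes out "somewhat true":
print(fuzzy_and(0.9, 0.4))                      # 0.4
print(round(fuzzy_not(fuzzy_or(0.9, 0.4)), 1))  # 0.1
```

Nothing here needs exotic hardware; the many-valued connectives reduce to the basic operations, which is exactly the point being made above.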

Most of what you say has nothing to do with anything I said. It only points to the fact that you are talking about something that is beyond you. What do you know about programming? What HAVE you programmed in your life? I've heard a lot of people who never programmed anything at all talk about programming and say more nonsense than I had heard in my entire life up to that point. You're on the right track to make me relive the experience. At least I'll have a good laugh.


I will take a look at the papers. In the meantime I can state that there is nothing special about QTMs that makes them better than regular Turing Machines. Turing's hypothesis holds for all machines which follow a deterministic course, regardless of how they were made - bits and electrons (modern computers), biological cells, simple running water switching levers, or subatomic particles. If you disregard the modern hysteria about random and 'weird' Quantum Mechanics, you will see that subatomic reality, although unusual and different from our macro world, still follows the same rules and laws, and still obeys the laws of causality and identity. If this last point holds true for Quantum Mechanics (as it must), then Quantum Turing Machines necessarily fall under the category of regular Turing Machines, and are therefore subject to the same conclusions.
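For what it's worth, the deterministic scheme being described can be sketched in a few lines. This is my own toy illustration (function names and the `flip` machine are mine, not from the papers mentioned): any machine whose next step is fully fixed by its current state and the symbol under the head fits this mold, whatever it happens to be built from.

```python
# A minimal deterministic Turing machine simulator (toy sketch).
# rules maps (state, symbol) -> (new_state, new_symbol, move),
# where move is "L" or "R". The tape is a nonempty string; "_" is blank.

def run_tm(rules, tape, state="start", accept="halt", max_steps=1000):
    tape = dict(enumerate(tape))      # sparse tape: position -> symbol
    pos = 0
    for _ in range(max_steps):
        if state == accept:
            # read back the visited portion of the tape
            return "".join(tape.get(i, "_") for i in range(min(tape), max(tape) + 1))
        symbol = tape.get(pos, "_")
        state, tape[pos], move = rules[(state, symbol)]   # fully determined step
        pos += 1 if move == "R" else -1
    raise RuntimeError("step budget exhausted")

# Example machine: flip every bit, then halt at the first blank.
flip = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}
print(run_tm(flip, "0110"))  # 1001_
```

The `flip` table is of course trivial; the claim in the post above is that even a quantum implementation of such a deterministic transition scheme changes the physics, not the computational category.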

The way I see it, there are two possibilities: either a) the physical world is entirely deterministic, yet still manages to produce volitional entities, in which case there is no reason why a fully deterministic computer program would not be able to do likewise, or b) the physical world is not fully deterministic, in which case there is no reason why a computer would have to be fully deterministic. Either option leaves open the possibility of volitional computing. You seem to want to say that, although nature is deterministic, the fact that computers are also deterministic means they are incapable of 'choice' or consciousness. I think that this is intrinsically self-contradictory.

1) You assume that the human mind is a "sufficiently complex" deterministic program of some sort, with volition fundamentally as an illusion, and only seen as 'free' on a macro level
No I don't. I think that if the human brain can be considered as a sufficiently complex deterministic program of some sort, then a sufficiently complex deterministic structure is somehow able to produce creatures with true volition. I've no idea how this is possible and it's something that hopefully science will clarify in the future, but I find it no more 'strange' than the idea that deterministic physical 'matter' is somehow able to produce creatures capable of perceiving other pieces of matter and having intentional experiences.

Do you not differentiate between the behavior of a biological entity and "code lines" that mimic such behavior? Is there no difference between the neural processes of a brain and execution of code in a computer program?

I'm not sure what you mean. A lot of physical phenomena can be simulated by computer, and I can't think of any reason not to assume that this would also apply to the brain. If a program is capable of fully modelling a physical system, then this would entail that 'lines of code' would be capable of replicating the physical structure of the human brain. Since I believe that consciousness, including volition, arises from this structure rather than from the properties of the physical material itself, this means that I generally refuse to rule out the possibility of computers with volitional consciousness (although how we could ever know that a particular computer has actually attained consciousness, let alone volitional consciousness, is an entirely different question...)

The current AI with If-Then-Else statements is incomplete, yes, but with quantum computers and the new sets of languages we're using on them, an AI can and will eventually be able to develop to handle ideas like context and so forth.
I'm not entirely sure what you mean here. It is certainly possible at the present time to create AIs which operate according to non-classical logics, but you're still limited to the physical properties of the hardware itself. No matter how advanced your logic is, it's still going to be translated into basic 'if then else' statements of machine code, which are going to be executed on a CPU composed of deterministic transistors and other electrical components. The only way to get around this limitation is either a) to construct a computer which is somehow non-deterministic at the hardware level, if this is even possible, or b) to find a way in which these 'IF-THEN-ELSE' statements can somehow transcend determinism as a result of structural complexity, in the same way in which matter in the human brain does according to some models.
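As a small illustration of that hardware point (my own example, not anything from the post above): even a program that appears to "choose" bottoms out in deterministic operations. A pseudo-random generator with a fixed seed makes the determinism visible, since re-running it reproduces the "choices" exactly.

```python
# Sketch: apparent "choice" on deterministic hardware. A seeded
# pseudo-random generator is a deterministic function of its seed,
# so the same seed always yields the same sequence of "decisions".

import random

def choose_actions(seed, n=5):
    rng = random.Random(seed)          # state fully determined by the seed
    return [rng.choice(["left", "right"]) for _ in range(n)]

# Two runs with the same seed are identical, decision for decision.
print(choose_actions(42) == choose_actions(42))  # True
```

Whether structural complexity could ever make such if-then-else machinery amount to genuine choice is, of course, exactly the open question being debated here.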

Hal, my point in participating in this thread was to show that Bridget's hopes for a "more powerful language" or a "more complicated algorithm" that will do what we can't do now are unfounded. The languages we have now are as powerful as they are ever going to get. Future computer languages may feature faster performance or better memory efficiency, but they will never be able to accomplish anything that modern languages couldn't accomplish if they were simply given more time and more memory. That's the very essence of the Turing hypothesis.

I will refrain from contributing my own thoughts on this issue because I don't think this thread is serious or rigorous enough for a detailed discussion of this complicated subject. I will say this, however: the mind can solve all problems in the universe, including the "uncomputable" problems that stump the all-powerful Turing Machines. We obviously solve something like the Halting Problem all the time; if it were impossible for man to solve, we would never be able to do computer programming.


Um, no. And again, no. You shift base YET AGAIN. You have never invalidated anything I've said so far. You've only whined about this and that, even though I never misused a single term whatsoever.

You can stomp your feet on the ground and ignore what I say all that you want, but the fact remains that you do not know what you are talking about, and you are speaking gibberish. You have absolutely no idea what the concept of spacetime means. When you say "spacetime ... can form in arbitrary amounts," and "the amount of spacetime ... they use," you might as well have said "spacetime can hit a home run and run the 100 yard dash in under 10 seconds." You make utterly nonsensical statements. A sort of Alice in Wonderland grasp of physics.

How you have the audacity to pronounce judgments on Ayn Rand and the philosophy of Objectivism, while simultaneously displaying profound ignorance and such a muddleheaded psycho-epistemology, is truly amazing.


Physicists who write those books often mention how quantum mechanics and classical mechanics contradict each other.

I doubt that you can reference even a single instance of a superstring theory book written by a physicist which makes the claim you suggest. If by some chance you do find such an instance then you have found a very confused physicist. Knowledgeable physicists know that classical and quantum mechanics each have their own domain of applicability and there is no more contradiction between the two than there is a contradiction between the existence of Los Angeles and New York City. (Well, considering the rivalry between residents in these two cities, that may not have been the best analogy. :) )


I'm not sure what you mean. A lot of physical phenomena can be simulated by computer, and I can't think of any reason not to assume that this would also apply to the brain.

Is running a flight simulation on a computer the same as flying an actual plane?

If a program is capable of fully modelling a physical system then this would entail that 'lines of code' would be capable of replicating the physical structure of the human brain.

The difference between a model created in a computer and the thing it models is the same as the difference between imagining an event and experiencing it in physical reality.

Since I believe that consciousness, including volition, arises from this structure rather than from the properties of the physical material itself, this means that I generally refuse to rule out the possibility of computers with volitional consciousness

It is difficult to understand precisely what you mean when you contrast "structure" with the "properties of the physical material itself." If you meant "structure" to refer to the whole integration of the physical material, and consciousness as an emergent property of this whole, that would mean one thing. But I suspect that you mean "structure" to mean some other sort of "whole," something not requiring the "physical material" itself. Perhaps you can clarify.


Is running a flight simulation on a computer the same as flying an actual plane?

I, of course, agree with your point here, Stephen. We must remember, however, that consciousness is an exception in the minds of people trapped in the errors of today's cognitive sciences. For them, the analogy between a computer program and consciousness is a very literal one. A brain is just a piece of hardware computing the software that is the mind.

These people would agree that a flight simulator is not the same as flying an actual plane but they will emphatically disagree that a mind simulator is not actually conscious. In fact, computer "models" of cognition very often stand in place of real subjects in many psychological research studies these days. I found this quite stunning when I first discovered it but I have since learned that psychologists swallowed that horse pill long ago.

I'm not sure if this is Hal's position or not but it is the predominant view taught unflinchingly in university science and philosophy courses.


Is running a flight simulation on a computer the same as flying an actual plane?
What do you mean by running a computer simulation? You could certainly learn to fly a plane by use of a computer simulation in a virtual reality environment, and I believe that pilots are often trained that way. There would be no phenomenological difference whatsoever between using a sufficiently advanced computer simulation, and flying an actual plane. If I've missed your point then please clarify what you mean.

The difference between a model created in a computer and the thing it models is the same as the difference between imagining an event and experiencing it in physical reality.
But there's no difference from the point of view of the person as he is experiencing it. When the imagining is sufficiently vivid (such as during a dream or hallucination), there is nothing to distinguish between what is purely a product of the mind and what is real. It is only after one wakes up that the difference can be appreciated - i.e. only when viewed from the outside.

It is difficult to understand precisely what you mean when you contrast "structure" with the "properties of the physical material itself." If you meant "structure" to refer to the whole integration of the physical material, and consciousness as an emergent property of this whole, that would mean one thing. But I suspect that you mean "structure" to mean some other sort of "whole," something not requiring the "physical material" itself. Perhaps you can clarify.
I meant the relations in which the parts stand to each other. The world consists of relations as well as objects, and I was using 'structure' to describe the former. When I say that a computer could replicate the structure, I mean that it could duplicate the relations using different 'non-material' objects. For instance, a computer running a flight simulation would have to duplicate all the structural relations that flying involves (such as the sky being above the plane, the location of controls in the cockpit, and so on). The computer wouldn't be using 'real' objects (the 'sky' in the program obviously isn't the actual sky), but the relations between parts would be identical. Likewise, when a child plays with a model airport, the structural relations of the toy duplicate the structural relations of a 'real' airport. The model plane is on the model runway, and the model pilot is in the model cockpit.

I, of course, agree with your point here, Stephen. We must remember, however, that consciousness is an exception in the minds of people trapped in the errors of today's cognitive sciences. For them, the analogy between a computer program and consciousness is a very literal one. A brain is just a piece of hardware computing the software that is the mind.

...

I'm not sure if this is Hal's position or not

An analogy is just an analogy. The hardware/software distinction provides a decent metaphor for a certain model of consciousness (mind produced by events in the brain), but that's about it. People generally try to explain the unknown by reference to the known, and it so happens that a lot is known about computers, and very little about the workings of consciousness.

I don't really think this is relevant to the AI debate though; the issue isn't whether the brain 'is' a computer, but whether something akin to the mind/brain could be replicated on a computer. Even if someone managed to produce a fully conscious/volitional/whatever robot tomorrow, it would in no way imply that the brain 'is' a computer or that the mind 'is' a computer program.


Hal, my point in participating in this thread was to show that Bridget's hopes for a "more powerful language" or a "more complicated algorithm" that will do what we can't do now are unfounded. The languages we have now are as powerful as they are ever going to get. Future computer languages may feature faster performance or better memory efficiency, but they will never be able to accomplish anything that modern languages couldn't accomplish if they were simply given more time and more memory. That's the very essence of the Turing hypothesis.

I will refrain from contributing my own thoughts on this issue because I don't think this thread is serious or rigorous enough for a detailed discussion of this complicated subject. I will say this, however: the mind can solve all problems in the universe, including the "uncomputable" problems that stump the all-powerful Turing Machines. We obviously solve something like the Halting Problem all the time; if it were impossible for man to solve, we would never be able to do computer programming.

Why is there any reason to believe humans can solve the Halting Problem? Does the following program terminate?

int i = 4; bool loop = true;
while (loop) {
    if (i is not expressible as the sum of 2 primes) loop = false;
    i = i + 2;   /* step through the even numbers, which are what Goldbach's conjecture concerns */
}

Remember, being able to solve the Halting Problem means that you can have a decision procedure for checking whether ANY arbitrary program terminates, not just a particular one.

Reason for editing: I forgot to add the increment statement.


An analogy is just an analogy. The hardware/software distinction provides a decent metaphor for a certain model of consciousness (mind produced by events in the brain), but that's about it. People generally try to explain the unknown by reference to the known, and it so happens that a lot is known about computers, and very little about the workings of consciousness.

I can cite references from a dozen cognitive science and psychology textbooks that hold the computer/brain analogy to be entirely literal. It's not a view that I hold and it apparently isn't one that you hold either. That wasn't my point. My point was a generalization about the cognitive sciences.

I think that quite an awful lot is known about the workings of consciousness. From a philosophical standpoint, all of the fundamentals have been stated explicitly in several places in the Objectivist literature. Most notably in Introduction to Objectivist Epistemology and in Dr. Binswanger's lecture, The Metaphysics of Consciousness.


I can cite references from a dozen cognitive science and psychology textbooks that hold the computer/brain analogy to be entirely literal. It's not a view that I hold and it apparently isn't one that you hold either. That wasn't my point. My point was a generalization about the cognitive sciences.
I know, I was clarifying my position since you mentioned me at the end of your post.

Yes Hal, I can tell you an answer to your hypothetical problem within a finite amount of time. Calculating the answer to "is 'i' expressible as the sum of 2 prime numbers" takes a finite amount of time, and after that I can tell you whether that loop halts or not.

I know the Halting Problem is stated for any loops imaginable, but look at the empirical evidence. I brought up computer programmers for a reason: they solve the Halting Problem many times a day for all of the loops they have to deal with. Surely if you need evidence of men solving "as many instances of the Halting Problem as possible", look no further than the programmers. If they can solve it, what further evidence do you need? Infinitely many programmers working with infinite speed for infinitely many hours upon infinitely many loops?


Yes Hal, I can tell you an answer to your hypothetical problem within a finite amount of time. Calculating the answer to "is 'i' expressible as the sum of 2 prime numbers" takes a finite amount of time, and after that I can tell you whether that loop halts or not.

I know the Halting Problem is stated for any loops imaginable, but look at the empirical evidence. I brought up computer programmers for a reason: they solve the Halting Problem many times a day for all of the loops they have to deal with. Surely if you need evidence of men solving "as many instances of the Halting Problem as possible", look no further than the programmers. If they can solve it, what further evidence do you need? Infinitely many programmers working with infinite speed for infinitely many hours upon infinitely many loops?

Sorry, I made a mistake in the original post, which I've edited - I meant to add the increment statement, which turns the loop into one that terminates if and only if the Goldbach conjecture, an unsolved problem in mathematics, is false. The point is that the loop would NEVER terminate if the Goldbach conjecture is true, therefore you couldn't decide whether the program would halt simply by continually testing all numbers. In order to work out whether it would terminate you would have to actually solve the Goldbach conjecture, which is something many mathematicians have attempted without success for several centuries.

A Turing machine can certainly decide if specific programs terminate, just like a human can. Turing machines could also decide, for any arbitrary finite number of programs, whether they terminate. But this isn't the halting problem - to solve the halting problem you need to somehow be able to tell whether any program whatsoever terminates, which is something humans don't seem to be able to do either.
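To make the asymmetry concrete, here is a rough sketch (my own, with a naive primality test; the function names are mine): checking the Goldbach conjecture over any finite range is purely mechanical, for man and machine alike, yet no amount of such checking decides whether the loop above ever halts.

```python
# Sketch: verifying Goldbach for a finite range is mechanical, but it
# cannot settle the conjecture, and hence cannot settle whether the
# loop in the earlier post halts.

def is_prime(n):
    if n < 2:
        return False
    # trial division up to sqrt(n); fine for small n
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def goldbach_holds(n):
    """True if even n >= 4 is a sum of two primes."""
    return any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1))

# Every even number up to 1000 checks out; the conjecture itself remains open.
print(all(goldbach_holds(n) for n in range(4, 1001, 2)))  # True
```

Computers have verified the conjecture far beyond this range, but a finite search, however large, is not a proof - which is the point both sides here are circling.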


Ok, that's like asking a man who doesn't know Calculus what the sum of 1/2+1/4+1/8+1/16+...+1/2^n is, as n approaches infinity. As he struggles with the answer, you will point at him and laugh, saying how limited man's mind is. Then Newton will pass by and set matters straight.

This is a similar issue. Just because there exists a certain unresolved problem in mathematics, it does not follow that it will always be unsolved. Solving any math problem takes a finite amount of time, so all of your challenges based on the premise of currently unsolved mathematical problems are flawed.


Ok, that's like asking a man who doesn't know Calculus what the sum of 1/2+1/4+1/8+1/16+...+1/2^n is, as n approaches infinity. As he struggles with the answer, you will point at him and laugh, saying how limited man's mind is. Then Newton will pass by and set matters straight.

This is a similar issue. Just because there exists a certain unresolved problem in mathematics, it does not follow that it will always be unsolved. Solving any math problem takes a finite amount of time, so all of your challenges based on the premise of currently unsolved mathematical problems are flawed.

But then the exact same argument applies to Turing machines. Given any particular program or calculation, there's no reason why you can't find a Turing machine which decides whether it terminates.


You can't have a Turing Machine that will solve the puzzle you posted above. You CAN have a Man who will solve that puzzle, namely a very smart mathematician who will solve the math problem. Invention and such are far more than simple matters of computation, and are therefore way above what a Turing Machine could even dream of, pun intended.

As another proof, Turing Machines are subject to the Incompleteness Theorem, while Men aren't.

There are other examples. All attempts to reduce the human thinking and volition to a simple computational process are doomed to failure.


You can't have a Turing Machine that will solve the puzzle you posted above.
Why not? Computers have proved mathematical theorems before (not 'had the proofs programmed in', but actually managed to find a proof). One couldn't do it by iterating through all values of i (assuming the GC is true), but it might be able to find a proof by some other means, in the same way a human mathematician could.

As another proof, Turing Machines are subject to the Incompleteness Theorem, while Men aren't.
I think this is a far stronger argument against AI, but I've never really found it convincing. All it shows is that there are some statements whose truth value a particular computer couldn't decide. I don't think it's any more significant than saying that a human couldn't decide the truth value of "this statement is false" (assuming it has to be either true or false); indeed, Gödel sentences are basically just more complex versions of this statement anyway.

I, of course, agree with your point here, Stephen. We must remember, however, that consciousness is an exception in the minds of people trapped in the errors of today's cognitive sciences. For them, the analogy between a computer program and consciousness is a very literal one. A brain is just a piece of hardware computing the software that is the mind.

These people would agree that a flight simulator is not the same as flying an actual plane but they will emphatically disagree that a mind simulator is not actually conscious. In fact, computer "models" of cognition very often stand in place of real subjects in many psychological research studies these days. I found this quite stunning when I first discovered it but I have since learned that psychologists swallowed that horse pill long ago.

I'm not sure if this is Hal's position or not but it is the predominant view taught unflinchingly in university science and philosophy courses.

I too will wait to hear directly from Hal, but your overall point is right on the money. We have entire generations of students in the cognitive sciences growing up without any real understanding of the nature of consciousness, much less the nature of the external world. The "digital algorithms" and "AI" crowd are, in general, as divorced from reality as they come, holding fuzzy floating abstractions in place of valid concepts. Just look at the latest addition to our forum, "Bridget," aka "LadyAttis."

A smart man once said if you want to see what your face looks like, look in a mirror. If you want to see what your mind looks like, read what you write. Comparing the clarity and precision of thought of Ayn Rand to the hazy muddleheaded writings of this critic of Ayn Rand, paints as clear an image of their fundamentally different minds as one could ever perceive.


I don't think it's any more significant than saying that a human couldn't decide the truth value of "this statement is false" (assuming it has to be either true or false); indeed, Gödel sentences are basically just more complex versions of this statement anyway.

Not really. A Gödel sentence says, in effect, "This statement is not provable using the axioms and deduction rules of this formal system." It doesn't claim its own falseness; it claims its own unprovability within the system.

A statement claiming its own falseness and nothing else is meaningless, as it does not say anything about reality. The sentence "This sentence is false" cannot be evaluated because it has no referent in reality; it is neither true nor false. By contrast, a sentence like "I always lie" does say something about things external to itself; it does not only claim its own falsehood but also the falsehood of everything else its proponent has ever said or will ever say. So, the sentence "I always lie" can be evaluated--and if the person saying it has ever told, or is ever going to tell, the truth, then the sentence is false.


Not really. A Gödel sentence says, in effect, "This statement is not provable using the axioms and deduction rules of this formal system." It doesn't claim its own falseness; it claims its own unprovability within the system.
I meant that I didn't think it had any more significance. A computer can't decide the truth value of a single statement (well, an infinite number of them, but even so). So what? There are many statements which a particular human doesn't know to be true, and it isn't hard to think of statements which a computer could prove but an unaided human couldn't (what are the prime factors of 3^231^343 ?). Sure, a human could use a computer to calculate something like this for him, but then a robot could ask a human to tell it the answer to a Gödel sentence.

A statement claiming its own falseness and nothing else is meaningless, as it does not say anything about reality. The sentence "This sentence is false" cannot be evaluated because it has no referent in reality; it is neither true nor false. By contrast, a sentence like "I always lie" does say something about things external to itself; it does not only claim its own falsehood but also the falsehood of everything else its proponent has ever said or will ever say. So, the sentence "I always lie" can be evaluated--and if the person saying it has ever told, or is ever going to tell, the truth, then the sentence is false.

---------------------------------------------

| 2 + 2 = 4

