Objectivism Online Forum

Everything posted by Hal

  1. OK. There's probably a better and less complex example, but I couldn't think of one. Even this took me about 10 minutes to come up with and, yes, it's incredibly artificial.

--------
2+2=4
--------

Statement C: The above box contains a true statement.
Statement D: Statement C is necessarily true.

Now, by your logic earlier, statement C is equivalent to "'2+2=4' is a true statement" (this is the step you had to take in reducing my previous example to "statement A says that statement B is false", and vice versa). So, statement C is true. Statement D is also true, since '2+2=4' is necessarily true. But this has led to a false conclusion. I can go back and edit this post and change the statement in the box to '2+2=12', in which case statement C becomes false. Therefore statement C isn't _necessarily_ true - it could well have been false. I didn't _have_ to write "2+2=4" in the box - I could have written anything I wanted. Therefore, statement D is actually false.

(I'm not appealing to the standard necessary/contingent distinction here, and I agree that it's seriously flawed; however, this example would still work if you were to substitute 'metaphysically given' for 'necessary' - it's a 'man-made' fact that I wrote 2+2=4 in the box, yet the statement 2+2=4 itself is metaphysically given.)

I'm going to have to take some time to think about the rest of your post, so I'll reply later.
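[Editorial aside: to make the dependence explicit, here is a minimal sketch in Python - my own illustration, not part of the original exchange, with hypothetical function names - of how statement C's truth value tracks whatever happens to be written in the box, which is why C is true but not necessarily true.]

[code]
# Sketch: statement C ("the above box contains a true statement") is
# true or false depending on what was chosen for the box, so it is
# true as written, but not *necessarily* true.

def is_true(claim: str) -> bool:
    """Evaluate a simple 'a+b=c' arithmetic claim."""
    left, right = claim.split("=")
    a, b = left.split("+")
    return int(a) + int(b) == int(right)

def statement_c(box_contents: str) -> bool:
    """'The above box contains a true statement.'"""
    return is_true(box_contents)

print(statement_c("2+2=4"))   # True  - with the box as written
print(statement_c("2+2=12"))  # False - after editing the box
[/code]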
  2. I've just spent some time reading my posts from yesterday and I'd like to apologise for what seems like the flippant nature of my remarks towards Stephen Speicher and RationalCop - my comments make it seem like I'm more interested in cheap point-scoring than in actually communicating my thoughts honestly, and this was not my intention. I think I got too caught up in replying to individual points rather than trying to present my position in an integrated manner. I'd like to try to clarify where I was coming from, not in order to persuade people to agree with me, but rather to clear up any wrong impressions which my earlier posts created.

I've encountered several suggestions for how an AI could be produced, but I'm going to concentrate on one which functions by directly simulating the human brain, both because I find this the most plausible, and because this is what the discussion centered on yesterday.

The first question to ask is: what exactly does it mean for one thing to simulate, or to represent, another? In other words, if I want to create a representation of something, in whatever medium, what features must my creation have in common with the original? For it to actually be a representation it must have some things in common, but at the same time there must necessarily be differences between the representation and the thing represented - if there were no differences whatsoever then it wouldn't BE a representation - it would be the original object. All representations have to differ in some ways from the represented object - this is built into the very nature of simulation. So, a representation must have some features in common with the original object, and others not in common. Creating a representation then involves selecting the features of the original which you think are important enough for your representation to have - you can't get them all, so you have to make do with choosing the ones which seem important. Which features you choose will depend upon the purpose for which you are interested in modelling the original phenomenon in the first place.

To take a concrete example, let's return to the idea of simulating a plane. The flight simulator will have to share some features of the actual flying experience, but which features these are depends upon why you are making the flight simulator. For instance, let's assume we wish to train civilian pilots, i.e. those who will be flying passengers from one location to another. In this situation the important things to simulate will be the controls of the plane - the cockpit, the gears, and so on. Other aspects of flying, such as talking to air stewards and ending up in a different place from where you started, will not be important for training pilots, and hence will not be included in the simulation.

Now, let's assume we have a different purpose: training fighter pilots. In this case our requirements have changed - we will have to simulate different aspects of the flying experience in order for our simulation to be useful. As RationalCop pointed out, the key difference here is 'danger' - the element of risk plays an important part in the role of a fighter pilot, and the "civilian pilot training simulator" described above will not capture this, and will hence be inadequate. One solution to this problem would be to build the element of danger into the simulator itself.
This could certainly be done, which is what I was trying to show with my 'poison gas' comment above (although I realise it sounded facetious). There do seem to be ways in which an element of danger could be added to a flight simulator which would make it a working model for training fighter pilots - we certainly _could_ have a model in which the pilots were actually at risk while using the simulator. Now, I doubt this would ever happen in a 'civilised' country, but perhaps a totalitarian country might train its pilots in this way. I once heard a story about the training of the Iraq soccer team - players who failed to perform at a certain level would be physically beaten. In this way, an element of 'danger' was added to the training sessions. But the important point here is that IF we thought that the simulation needed to replicate the danger of the actual fighter plane, this COULD be done.

Now, let's consider a simulation of the human brain. Is this possible? Again, the answer will depend upon what the simulator is trying to achieve. For instance, it may well be that in the future neurosurgeons will be trained to perform operations using a virtual model of the human brain. In this case, the designer of the simulator would need to confer with biologists and other neurosurgeons in order to find out what parts of the brain would need to be built into the simulator. If someone wanted to simulate the brain for a different purpose, they would need to include different aspects of it in their model. In none of these cases would the simulator 'be' the brain, but it could approximate it closely enough to be sufficient for the task in question.

Now that I've given a rough outline of what I think the simulating process involves in general, let's return to the original question: can the human brain be simulated for the purpose of producing consciousness in a machine? In other words, although our simulator will have to be different from the brain in SOME way (or else it would BE a brain rather than a simulation of a brain), can we simulate the parts of the brain which are responsible for producing consciousness and volition? The answer obviously depends upon the brain itself - i.e. what parts of it ARE necessary for consciousness to arise?

Now, I want to take a minute to distinguish between entities, and relationships between entities, because both are necessary to describe the physical world - we cannot describe reality by means of objects alone. Given a finite set of entities, there are many different ways in which they can combine - i.e. many different relations which can exist between them. For instance, given the letters A, B, C, D, E we could arrange them as follows:

[code]
A B C D E
[/code]

or perhaps as follows:

[code]
A
B
C
D
E
[/code]

In each case the objects themselves are the same, but the structural relations between them are different. I use the word 'structure' to refer to the sum total of relations between the entities. And if a group of entities has a certain structure, this structure can often be replicated using different entities. For instance, given the symbols $, %, &, !, } we could reproduce the second structure above, namely:

[code]
$
%
&
!
}
[/code]

So while the same objects can have a different structure, the same structure can also be formed by different objects. The question is: what features of the brain are essential for producing consciousness? Is it the entities themselves (the actual physical material), the structural relations between these entities, or a combination of both?
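[Editorial aside: the 'same structure, different entities' point can be made concrete with a small sketch in Python - my own illustration, not from the thread. A set of relations defined over the letters survives relabelling onto the symbols unchanged.]

[code]
# Sketch: the same structure (a set of relations) realised by two
# different sets of entities. Relabelling A..E onto $,%,&,!,} leaves
# the pattern of relations intact.

letters = ["A", "B", "C", "D", "E"]
symbols = ["$", "%", "&", "!", "}"]
relabel = dict(zip(letters, symbols))

# An arbitrary structure: which entity stands 'next to' which.
structure_on_letters = {("A", "B"), ("B", "C"), ("C", "D"), ("D", "E")}

# The same structure, formed by different objects.
structure_on_symbols = {(relabel[x], relabel[y])
                        for x, y in structure_on_letters}

print(structure_on_symbols)
# {('$', '%'), ('%', '&'), ('&', '!'), ('!', '}')}
[/code]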
The first alternative can be ruled out - as far as I know, the brain is made from the same material as the wooden desk I am sitting at, once we get down to the subatomic level, and yet the desk is obviously not conscious. Likewise, we could open up someone's head and cut their brain into two pieces, and although the same material would still be present in the brain, it would no longer produce the mind. Therefore consciousness is produced either by the structural relations of the entities inside the brain alone, or by this structure in combination with those particular entities.

If the structure is all that is necessary, then machine consciousness/volition would definitely be possible - individual neurons could be simulated in such a way that their relations in the simulation were identical to their relations in the human brain, and hence consciousness would be generated. If, however, consciousness depends not simply upon the way the entities are arranged, but also upon some material property which they have and which a computer could not simulate, then replicating consciousness on a machine by simulating the human brain would be impossible (at least on our current models of computing).

Therefore the answer to the AI question depends entirely on features of the brain, and of consciousness, which are currently unknown. We do not know what features of the brain are essential for consciousness, hence the question 'Is AI possible?' cannot at present be answered yes or no. Perhaps in the future it will be decided one way or the other, but saying today that it is impossible is premature.

Regarding whether it's an arbitrary question, I don't believe it is. Again, if you believe that it is arbitrary then please specify what evidence would convince you that it is possible. And if the only evidence is the creation of an AI itself, explain how you ever expect an AI to be created in the first place if everyone shared your view that it was arbitrary and hence didn't investigate the possibility of creating one.
  3. As I said to AisA, if you want to claim that artificial intelligence is arbitrary then that's a different matter. If that's your position then I do think you're wrong, but that isn't the position which people have been taking in this thread - they haven't just been saying there's no evidence for it, they have been saying it is impossible. According to OPAR, arbitrary statements are neither possible nor impossible. And again, as I asked AisA, just what would you accept as evidence?
  4. It was an illustration. It's a fact that living robots are possible, regardless of appeals to movies or simulations - the Bladerunner reference was intended to show what I meant by a living robot, namely one which is capable of goal-directed action (not difficult to program), of "death" in the sense of program termination (also not difficult to program), and of the other functions commonly associated with living beings (reproduction and the like). Living doesn't imply conscious, though, obviously.
  5. I don't know; it depends how you're defining animate. If the brain is animate because it's part of a living being, then so are the individual neurons which compose it, and the atoms which compose them. So are my teeth. I'd say that humans as a whole are animate, but I'm unsure whether I'd say that all parts of them are also animate. I don't think you can divide things up in that way; man is an integrated whole. When your definitions compel you to start referring to things like individual electrons as animate, I think you've gone wrong somewhere.
  6. Then the circuit boards making up a living robot are animate (I'm assuming that you accept living robots are possible. A robot could certainly be capable of death and goal-directed action (just watch Bladerunner!), and could satisfy the other characteristics associated with living creatures - if a fly can qualify as alive, then so can a robot).
  7. Even a highly artificial counterexample is enough to disprove a general statement, I think.

You could be right here - I can't decide. The problem seems to lie with whether it's possible to substitute the contents of statement B for the reference to statement B. It certainly seems valid here, and I think you're correct that statement A would be self-referential even though it doesn't appear to be at first glance. But I'm not sure that it's purely self-referential. Its truth does depend on the truth of "3+3=x", which is something outside itself. It doesn't seem right to say we can just remove the part of the statement which is known to be true - the fact that we've had to look at the other statement at all shows that the truth of A depends upon it...

The statement refers to itself, so translating it would give it a new reference, since the statement itself changes. If we were to translate it into another language while preserving the reference of the original statement, it would have to be embedded within a statement of the new language, for instance "la phrase 'this statement contains 27 letters' est vrai" (not sure if this is correct; I don't speak French). You could also translate it as something like "cette phrase contient 28 des lettres", which preserves the meaning of the original statement, even though the reference is slightly different.

There are other statements similar to the liar paradox that don't involve explicit self-reference, though, and (to return to the original point) a human wouldn't be able to decide the truth of them, although we could arbitrarily assign them a truth value or say they don't have one. For instance, there is Quine's:

"yields falsehood when appended to its own quotation" yields falsehood when appended to its own quotation

Reason for edit: just to clarify, I don't think issues such as this have any real significance in the AI debate, and I doubt Godel statements do either. It's unlikely that questions about what is possible in computing can be decided by appealing to who can do the most sophisticated things with self-reference.
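[Editorial aside: Quine's quotational trick can be reproduced mechanically. A tiny sketch in Python - purely illustrative, not from the thread - that builds the sentence by appending the phrase to its own quotation:]

[code]
# Sketch of Quine's construction: appending a phrase to its own
# quotation yields a self-describing sentence with no explicit
# 'this statement' indexical in it.

phrase = "yields falsehood when appended to its own quotation"
sentence = '"' + phrase + '" ' + phrase
print(sentence)
# "yields falsehood when appended to its own quotation" yields
# falsehood when appended to its own quotation
[/code]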
  8. Because it exists in an inanimate brain.

It isn't exactly difficult to produce a living machine; it's a conscious one that would be problematic.

How would it be possible to show you an AI if no one has actually investigated the possibility of producing AI, since it is apparently arbitrary? You're essentially saying that nothing new could be classed as possible until it had actually been produced. What would convince you that AI research is worth undertaking?

So not only is there no possibility of a volitionally conscious AI, but there is also no possibility of it ANYWHERE other than in humans?
  9. Yeah, but it would be highly arbitrary. This is why I included the 2+2=4 and 3+3=6 in the boxes. Surely statement A is making a claim about "3+3=6" as well as about itself; hence it wouldn't be purely self-referential. To illustrate this, consider the following:

--------------------
2+2 = 4
A statement in the below box is false
--------------------

--------------------
3+3 = 7
A statement in the above box is false
--------------------

(This is exactly the same as the example I gave before, except that I have changed '3+3=6' to '3+3=7'.) Now, statement A is undeniably true: 3+3=7 is a false statement, therefore the box does indeed contain a false statement. Therefore, the truth of A depends upon the truth of the arithmetical statement in the other box. Therefore, A cannot be purely self-referential, since its truth depends upon something other than itself.

They wouldn't be the same sentence.

I'm not sure what you mean here; can you clarify?
  10. I think that the fact that consciousness is produced by the brain is sufficient reason to investigate the possibility of it being produced by something else with a similar structure to the brain. If you don't think that this counts as evidence then please define exactly what WOULD. What could anyone show you that would convince you that AI is possible? If your standards of evidence are so high that they are impossible to satisfy even in theory, then I don't think your demand for evidence has any more significance than a sceptic demanding evidence that we can know other humans are conscious.

In any case, you need to decide what you're arguing. Is machine volition IMPOSSIBLE, or is it ARBITRARY? Most posts in this thread seem to have claimed the former. If you're actually asserting the latter then that's a different debate.
  11. I've no idea how Kant would have answered this, but speaking for myself, I'm not sure that it's possible to define logic in the way you want to without either making the definition incredibly vague or excluding large parts of what has traditionally been called logic. For instance, very little of what has been said in this thread seems to apply directly to specialised branches of mathematical logic, such as proof theory. Sure, you _could_ describe it as the 'art of non-contradictory identification', but this doesn't seem to capture the essentials of the discipline (non-contradictory identification is certainly involved, but then it's involved in physics too), and I doubt it's how many logicians would describe their work. And if your definition of logic isn't one which would be accepted by most logicians, then I think I'd be sceptical of it - it would be like defining 'science' in a way which excludes the activities carried out by the majority of biologists.

I think most of the definitions of logic in this thread have centered on the day-to-day use of logic, rather than taking into account the more formal aspects (Aristotle's syllogisms and modern symbolic logic, for example). If you're looking for one short definition which nicely captures EVERYTHING that gets called logic, then I don't think you're going to find one. I'd prefer to define our day-to-day application of logic as "the art/craft/techne of non-contradictory identification", then define symbolic logic as "the formal study of reasoning", mathematical logic as "the study of formal systems", and so on. Yes, some of them are sort of similar, but I don't think they all have something in common which you could use to formulate one nice definition. The word 'logic' is just too broad - it's like trying to give a definition of 'beauty' which captures what one of Mozart's symphonies, Helen of Troy, and the night sky when seen from the countryside all have in common. Socrates couldn't do it and I doubt I could either.

Yeah, Kant was a pretty horrible writer. I doubt he meant to be, though - he completely rewrote one of the major parts of the Critique because he felt people had misunderstood him, so I don't think he was deliberately trying to be obscure. It seems to be a feature of German philosophers that they produce either some of the most difficult obscurantism imaginable or beautiful prose which belongs alongside the work of poets.
  12. We could build this into the simulator if you think it's so important (see above). You haven't mentioned any differences that can't be overcome. I've refused to address their implications because I don't believe that any relevant ones exist in the first place - you're begging the question. Give me some specific details which a simulation of the brain couldn't capture and which would make it incapable of producing consciousness, rather than simply asserting that such details must exist.
  13. The simulation will always differ from reality in some way, but given any list of conditions which it is important to model, there's no reason why it cannot be made to model them. Stephen said that a difference between a flight simulator and a plane is that flying a flight simulator doesn't leave you in a different place from where you started. Fine - put the simulator inside a moving vehicle. You claim that a difference is the lack of risk to the life of the pilot. Fine - kill the pilots if they make a mistake (you could program the computer to release poison gas into the simulator if they crash). The point is that given any list of 'relevant conditions', we could design a simulator that reproduces them. Yes, there will always be some differences, but we can always design our simulator so that any specific set of conditions is fulfilled. I've shown how a simulator could have 'ending somewhere different' and 'risking your life' built in, and I could probably do the same for most other differences you bring up. If you think there are things which the simulator can't capture, then it's up to you to name them.
  14. It depends entirely on the circumstances, and on the end to which it's intended to be relevant. There's no relevant difference between an (advanced) flight simulator and a real plane when it comes to training pilots, for instance. I doubt that there's any relevant difference between the brain and a 'simulation' of the brain when it comes to generating consciousness, either. If interactions X, Y and Z are sufficient to produce consciousness when performed by physical neurons, I don't think there's a reason to rule out the same interactions producing consciousness when performed by virtual neurons. There would be a relevant difference between the brain and the simulation if you wanted to open it up and perform neurosurgery, however.
  15. Technically you can't perform substitutions like this indiscriminately, since it's possible to produce cases where doing so leads to false conclusions. But in any case, if I were to accept that your reasoning here was valid and that these statements did constitute self-reference, surely Godel sentences would be self-referential too, and hence invalid? The entire Godel argument rests upon the claim that we know the self-referential sentence ("this sentence cannot be proved within system X") is true. On your view, however, purely self-referential sentences aren't true; they are meaningless.

In any case, purely self-referential statements aren't meaningless. "This sentence contains 27 letters" is undeniably true.
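[Editorial aside: the claim about that last sentence is easy to check - a one-liner in Python, my own sketch, counting only the alphabetic characters:]

[code]
# Sketch: verify that "This sentence contains 27 letters" is true,
# counting alphabetic characters only (the digits '2' and '7' and
# the spaces are not letters).

s = "This sentence contains 27 letters"
print(sum(c.isalpha() for c in s))  # 27
[/code]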
  16. 'Necessary and sufficient conditions' is terminology from Leibniz which was very popular in Kant's day (and still is today). I suppose you could just say 'condition', but adding 'necessary' removes potential ambiguity (is 'the rain' a condition of my getting wet? It certainly isn't a necessary condition, since I could become wet by jumping into the sea).

I suppose you could see it like that, or you could just see it as trying to give a full analysis of logic as a whole, by means of its different parts. I disagree with the way in which he broke things down, but I don't see any problem with doing it in principle. The formal logic of the predicate calculus (for instance) is significantly different from the logic of critical thinking by which political arguments are assessed, and I think each would deserve a different (but related) treatment. In any case, I don't think these divisions of logic were Kant's innovation - I believe they were just the standard of the day. Aristotle broke his logic down into several different categories, for instance (analytic v dialectic etc), and these generally survived.

I would say that claiming Kant is a sceptic involves either a misinterpretation, or a _very_ loose standard by which one is judged sceptical. In IOE Ayn Rand makes a claim along the lines of "all knowledge is human knowledge - it must be from the human point of view alone", and Kant isn't really being any more sceptical than this. Peikoff claimed in OPAR that Kant implied that the fact we only have 'human knowledge' means we can't ever obtain "real" knowledge, but I can find nothing whatsoever in Kant that supports this interpretation. Indeed, Kant is generally thought to have been doing the exact opposite. The primary philosophical view of the day, as promoted by people like Locke and Leibniz, was that the knowledge of finite beings such as humans is grossly inadequate when compared to some kind of "God's eye" view which apprehends objects either directly, without mediation (Leibniz's view), or by fully grasping objects in their particularity without having to deal with concepts and generalisations (Locke's view). One of Kant's main points was that this is a completely fallacious standard by which to judge human knowledge. He categorised all views along these lines as 'transcendental realism', which he stated to be completely opposed to his own philosophy.
  17. I suppose it depends whether the flight simulator is in the back of a moving truck. In any case, I'm unsure why this matters - I'm not saying there are no differences whatsoever, just no relevant ones. I wouldn't class "finishing up somewhere other than where you started" as particularly relevant to the experience of flying a plane; otherwise there wouldn't be much point in test flights where one lands at the same airfield from which one took off.

If "acting as if it were conscious" involves having intentional first-person experiences, then there isn't a difference between 'being conscious' and 'modelling consciousness'. If the computer were just mimicking the behavior of how we would expect a human to act, then it wouldn't be conscious. You're assuming the first scenario isn't possible, and I see no reason for this.

The mind isn't a physical existent. In any case, I was illustrating what I meant by structural relations. I could picture an airport in my head, and that would have all the structural relations of real airports without involving any physical existents.
  18. Huh? Two statements each saying that the other is false isn't self-referential. Where is the self-reference?
  19. I meant that I didn't think it had any more significance. A computer can't decide the truth value of a single statement (well, of an infinite number of them, but even so). So what? There are many statements which a particular human doesn't know to be true, and it isn't hard to think of statements which a computer could prove but an unaided human couldn't (what are the prime factors of 3^231^343?). Sure, a human could use a computer to calculate something like this for him, but then a robot could ask a human to tell it the answer to a Godel sentence.
  20. I'd love to be able to sleep less and still function properly, but I always feel tired if I get less than 6 hours' sleep for a few days in a row. I'm sure I recall reading somewhere that a lot of the famous light sleepers used to take several 1-2 hour naps during the day rather than one long sleep at night, but I'm unsure whether this is true. And yeah, when you're weight training, plenty of sleep is generally recommended in order to maximise muscle growth.
  21. I'm unsure what your point is. Logic in those days was believed to be universal by pretty much all scholars, not just Kant. As far as I know Aristotle himself thought this; it's hard to imagine him claiming that, if conscious life exists somewhere else in the universe, then "All As are B; x is A; therefore x is B" could somehow be false for them. I'm not sure that assertion would even make sense. Logic remained relatively static from Aristotle up to Frege, so Kant's view that the logic of his time was both complete and universal is certainly understandable (I use the term 'logic' here in the traditional way, not to mean "the art of non-contradictory identification").
  22. Why not? Computers have proved mathematical theorems before (not 'had the proofs programmed in', but actually managed to find a proof). It couldn't do it by iterating through all values of i (assuming the GC is true), but it might be able to find a proof by some other means, in the same way a human mathematician could.

I think this is a far stronger argument against AI, but I've never really found it convincing. All it shows is that there are some statements whose truth value a particular computer couldn't decide. I don't think it's any more significant than saying that a human couldn't decide the truth value of "this statement is false" (assuming it has to be either true or false); indeed, Godel sentences are basically just more complex versions of this statement anyway.
  23. But then the exact same argument applies to Turing machines. Given any particular program or calculation, there's no reason why you can't find a Turing machine which decides whether it terminates.
  24. Sorry, I made a mistake in the original post, which I've edited - I meant to add the i=i+1 line, which turns the loop into one that terminates if and only if the Goldbach conjecture, an unsolved problem in mathematics, is false. The point is that the loop would NEVER terminate if the Goldbach conjecture is true; therefore you couldn't decide whether the program halts simply by continually testing numbers. In order to work out whether it terminates you would have to actually solve the Goldbach conjecture, which is something many mathematicians have attempted without success for several centuries.

A Turing machine can certainly decide whether specific programs terminate, just as a human can. Turing machines could also decide, for any arbitrary finite set of programs, whether they terminate. But this isn't the halting problem - to solve the halting problem you need to be able to tell whether any program whatsoever terminates, which is something humans don't seem to be able to do either.
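[Editorial aside: the original program isn't reproduced in this archive, so here is a reconstruction in Python of the loop being described - not the poster's exact code. It halts if and only if some even number is not a sum of two primes, i.e. if and only if the Goldbach conjecture is false.]

[code]
# Reconstruction (not the original poster's exact code) of the loop
# under discussion: it halts if and only if the Goldbach conjecture
# is false, so deciding whether it halts means settling the conjecture.

def is_prime(n: int) -> bool:
    if n < 2:
        return False
    return all(n % d != 0 for d in range(2, int(n ** 0.5) + 1))

def is_goldbach_sum(n: int) -> bool:
    """Is the even number n expressible as a sum of two primes?"""
    return any(is_prime(p) and is_prime(n - p) for p in range(2, n - 1))

i = 4
while is_goldbach_sum(i):
    # The post's version used an i=i+1-style increment; stepping by 2
    # keeps i even. This loop runs forever if the conjecture is true.
    i = i + 2

print(i, "is a counterexample to the Goldbach conjecture")
[/code]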
  25. I know, I was clarifying my position since you mentioned me at the end of your post.