Objectivism Online Forum

Why logic works


I think skepticism is interesting and worth exploring because it poses the primary question of epistemology. "How do you know that?"
Well, once you pose that question, you have to move on. There really is nothing to explore, except perhaps the question of whether one wants to try to find a way to back out of the quagmire, and the proposed quasi-skepticism where one doesn't really believe skepticism but puts it out there as an "interesting possibility". The question becomes totally uninteresting if you preclude a method of answering it. To do that, you have to have a basis; for example, you have to understand that knowledge is knowledge of something, and that the "something" is primary.

I don't deny that much of western philosophy has been contaminated by this running battle with skepticism, but for Pete's sake, that is no reason to give credence to this silliness. It really boils down to one's primary -- whether one considers facts about consciousness to be most important, or facts of existence. The post-Upanishadic numb-nuts have been influential precisely because automatic gainsaying is so easy, and it doesn't rely on any specific knowledge of the universe that a person might not happen to have.

I would venture to say that any competent Objectivist who grasps the epistemology can easily see why skepticism is a non-starter for serious philosophical discussion. The very concepts of "question" and "answer" can only be connected by granting two things. First, you have to presuppose that existence exists. Second, you even have to grant that man has a method of relating questions (about existence) to conclusions. You can deny or refuse to admit that questions exist (a special case of denying or failing to recognise that existence exists), or that there is such a thing as an "answer" (as opposed to a random utterance with no relationship to questions); but if you say that there are questions, you've already admitted that existence exists.

I grant that not everybody is a competent Objectivist, so I don't deny the value of engaging the Hare Krishnas, except that I've done that and discovered, surprisingly, that they do not engage in reason; they engage in answering-machine style gainsaying.

Edited by DavidOdden


I think skepticism is interesting and worth exploring because it poses the primary question of epistemology. "How do you know that?"

I think epistemology explicitly poses that question, in general. Skepticism specializes in using that question as the answer to everything. I don't think that's very interesting at all.

Edited by KendallJ

That rhetorical move is a cheap trick and you know it. If I were actually a skeptic it wouldn't work in the first place, since there wouldn't be any propositions of mine to interrogate. The effectiveness of your sophistic ploy is dependent on the fact that I'm actually not a skeptic - which pretty much makes the whole thing pointless in the first place.

Well, I don't think it's a cheap trick. Few people hold skepticism explicitly and completely; it's untenable. Most hold it only partially, and only to justify those unjustifiable positions that they do hold. So the effectiveness of David's statement is probably based on the proposition that you're part skeptic, in which case it very effectively exposes a contradiction.

Edited by KendallJ

In other words, the Pyrrhonian believes that a foundationalist cannot rationally practice his foundationalism because it inevitably leads to arbitrariness, i.e. assenting to a proposition which can legitimately be questioned but is, nevertheless, assented to without rational support.

So, basically what this means is that Pyrrhonian skeptics believe that all propositions rest on other propositions, so that any proposition which claims, as its basis, something that is not another proposition is necessarily "arbitrary".

This is not, in fact, true. Propositions don't ultimately rest on other propositions, they rest on reality. Reality is observed through direct sensory perception.

If it were true, ALL statements would ultimately be arbitrary and therefore you could not claim to know anything, even that Pyrrhonian skeptics believe something. So, in asserting that there exists some such thing as a Pyrrhonian position on anything, you're in fact contradicting their proposition.

Off you go.


cmdownes,

A few disclaimers: 1) I'm terrible with manipulating forum controls, so I might frame the quotes improperly 2) I know nothing about Pyrrhonist philosophy, so (at your option) you can claim an argumentative win by default (though I think I will have a thing or two to say that's worth considering) 3) I've only read the thread up to _this_ post that I'm responding to.

Greg,

Haha, alright, we'll see. I'm not a skeptic in the classical sense, I was just saying that BrassDragon had misrepresented their burden in the discussion of foundational truths. But I think Pyrrhonism generates an interesting discussion contra Rand on foundational truths. I'll defend it here for the sake of argument.

I guess I agree on the level of interest. Arguments such as yours aren't frequently offered at the major Objectivist forums, so it's a nice change of pace if anything.

My (arguendo) position isn't agnosticism about the truth of logic, as such. It's agnosticism about all propositions.

Are you referring to BrassDragon's "Because the axioms are presupposed in all knowledge, anyone who tries to argue that one of them is false can easily be defeated in an argument." ? If so, I don't know how it can still apply. One can't contradict oneself unless one makes a claim. Since a Pyrrhonian skeptic doesn't assert any propositions, they can't contradict themselves.

Notwithstanding that I'm unlikely to elaborate enough to cut off many counter-arguments, I would provisionally say that:

Frankly, I would suspect that you're already conceding enough. It wouldn't surprise me if Modern academics attempt to offer a substantive distinction between valid logic and propositions. (I would suppose that they might make such an argument based on a side effect of the Analytic-Synthetic Dichotomy; i.e. I would argue that many epistemic dichotomies emanate from the A-S Dichotomy, but that's really for another thread.....)

Unfortunately, I don't know enough history to hazard a guess as to what Greek students would think. I _do_ think that there's a real risk in assessing Greek philosophy via Modern academia! Now, if you are coming to certain conclusions based on self-study of the Greeks, then that might warrant a wholly different type of response from me....

The quick answer regarding BrassDragon's quote is "yes, and yes". That is, without recalling/looking at what he was specifically responding to, I would likely agree with his sentiment. Let's put it this way, if the Pyrrhonians withhold judgment such that they aren't even willing to offer arguments on principle (regardless of motives), then they really have no business engaging in discussion for other interests.

Granted, the value of the axioms can be referenced in more than one way, but their nature has to be dealt with in a single way. It simply must be conceded that for any discussion, viz. for any axiomatic usage, to take place, those axioms must be considered valid as a matter of course.

This isn't the same(!) as saying, "Oh, the axioms can't be argued over." The point (in this case) doesn't fundamentally focus on content (of a specific argument.) It's the fact that an argument has taken place _at all_ that is at issue.

Well, they grant themselves one sort of proposition, but I don't think it alone can ever generate a genuine contradiction. A Pyrrhonian can advance the claim, "P, or not P, or neither P nor not P". They just withhold judgment on which disjuncts obtain.

(I normally don't respond in a piece-by-piece fashion, but I don't think I can make much sense otherwise in this case...) I believe such a person, i.e. a Pyrrhonian, would be attempting to argue a non-argument. While it's _physically possible_ to write the above type of argument, I would counter that such an argument violates the rules of logical argumentation (assuming _also_ that my memory hasn't failed here.)

Yes, in symbolic logic (and in computer programming) tautologies are used, but they are normally used for purposes such as refactoring terms. In philosophy, context _cannot_ be factored out! Now discussing context is too great a subject to do more than reference here, but I will say that re-contextualizing an argument is _not_ the same issue as that of resorting to Subjectivism or relativism. (I don't mean to dodge this issue, but there are other forum threads that focus on context of knowledge, and Ayn Rand and Leonard Peikoff have major works that elaborate the concept at great length....)
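To make the truth-table point concrete, here is a minimal, purely illustrative Python sketch (the function and encoding are my own, not anything from the thread) that classifies "P or not P" as a tautology by brute force:

```python
from itertools import product

def is_tautology(formula, num_vars):
    """Brute-force truth table: the formula is a tautology only if it
    comes out True under every assignment of truth values."""
    return all(formula(*values)
               for values in product([True, False], repeat=num_vars))

print(is_tautology(lambda p: p or not p, 1))   # "P or not P" -> True
print(is_tautology(lambda p: p and not p, 1))  # "P and not P" -> False
print(is_tautology(lambda p, q: p or q, 2))    # "P or Q" is merely contingent -> False
```

The disjunction a Pyrrhonian would advance is true under every assignment, which is exactly why asserting it commits him to nothing about which disjunct actually obtains.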

(Greg's snip)....The skepticism I'm defending here is really more ancient Greek than Modernist in bent..... (Greg's snip)

Well, Pyrrho was more of a global skeptic. All the evidence in the universe is ultimately subject to the same mulish refusal to grant assent to its alleged truth, and to the demand for further justification. The Pyrrhonian doesn't even have opinions, in the end, just a state of "ataraxia" attained through their methodological skepticism. Ataraxia is what Pyrrho called the detached and balanced state of mind that supposedly arises from the tranquility of total, global skepticism.

Dictionary.com confirms what you've stated (unsurprisingly) without much elaboration. I have to say that ataraxia reminds me of Eastern philosophy, but (of course) such a recognition isn't an argument. I would refer back to what I stated above, but I imagine that you would want more than that. I would start to elaborate further by contending that achieving peace of mind requires work.

Here is something to consider that further specifies the context of our argument. We live in societies where we are offered various types of (other) arguments, e.g. sports plays, legal contracts, romantic fidelity, and we must work to resolve them. Let us say, hypothetically, that all involved wish to have the same sort of peace of mind (by any name). Recognizing context _does_ involve elaboration itself. That is, as we seek to achieve social harmony we would ultimately have to come to greater knowledge, viz. a much more _specific_ understanding of truth. This work is simply impossible without exchanging syllogisms. That is (ultimately, as I suggested before), it would be incumbent on the Pyrrhonians to make a valid normative counter-offer that is mutually satisfying for all involved: the Pyrrhonians would have to offer a plan of action whereby people can come to an increasing understanding of their respective situations. How would this actually be achievable without logical argumentation?

Pyrrhonism is not an argument. Pyrrhonians are not concerned with convincing you of the truth or falsity of any propositions. They are not Socrates, saying "I know nothing but the fact of my own ignorance." They would see even that as a sort of dogmatism, since it holds at least one foundational belief. It's just the methodological employment of universal skepticism. But it is a really old and interesting challenge to foundationalist epistemologies, like Rand's, and it can't be dismissed quite so brusquely.

It would be (itself) a type of fallacy of equivocation to treat argumentation and dogmatism as interchangeable. Put differently, given various natural conditions, e.g. Man as fallible, Man as non-omniscient, etc., individuals simply cannot altogether or indefinitely avoid argumentation. As at least a point of reference, I would have to mention that your original entry into this forum with a post further establishes the Objectivist position _because_ of your action (again, regardless of motivation).

If what I've written strikes you as thoroughly redundant, then consider another approach:

Try something along the lines of a Devil's advocacy against Objectivism like so: Try to imagine a world where people _try_ not to engage each other with logical argumentation _at all_. Obviously, the Western world in every possible respect would be completely eliminated. Further, I would argue that such an attempt would put Man in a state that is _sub-primordial_. (Neanderthals communicated.... even if in a comparatively inept fashion.... As evidence of this point, consider that they apparently developed tools, and consider what _that_ would entail.)

It hurts a bit, but being wrong hurts a lot. I'm not a Pyrrhonian, like I said, but withholding judgment in the absence of compelling reasons for belief is sometimes appropriate.

Likewise, it would seem that Pyrrhonian skepticism is a far cry from _any_ sort of healthy or unhealthy Modern-day skepticism as normally practiced, but then I trust that I've already given an indication why.

Given that the ascetics and Buddhists have failed to achieve a sort of ataraxia, I'm more than a little skeptical(!) of the Pyrrhonians' ability in this regard. ;-D


She wants to know why Fred thinks that b is true. Now, Fred could respond by giving some reason for thinking that b is true...

If Fred was an Objectivist, at this stage of the argument he would just silently point at reality. I don't know if that makes him a foundationalist or not.


I remembered my physics professor once saying that nobody knows why laws of nature can be described using mathematics.

In Objectivism, logic is non-contradictory identification of the facts of reality as given to us by perception and observation. So, there is no great mystery as to why logical statements work; and mathematical equations are logical statements parsed down to symbols that represent concepts and mental operations (measurement equivalency) with regard to that which they are referring to -- i.e. identified facts of reality based on observations.

All such equations are based on the idea that 1=1, that a measurement holds true (in a context) for all time and everywhere, so that, for example, one Newton here on earth is equal to one Newton on the moon or one Newton on the other side of the galaxy and beyond. 1N=1N. That is why we can develop mathematical equations that can get our equipment from here to beyond the solar system in a very controlled manner, even though we must use gravitational slingshot effects to get us there (using equations that work).
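As a toy sketch of that unit discipline (my own illustration; the `Quantity` class is invented for this post, not a real library), arithmetic can be made to reject incompatible units, so that a Newton only ever equates to a Newton:

```python
class Quantity:
    """A value tagged with a unit; addition and comparison insist on
    matching units, mirroring the idea that 1 N = 1 N everywhere."""
    def __init__(self, value, unit):
        self.value = value
        self.unit = unit

    def __add__(self, other):
        if self.unit != other.unit:
            raise ValueError(f"cannot add {self.unit} to {other.unit}")
        return Quantity(self.value + other.value, self.unit)

    def __eq__(self, other):
        return self.unit == other.unit and self.value == other.value

assert Quantity(1, "N") == Quantity(1, "N")              # 1 N = 1 N, here or on the moon
assert (Quantity(1, "N") + Quantity(2, "N")).value == 3  # same-unit addition works
```

Adding a Newton to a kilogram raises an error rather than producing a meaningless number, which is the sense in which the equations stay tied to the measurements they summarize.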

I would recommend reading Introduction to Objectivist Epistemology by Ayn Rand for further elaborations of the connectedness of concepts to reality.

As to the question of why things are what they are, it is because they exist. To exist is to be something. Identity is not something that is super-added onto the fact that it exists. To be is to have identity; and to have identity means that it can do certain actions and not others, including changes that are possible to it -- i.e. ice melting, wood burning, stars glowing, etc.

The axioms of existence and identity (and the corollary causality) come from observation; the observation of reality, the nature of which we directly experience via perception and observation. When one conceptualizes these observations, we get not only the axioms, but also those wonderful equations that can be so predictive, because they are based on observations and the consistency of existence.

In other words, good scientists do not make up equations out of thin air, but rather base them on observations, as Newton did, by conceptualizing a relationship between things observed -- i.e. things falling and the moon orbiting. And, like concepts, the equations hold true for the observed range that led to the development of the equations; they are, in fact, a conceptualization of that range (which is why they don't hold outside of that range -- i.e. near the speed of light or in super strong gravity). To get equations that hold true for a wider range, one has to integrate other observations.

If someone wants to call this "foundationalism," I suppose that is Ok, so long as one understands that perceptual observation of reality is the foundation of all human logical thought.


...and mathematical equations are logical statements parsed down to symbols that represent concepts and mental operations (measurement equivalency) with regard to that which they are referring to -- i.e. identified facts of reality based on observations.

This is false. Mathematical equations are derived by purely syntactical means. Computers do algebra, and they have no knowledge whatsoever of "the facts of reality." Likewise for automated theorem provers.

The interpretation of the equations does require human cognition, but there is no requirement that the undefined terms in an axiom system be assigned physical meanings. Indeed, one of the most useful mathematical constructions is an isomorphism between two axiom systems. For example, showing that a finite geometry is dual can be helpful in many situations.

All such equations are based on the idea that 1=1, that a measurement holds true (in a context) for all time and everywhere, so that, for example, one Newton here on earth is equal to one Newton on the moon or one Newton on the other side of the galaxy and beyond. 1N=1N. That is why we can develop mathematical equations that can get our equipment from here to beyond the solar system in a very controlled manner, even though we must use gravitational slingshot effects to get us there (using equations that work).

You are making this up. Measurement isn't even necessary for much of mathematics. Indeed, topology is the study of space in the absence of a measure. Lest you scoff, topology has been extraordinarily useful for physicists who do make measurements.

I would recommend reading Introduction to Objectivist Epistemology by Ayn Rand for further elaborations of the connectedness of concepts to reality.

I would too. I wouldn't recommend it for a discussion on metalogic, though. And this is a discussion of metalogic.

As to the question of why things are what they are, it is because they exist. To exist is to be something. Identity is not something that is super-added onto the fact that it exists. To be is to have identity; and to have identity means that it can do certain actions and not others, including changes that are possible to it -- i.e. ice melting, wood burning, stars glowing, etc.

This is undeniably true, but sophomoric in its incompleteness. Identity can be regarded in many ways. Mathematics, for example, can be thought of as the study of different kinds of identity.

Geometry is the study of congruence (isometric isomorphisms), i.e. things identical under a measure. Topology studies (among other things) continuous homeomorphisms, i.e. things identical under continuous transformations. Graph theory studies discrete homeomorphisms, i.e. things identical under discrete transformations. Algebra studies isotone isomorphisms under the field axioms, i.e. things identically ordered under the field axioms. Abstract algebra studies algebraic homomorphisms and isomorphisms, i.e. transformations that preserve symmetric structure.

We can regard identity in many ways, and we can precisely specify them without contradiction. We specify and study them simply because they are useful.
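A tiny concrete instance of this (my own toy example): the integers mod 4 under addition and the fourth roots of unity under multiplication are "identical" in the structural sense, which a brute-force check of the candidate map k -> i^k confirms:

```python
# Two four-element systems: {0,1,2,3} under addition mod 4, and
# {1, i, -1, -i} under complex multiplication.
z4 = range(4)

def phi(k):
    # Candidate structure-preserving map: k -> i**k
    return 1j ** k

# phi is an isomorphism if it translates addition mod 4 into multiplication:
# phi((a + b) mod 4) must equal phi(a) * phi(b) for every pair.
is_iso = all(phi((a + b) % 4) == phi(a) * phi(b) for a in z4 for b in z4)
print(is_iso)  # True: the two systems share one identity of structure
```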

In other words, good scientists do not make up equations out of thin air, but rather base them on observations, as Newton did, by conceptualizing a relationship between things observed -- i.e. things falling and the moon orbiting.

That's not at all clear. In fact, it's probably false. To measure and reason about change, he developed the calculus based on concepts of infinity and "vanishing quantities" that were anything but observed, and have never been observed. He developed the new techniques only because they were useful for his purpose: predicating of change.

Weighing in your favor is that Newton used the calculus, as did his successors, just because it worked. They used it without proof, and such proof was hundreds of years away. This could be considered an inductive justification. On the other hand, once the calculus is made rigorous, these unobserved infinities become even more prominent.

I don't follow your meaning. Can you explain what you mean by useful?

Sure. By 'useful,' I mean "helps achieve some purpose." By 'more useful,' I mean "better helps achieve some purpose."


Axioms are chosen because they are useful. When axioms cease to be useful, we discard them and craft other more useful ones.
That is one view of "axiom". Another is that an axiom is a self-evident, undeniable truth. The difference is that the "useful" view does not relate axioms to reality, and cannot say anything about what is a valid "use", i.e. there is no notion of central purpose, for example "stating relationships that hold in reality".

That is one view of "axiom". Another is that an axiom is a self-evident, undeniable truth. The difference is that the "useful" view does not relate axioms to reality, and cannot say anything about what is a valid "use", i.e. there is no notion of central purpose, for example "stating relationships that hold in reality".

Self-evident, undeniable truths are extraordinarily useful, and for the reasons you cite. Usefulness seems to be sufficient.

It's unclear we would have any of the axioms of set theory, fields, etc. if we required self-evidence as a criterion of axiomatics. So, the "useful" view is powerful enough to accommodate both "self-evident, undeniable truths" and the axiomatics of abstract mathematics.

The wider scope of the "useful" view tends to make it more useful. ;-) Although, I admit many problems, especially pedagogical problems, may benefit from the narrower "self-evident" definition.


It's unclear we would have any of the axioms of set theory, fields, etc. if we required self-evidence as a criterion of axiomatics. So, the "useful" view is powerful enough to accommodate both "self-evident, undeniable truths" and the axiomatics of abstract mathematics.
The concept "postulate" is sometimes, but not always, seen as being interchangeable with "axiom". If your starting point is that the nature of reality is not your concern and that you're only interested in syntactic results, there is no benefit to distinguishing real axioms and arbitrary postulates. From a different starting point, where you are interested in the nature of knowledge, reasoning and the nature of existence, it is mandatory to distinguish the worthy -- the axiomatic -- from arbitrary statements that don't describe reality and don't describe how reasoning works.

I don't have any objection to mathematicians constructing arbitrary systems per se, and even though all mathematicians that I know are damn Platonists, they are sufficiently platonic Platonists that I don't think they really believe that there is an alternative universe where you find irrational numbers bobbing about in glowing blue orbs. If they don't know how to derive the mathematical tools that lead to the Fourier Transform, or any other method, from the physically self-evident, that doesn't bother me. What is important is to understand that mathematical describability does not mean "this is what reality is".


even though all mathematicians that I know are damn Platonists [...]

Ouch. Using this statement as my axiom (because it is useful to do so), either a) you don't know me, b ) I'm not actually a mathematician, or c) I'm actually a Platonist. Since a and b can't be true, I must be a Platonist. Of course, once I amend my epistemology I'll have to stop being a mathematician...


Ouch. Using this statement as my axiom (because it is useful to do so), either a) you don't know me, b ) I'm not actually a mathematician, or c) I'm actually a Platonist. Since a and b can't be true, I must be a Platonist.
Or, I don't know you :lol:.

I don't have any objection to mathematicians constructing arbitrary systems per se, and even though all mathematicians that I know are damn Platonists, they are sufficiently platonic Platonists that I don't think they really believe that there is an alternative universe where you find irrational numbers bobbing about in glowing blue orbs. If they don't know how to derive the mathematical tools that lead to the Fourier Transform, or any other method, from the physically self-evident, that doesn't bother me. What is important is to understand that mathematical describability does not mean "this is what reality is".

Mathematics, per se, is not reality. In fact, mathematics, abstractly done, has no empirical content whatsoever. Mathematics is the language by which physical reality may be described and its laws (basic nature) expressed. In 1623* Galileo crafted a famous metaphor that is still often cited by scientists. Nature, he wrote, is a book written in "the language of mathematics". If we cannot understand that language, we will be doomed to wander about as if "in a dark labyrinth".

Galileo made this observation before Newton was born. By the way, Newton was a "damn Platonist" too. It did not hurt his physics one bit.

As to mathematicians being Platonists (and they are when they are doing mathematics), consider this. Would a mathematician dedicate over half his adult life to his discipline and his art if he did not believe the stuff he sweats over exists? I doubt it. While they are working over hot differentiable manifolds and Cartesian topoi, you may rest assured they take their non-empirical, non-material objects quite seriously. If mathematicians were not "damn Platonists", you would not have GPS to help you find out where you are.

Did Ayn Rand consider her characters -real-? I would bet she did while she was working on her novels. Was Ayn Rand deluded? I think not. I believe she took her characters quite seriously.

-------------------------------------------------------------------------------------------------------------------------------

*'Philosophy is written in this enormous book which is continually open before our eyes (I mean the universe), but it cannot be understood unless one first understands the language and recognises the characters with which it is written. It is written in a mathematical language, and its characters are triangles, circles, and other geometric figures. Without knowledge of this medium it is impossible to understand a single word of it; without this knowledge it is like wandering hopelessly through a dark labyrinth.' -- The Assayer, 1623

Bob Kolker


This is false. Mathematical equations are derived by purely syntactical means. Computers do algebra, and they have no knowledge whatsoever of "the facts of reality." Likewise for automated theorem provers.

<snip>

The interpretation of the equations does require human cognition, but there is no requirement that the undefined terms in an axiom system be assigned physical meanings.

<snip>

You are making this up. Measurement isn't even necessary for much of mathematics.

<snip>

I wouldn't recommend it for a discussion on metalogic, though. And this is a discussion of metalogic.

Obviously, you mean something other than non-contradictory identification of the facts of reality as given to us by perception when you use the term "logic." The term "logic" does not mean "whatever you can do in your head that doesn't contradict whatever else you can do in your head."

In reality, keeping your ideas tied to reality is a virtue -- it's called rationality. Keeping your ideas related to one another based on definitions not tied to observables is a vice -- it's called rationalism.

A lot of what goes on in mathematics is basically concepts of method -- i.e. what your mind does with the facts once you grasp them. Abstracting from the facts by focusing on what your mind does gives one the method. However, this does not mean that once one has the method one can then go on doing method without any content. Without the content, there is no method -- i.e. nothing for your mind to operate on or to do anything with.

That is, if one were raised in an isolation chamber or a sensory deprivation chamber, there wouldn't be anything for the mind to consider -- neither content nor method (which comes from understanding what the mind does with content). You wouldn't even be free to imagine whatever the heck you wanted to imagine, since imagination is simply re-arranging the content of consciousness. There wouldn't be any content of consciousness without perception.

In many ways, getting a modern degree in mathematics, physics, or philosophy is like raising somebody in an isolation chamber past a certain age -- an isolation from reality, as one learns to relate previous content to previous content until one no longer knows where it all came from.

For example, calculus did come from observation -- the observation that long strings of re-iterative addition or re-iterative subtraction of simple equations led to definitive results that could be summarized by a more simple method that was at least a very close approximation of such re-iterative simple mathematics.

And one way that computers do calculus is that they are very good at doing re-iterative operations; so, in a sense, they are doing what Newton had to do before coming up with calculus. Of course, computers can be programmed to do more complicated operations, so some of them might be doing the actual calculus instead of the re-iteratives.
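A minimal sketch of that re-iterative picture (illustrative Python of my own): summing many thin rectangles under x² approaches the value 1/3 that the calculus delivers in a single step:

```python
def riemann_sum(f, a, b, n):
    """Approximate the integral of f over [a, b] with n left-endpoint
    rectangles: the long string of re-iterative additions that the
    calculus summarizes."""
    width = (b - a) / n
    return sum(f(a + i * width) * width for i in range(n))

exact = 1 / 3  # what integrating x**2 over [0, 1] gives analytically
for n in (10, 100, 1000):
    approx = riemann_sum(lambda x: x * x, 0.0, 1.0, n)
    print(n, approx, abs(approx - exact))  # the error shrinks as n grows
```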

Of course, however, computers aren't actually doing mathematics. They are programmed to do certain processes on data stored as charged areas in their memory (1's and 0's), which is not the same thing as a human being grasping that 1+1=2.


As to mathematicians being Platonists (and they are when they are doing mathematics)

I strongly disagree. Proper mathematics does result from abstraction from reality, from concretes. The concept of number came from abstracting away the things being numbered (i.e. I have 3 goats and 3 oranges; these groups are similar somehow, and that "somehow" is called a number). All good mathematics is either such an abstraction from concretes, an abstraction from abstractions, or an abstraction from combinations of concretes and abstractions. Because we are dealing with a very simple factor in mathematics (that factor being units of measure), we can rigidly formalize and define our terms in ways that simply are not possible in other disciplines, but that doesn't mean those formalizations and definitions are arbitrary. You show me an example of an arbitrarily defined mathematical object and I'll show you a mathematical object that has absolutely no use in mathematics or in reality.


Obviously, you mean something other than non-contradictory identification of the facts of reality as given to us by perception when you use the term "logic." The term "logic" does not mean "whatever you can do in your head that doesn't contradict whatever else you can do in your head."

In reality, keeping your ideas tied to reality is a virtue -- it's called rationality. Keeping your ideas related to one another based on definitions not tied to observables is a vice -- it's called rationalism.

You simply do not know what you are talking about.

Most of the errors on this thread so far arise from ignorance of the Soundness and Completeness Theorems. No offense to anyone: if you do not understand these theorems, and you hold strident opinions in metalogic, then you literally do not know what you are talking about. This is an objective fact.

Soundness Theorem: If T ├ P, then T ╞ P.

Completeness Theorem: If T ╞ P, then T ├ P.

SC: T ├ P <=> T ╞ P.

The Soundness Theorem tells us that our deductions (which are purely syntactical) lead only to "correct" conclusions (considered semantically, i.e. from the aspect of truth). The Completeness Theorem tells us that every valid inference (which is semantic) has a deduction (considered syntactically). In other words, in formal logic the truth values of syntactic deduction and semantic inference are the same (SC).

Formal systems, such as formal logic, rank as one of the highest achievements of the human mind because they construct languages in which semantics are equivalent to syntax. In formal systems, a valid syntactical deduction will produce a true semantic inference. For example, if we start with a true algebraic sentence, and we apply the proper grammar rules of algebra, then we are guaranteed to get a result that will be true under any consistent interpretation of the algebraic symbols.

Thus, if 'x + y = 12' is true, we are guaranteed that 'x = 12 - y' (a purely syntactical transformation) is true under ANY consistent interpretation. We could interpret 'number' as a quantity, a length, a matrix, an English sentence (under an appropriate interpretation of the algebraic operators), or even a floating abstraction. The interpretation doesn't matter, so long as it makes the field axioms of algebra true; syntactical deductions are guaranteed to produce equivalent semantic conclusions.
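The algebra claim can be spot-checked mechanically. The sketch below (my own illustration, with made-up test cases) verifies that the syntactic move from 'x + y = 12' to 'x = 12 - y' preserves truth value under several interpretations of "number": integers, floats, and exact fractions.

```python
# Illustrative check: the syntactic transformation preserves truth value
# under different interpretations of the symbols.
from fractions import Fraction

def transformation_preserves_truth(x, y, twelve):
    premise = (x + y == twelve)       # 'x + y = 12'
    conclusion = (x == twelve - y)    # 'x = 12 - y'
    return premise == conclusion      # same truth value either way

cases = [
    (5, 7, 12),                             # integers: premise true
    (3.5, 8.5, 12.0),                       # floats: premise true
    (Fraction(1, 3), Fraction(35, 3), 12),  # exact rationals: premise true
    (2, 2, 12),                             # premise false; conclusion false too
]
print(all(transformation_preserves_truth(x, y, t) for x, y, t in cases))  # True
```

Note the last case: when the premise is false, the transformed sentence is false too. The transformation guarantees sameness of truth value, not truth.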

I grant that logic as a method derives from observations about the world. I deny that logic can only be applied to observation-concepts and derivatives of observation-concepts. Logic can be applied to any concept whatsoever. Indeed, that's the only way we could possibly know that the chain from observation to concept has been broken! We would have to apply logic to the floating abstraction.

Abstracting from the facts by focusing on what your mind does gives one the method. However, this does not mean that, once one has the method, one can go on applying the method without any content. Without content there is no method -- i.e., nothing for your mind to operate on or to do anything with.

This is false and demonstrably so. Pick up any book on mathematical logic. Turn to any page for a refutation. You have no idea what you are talking about.

In simple propositional logic, '(p => q) <=> (~q => ~p)' is true no matter what you put in place of p and q (under the usual rules for naming consistency). Floating abstractions and all. There is no 'content' as you use the term, and it is useful in the real world for PRECISELY that reason. It doesn't matter what propositions you put in place of p and q; the tautology holds. The chemist doesn't need to check all the logical theorems. The engineer doesn't either. Neither does the local dog groomer. Logic works for every interpretation, subject to the usual constraints of consistency.

Can we assign a truth value to '(p => q)' without 'content'? No. Can we assign a truth value to '(~q => ~p)' without 'content'? No. But we can make a very important statement about the structure: the two sentences will definitely have the SAME truth value, no matter what. That's why we can progress to important knowledge, knowledge of logical structure, without 'content.'
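This structural fact is easy to verify by brute force; the following sketch (an illustration of mine) enumerates all four assignments of truth values to p and q:

```python
# Check the contrapositive tautology by exhausting every truth assignment.
def implies(a, b):
    """Material implication: a => b is false only when a is true and b false."""
    return (not a) or b

same_for_all = all(
    implies(p, q) == implies(not q, not p)
    for p in (True, False)
    for q in (True, False)
)
print(same_for_all)  # True: (p => q) and (~q => ~p) always agree
```

Since propositional structure only "sees" truth values, checking the four assignments checks every possible interpretation, floating abstractions included.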

Again, this is an objective fact of logic.

For example, calculus did come from observation -- the observation that long strings of iterative addition or subtraction of simple equations led to definitive results that could be summarized by a simpler method, one that was at least a very close approximation of such iterative arithmetic.

The calculus method is to develop an estimation procedure and then take a limit to infinity on it. The concept of infinity is unobserved. So parts of calculus are taken from observation, but the important bits are not.
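The "estimation procedure plus a limit" idea can be sketched numerically (my own illustration, not the poster's): the difference quotient approximates the derivative of x^3 at 2, and shrinking h walks toward the exact limit, 12, that calculus delivers in one step.

```python
# Illustrative sketch: an estimation procedure whose limit calculus evaluates exactly.
def difference_quotient(f, x, h):
    """Finite-difference estimate of f'(x)."""
    return (f(x + h) - f(x)) / h

f = lambda x: x ** 3  # d/dx x^3 = 3x^2, so f'(2) = 12 exactly

# Shrink h: each estimate is closer to the limit the derivative defines.
estimates = [difference_quotient(f, 2.0, 10.0 ** -k) for k in range(1, 7)]
print(abs(estimates[-1] - 12.0) < 1e-4)  # True: the estimates converge on 12
```

No finite estimate *is* the derivative; the derivative is the limit of the procedure, which is the unobserved, purely conceptual part of the method.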

But that's OK. Logic comes to the rescue. The 'content' doesn't matter, even unobserved, uncountable infinities. The theorems of logic work, and correct syntactical transformations of algebra are guaranteed to deliver correct conclusions under any consistent interpretation.

Newton's interpretation was consistent.

One way that computers do calculus is by iteration, at which they are very good; so, in a sense, they are doing what Newton had to do before coming up with calculus. Of course, computers can be programmed to do more complicated operations, so some of them might be doing the actual calculus instead of the iterative approximations.

Of course, computers aren't actually doing mathematics. They are programmed to perform certain processes on data stored as charged areas in their memory (1s and 0s), which is not the same thing as a human being grasping that 1+1=2.

Wrong again. By the SC Theorem computers are doing mathematics, and the semantic results of their correct syntactical computations are guaranteed to be correct.

Under your theory, the computations of a computer cannot be guaranteed to produce correct results even under correct operation. The computer at no time has knowledge of "the facts of reality," and so the chain of observation-concepts is broken. We would give up a lot under your theory.

Fortunately, much more informed minds proved the SC Theorem. We can trust computers.

Logic works because of the SC Theorem.


The concept "postulate" is sometimes, but not always, seen as being interchangeable with "axiom". If your starting point is that the nature of reality is not your concern and that you're only interested in syntactic results, there is no benefit to distinguishing real axioms and arbitrary postulates. From a different starting point, where you are interested in the nature of knowledge, reasoning and the nature of existence, it is mandatory to distinguish the worthy -- the axiomatic -- from arbitrary statements that don't describe reality and don't describe how reasoning works.

I don't have any objection to mathematicians constructing arbitrary systems per se, and even though all mathematicians that I know are damn Platonists, they are sufficiently platonic Platonists that I don't think they really believe there is an alternative universe where you find irrational numbers bobbing about in glowing blue orbs. If they can't show how the mathematical tools that lead to the Fourier Transform, or any other method, are grounded in the physically self-evident, that doesn't bother me. What is important is to understand that mathematical describability does not mean "this is what reality is".

I completely agree. Mathematicians concern themselves with this, too. Model Theory is the branch of metalogic that characterizes how axiomatic systems can be descriptive of phenomena and other axiomatic systems.

However, let's not be too hard on pure mathematicians. They have given us many useful tools, for example Number Theory and Topology, even if it took hundreds of years to find a use for them.


I completely agree. Mathematicians concern themselves with this, too. Model Theory is the branch of metalogic that characterizes how axiomatic systems can be descriptive of phenomena and other axiomatic systems.

However, let's not be too hard on pure mathematicians. They have given us many useful tools, for example Number Theory and Topology, even if it took hundreds of years to find a use for them.

Who would have dreamed (at the time) that Gauss's "Queen of Mathematics" (i.e., number theory) would lie at the very heart and gut of difficult-to-break codes? RSA and Diffie-Hellman techniques are based squarely on good old purely theoretical number theory, which number theorists themselves believed to have no practical application.
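As a toy illustration of how directly these ciphers rest on number theory (the numbers below are my own example and are wildly insecure), Diffie-Hellman key agreement is nothing more than modular exponentiation:

```python
# Toy Diffie-Hellman: both parties arrive at g^(a*b) mod p without
# ever transmitting their private exponents. Real systems use primes
# thousands of bits long; this tiny prime is for illustration only.
p = 23         # public prime modulus (illustrative)
g = 5          # public generator modulo p

a_secret = 6   # Alice's private exponent
b_secret = 15  # Bob's private exponent

A = pow(g, a_secret, p)  # Alice transmits g^a mod p
B = pow(g, b_secret, p)  # Bob transmits g^b mod p

# Each side raises the other's public value to its own secret exponent;
# both land on g^(a*b) mod p, the shared key.
shared_alice = pow(B, a_secret, p)
shared_bob = pow(A, b_secret, p)
print(shared_alice == shared_bob)  # True
```

The security (with realistic sizes) rests on the difficulty of the discrete logarithm problem, a question straight out of pure number theory.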

I also doubt that Bernhard Riemann dreamed that his geometry of curved manifolds would some day be the very heart and gut of gravitation theory.

The next act will be some profound physics discovery flowing out of zeta functions.

Bob Kolker


I'm not denying that a mathematician can operate on a very abstract level, using abstractions from abstractions from abstractions. But those abstractions don't mean anything unless they are tied to reality. Meaning does not come about because one has a symbol related to another symbol by means of a third symbol. The symbols related to symbols in the sentences I have just used have no meaning unless they refer to something in reality. And it is only by doing this that one can say a statement is either logical or not.

And abstractions from abstractions are not empty, they contain all of the referents subsumed under the abstraction. A wide abstraction, such as "animal," contains other abstractions, such as "dog," "cat," "horse," "pig," "snake," etc. which ultimately refer to actual dogs, cats, horses, pigs, snakes, etc.; otherwise, it has no meaning.

Regarding computers doing mathematics, what I am getting at is the following: If one takes one cup of water and pours it into a glass, and then takes another one cup of water and pours it into the same glass, one then has two cups of water in the glass. But the glass did not do addition. The mental process of realizing that one cup of water poured into another cup of water gives two cups of water is addition.


I'm not denying that a mathematician can operate on a very abstract level, using abstractions from abstractions from abstractions. But those abstractions don't mean anything unless they are tied to reality. Meaning does not come about because one has a symbol related to another symbol by means of a third symbol. The symbols related to symbols in the sentences I have just used have no meaning unless they refer to something in reality. And it is only by doing this that one can say a statement is either logical or not.

And abstractions from abstractions are not empty, they contain all of the referents subsumed under the abstraction. A wide abstraction, such as "animal," contains other abstractions, such as "dog," "cat," "horse," "pig," "snake," etc. which ultimately refer to actual dogs, cats, horses, pigs, snakes, etc.; otherwise, it has no meaning.

Regarding computers doing mathematics, what I am getting at is the following: If one takes one cup of water and pours it into a glass, and then takes another one cup of water and pours it into the same glass, one then has two cups of water in the glass. But the glass did not do addition. The mental process of realizing that one cup of water poured into another cup of water gives two cups of water is addition.

This is a complete non sequitur.


This is a complete non sequitur.

I don't understand what you are trying to say.

Are you saying wide abstractions are not tied to reality and therefore that they do not contain all of the abstractions subsumed under them? Or are you saying that just because you can do something in your head it must necessarily be tied to reality so long as one follows the rules of procedure? Or are you saying in your metalogic statement that following procedure is more important than being connected to reality?

Take the following set of statements that are in the form of a syllogism:

All zigwams are lious

All jetrinocs are zigwams

Therefore all jetrinocs are lious

The above is not a logical statement because the terms do not refer to anything in reality.

For example:

All pigs are green

All pencils are pigs

Therefore all pencils are green

The above is not a logical statement by the metalogic of Objectivism, precisely because pigs are not green and pencils are not pigs. It's just complete and utter nonsense.

A supposedly logical statement is not logical unless the terms refer to something in reality in a non-contradictory manner.


A supposedly logical statement is not logical unless the terms refer to something in reality in a non-contradictory manner.

I think you're misusing the term "logic". Logic applies specifically to the method, and AR discusses concepts of method at least briefly in my copy of ITOE. They don't have to refer to things in reality; they are operations or transforms that, if you plug in any "thing", will yield a predictable result. For instance, there is no such thing in reality as the square root of -1; there is no possible way you can even abstract it from anything. However, you can *use* the square root of -1 in math to solve problems in a valid way.

The real test for bad logic is whether you can plug in true pieces and get a false answer; that's what ultimately "ties" it to reality.
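Python's built-in complex type makes this concrete (a small illustration of mine): the square root of -1 has no perceptual referent, yet arithmetic with it yields fully checkable results.

```python
# sqrt(-1) cannot be abstracted from any observation, yet using it in
# computation delivers answers we can verify.
import cmath

i = cmath.sqrt(-1)  # the imaginary unit, 1j

print(i * i == -1)       # True: squaring recovers -1
print(i ** 2 + 1 == 0)   # True: i solves x^2 + 1 = 0
```

Plugging true pieces in and getting true, verifiable answers out is exactly the test proposed above, and the "unreal" square root of -1 passes it.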
