Objectivism Online Forum

.999999999999 repeating = 1



WI_Rifleman


This is a topic I have seen on numerous other Internet forums, though I've never seen this level of accurate (philosophical) analysis before. I am not going to add any further comments towards solving the dilemma beyond what TomL quoted me as saying in our chat, but needless to say there are several levels at which we can fruitfully analyze this problem and I thank the other posters for shedding light on (the philosophic) aspects of identity and translation I had not considered.

I do propose that at this point we split the thread into two: one for discussing the philosophic analysis of the problem and one for people who are still not convinced that within the context of real analysis that .999... = 1.

As a side note, I have mentioned this topic to a handful of other (math) graduate students and a couple professors that I work with and almost without exception they say something equivalent to "who DOESN'T know that point nine repeating equals one?" (although chuckles and head nods were also a common response).

Best regards


I do propose that at this point we split the thread into two: one for discussing the philosophic analysis of the problem and one for people who are still not convinced that within the context of real analysis that .999... = 1.

I second splitting the thread, but I suggest locking the thread containing real analysis, as it is clear now that indeed in mathematics 0.999~ = 1. I proved it by correctly applying the mathematical theorems, as did several people before me (among them WI_Rifleman, the thread starter, to whom I now apologize, because his proof is in fact valid, contrary to the claims I made earlier).

Those who disagree that 0.999~ = 1 should study mathematics. Debate on this is superfluous, as this has been established as fact.

To Hal:

Mathematics is unambiguous. When there is a mathematical formula, or a symbol, it can only mean one thing and nothing else. The language we use, however, is a different story. This is a well-known fact. I really don't see the point of your bringing this up, or the point of your whole argument, for that matter.


Mathematics is unambiguous. When there is a mathematical formula, or a symbol, it can only mean one thing and nothing else. The language we use, however, is a different story.
For example, "12"? The same fact that renders "12" infinitely ambiguous allows language to be ambiguous. The symbol "x" means different things when applied to sets vs. numbers.

I second splitting the thread, but I suggest locking the thread containing real analysis, as it is clear now that indeed in mathematics 0.999~ = 1. I proved it by correctly applying the mathematical theorems, as did several people before me (among them WI_Rifleman, the thread starter, to whom I now apologize, because his proof is in fact valid, contrary to the claims I made earlier).

I find it ironic that you would state that there should be no more debate because "it is clear now that indeed in mathematics 0.999~ = 1" even as you follow by mentioning that you were wrong earlier. It may be clear to you now, but that doesn't mean it has become immediately clear for everyone. There have been several proofs offered in this thread, and still not everyone realizes the truth. Infinity is a tricky business.

Those who disagree that 0.999~ = 1 should study mathematics. Debate on this is superfluous, as this has been established as fact.
Isn't this ignoring the fact that proof is objective--that it always requires someone to be doing the proof? You have proven it for yourself, but not for other people--unless you would like other people to take your satisfaction as their source of truth.

Mathematics is unambiguous. When there is a mathematical formula, or a symbol, it can only mean one thing and nothing else. The language we use, however, is a different story. This is a well-known fact. I really don't see the point of your bringing this up, or the point of your whole argument, for that matter.

Mathematics, as with nearly every other human endeavor, is contextual. This means that the same symbol can have different meanings in different contexts. One example is the "x" that means multiplication for scalars but means the cross product in a vector context. Read some mathematics papers--they are always saying "let [symbol1] mean [some operation], and [symbol2] mean [some other operation]." There just aren't enough unique symbols for every purpose anyone has ever come up with.
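To make the point concrete, here is a minimal Python sketch (my own illustration, not from any poster): the same operator symbol is resolved to different operations depending on the type of thing it is applied to, just as "x" is read differently in scalar and vector contexts.

# Minimal sketch (illustrative only): one symbol, "*", several meanings,
# disambiguated by the context (the operand types).
print(3 * 4)         # multiplication of integers -> 12
print([0, 1] * 3)    # repetition of a list       -> [0, 1, 0, 1, 0, 1]
print("ab" * 2)      # repetition of a string     -> 'abab'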


I see that David has beaten me to the punch with the example of "x". :)

I thought of another example of context: the base that a number is written in. That implied context is what makes this joke funny:

There are only 10 kinds of people in the world: those that understand binary, and those that don't.
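To spell out the implied context (my own small illustration, not part of the original post): in Python, the same numeral string names different integers depending on the base it is read in.

# Small illustration: the string "10" read in base ten vs. base two.
print(int("10", 10))  # -> 10
print(int("10", 2))   # -> 2  (which is why the joke works)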


I find it ironic that you would state that there should be no more debate because "it is clear now that indeed in mathematics 0.999~ = 1" even as you follow by mentioning that you were wrong earlier.  It may be clear to you now, but that doesn't mean it has become immediately clear for everyone.  There have been several proofs offered in this thread, and still not everyone realizes the truth. Infinity is a tricky business.

I never said I was wrong earlier; I only said that I thought WI_Rifleman's proof was invalid. This doesn't mean I thought that 0.999~ != 1.

Isn't this ignoring the fact that proof is objective--that it always requires someone to be doing the proof? You have proven it for yourself, but not for other people--unless you would like other people to take your satisfaction as their source of truth.

Proven it for myself? What in the world does that mean? Are you now going to tell me that logic is arbitrary and that my proof "may be true for me but not for others?"

Mathematics, as with nearly every other human endeavor, is contextual. This means that the same symbol can have different meanings in different contexts. One example is the "x" that means multiplication for scalars but means the cross product in a vector context. Read some mathematics papers--they are always saying "let [symbol1] mean [some operation], and [symbol2] mean [some other operation]." There just aren't enough unique symbols for every purpose anyone has ever come up with.

Statements such as "let [symbol1] mean [some operation], and [symbol2] mean [some other operation]" are exactly what make mathematics unambiguous. By saying this, the author has made certain that he will be understood in exactly the way he wants to be understood. It is exactly the contextuality which leads to practical unambiguousness. I say practical because inventing and applying a new symbol which has never ever been used in mathematics for every [some operation] or [some other operation] is impractical.


For example, "12"? The same fact that renders "12" infinitely ambiguous allows language to be ambiguous. The symbol "x" means different things when applied to sets vs. numbers.

Saying "12" doesn't make you a mathematician.

As for "x" it is true that in different contexts it means different things, but in each and every context, a mathematician can find out exactly what that "x" refers to and when grasped fully and correctly, this "x" will deliver exactly the same message as the author intended. This is what I meant by unambiguousness. With language this is much harder to achieve.


Saying "12" doesn't make you a mathematician.
That's irrelevant: you claimed that mathematics is unambiguous; well, maybe you mean something other than "the symbolic expressions of math". In which case, I can legitimately say that language is unambiguous.
As for "x" it is true that in different contexts it means different things, but in each and every context, a mathematician can find out exactly what that "x" refers to and when grasped fully and correctly, this "x" will deliver exactly the same message as the author intended.
Fine: what is the unambiguous meaning of "AxB"?

That's irrelevant: you claimed that mathematics is unambiguous; well, maybe you mean something other than "the symbolic expressions of math". In which case, I can legitimately say that language is unambiguous.

"12" isn't a mathematical statement. It is not a theorem or a formula or a definition, and certainly not an expression. It says nothing. It is merely a symbolic representation of an integer between 11 and 13.

Fine: what is the unambiguous meaning of "AxB"?

I can't tell you because you haven't defined these symbols.


I can't tell you because you haven't defined these symbols.
Now how is mathematics any different from language? Once you define the terms you use in a statement of natural language, it's as clear as a mathematical equation. The only difference between language and math is a purely sociological one: mathematicians are much more likely to bother to define their symbols explicitly.

Now how is mathematics any different from language? Once you define the terms you use in a statement of natural language, it's as clear as a mathematical equation. The only difference between language and math is a purely sociological one: mathematicians are much more likely to bother to define their symbols explicitly.

I don't think it's a sociological thing; it would be almost impossible for natural-language signs to be defined in the same way that mathematical signs are. People have been trying to give 'mathematics-style' definitions of English words since Plato - it just doesn't work.


Proven it for myself? What in the world does that mean? Are you now going to tell me that logic is arbitrary and that my proof "may be true for me but not for others?"

There is no need to jump to conclusions and put words in my mouth. It means, very simply, that you yourself have proven it, and now you know it to be true. This does not mean other people do. It goes without saying in this forum that reality is independent and existence has primacy. I thought it went without saying that logic is not arbitrary, but apparently not. :(

You know, it is very unfortunate that the notion of "true for you" has been corrupted to imply subjectivity, because so many people then adopt its converse and end up with intrinsicism, missing objectivity entirely. In a very real sense, you can only prove things for yourself. Suppose you come up with a new proof for some fact of reality. You can then show that proof to others, and they can prove it to themselves, of course. But until they do that, you remain the only one who understands the connection to reality, and thus the only one for whom it is proven. (In other words, there are some aspects of reality of which everyone else is still ignorant.) Again, it should go without saying that your new truth--let's say that gravity accelerates at 9.8 m/s^2--applies to everyone else whether or not they believe it.

You can't do anyone else's thinking for them. All you can do is validate your thoughts to reality and try to help other people do that with theirs, but ultimately it is their own mind that will come to understand the truth of a proposition or not. This is what I meant when I said:

You have proven it for yourself, but not for other people--unless you would like other people to take your satisfaction as their source of truth.

That is why I was so bothered by this claim of yours:

Debate on this is superfluous, as this has been established as fact.

Debate on something is never superfluous, not as long as there are honest but confused people struggling to understand some truth. You may tire of the debate, but that is not what you said. I personally tired of the debate on whether 0.999~ = 1, so I didn't comment any longer. But that doesn't mean I expected everyone else to stop debating, just because I had become fully convinced.

Statements such as "let [symbol1] mean [some operation], and [symbol2] mean [some other operation]" are exactly what make mathematics unambiguous. By saying this, the author has made certain that he will be understood in exactly the way he wants to be understood. It is exactly the contextuality which leads to practical unambiguousness. I say practical because inventing and applying a new symbol which has never ever been used in mathematics for every [some operation] or [some other operation] is impractical.

Well, perhaps your claim here was intended to say that mathematics was contextual; however, that is not what it seems like to me:

Mathematics is unambiguous. When there is a mathematical formula, or a symbol, it can only mean one thing and nothing else.

(Emphasis mine.) If you meant "one thing and nothing else" in a given context, I apologize for misunderstanding.


{Preceding section omitted.}

Therefore if    X = 0.99999....repeating....9999

___then,        10X = 9.99999....repeating....9990

As in all cases where a number is multiplied by 10, all digits are shifted to the right by one position and the right most non zero digit is replaced by a 0.

Now, the value of 9X can be calculated as given below -

_9.9999....repeating....9990     

- 0.9999....repeating....9999

----------------------------------------

  _8.9999....repeating....9991

I am withdrawing everything I said in post number 45. (Quoted above)

My mistake was that I "visualized" and assigned significance to the last element of an infinite sequence of 9s. An infinite sequence doesn't have an endpoint, and therefore it is not proper to give significance to such a nonexistent endpoint and subject it to arithmetic operations.
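For what it's worth, the conclusion can be reached without ever appealing to a "last" digit, by treating 0.999~ as the infinite geometric series 9/10 + 9/100 + 9/1000 + ... . Here is a short symbolic check in Python (my own sketch, assuming the SymPy library is available):

# Sketch (assumes SymPy is installed): 0.999~ as the geometric series
# sum over k >= 1 of 9/10^k, evaluated symbolically -- no "last digit" is involved.
from sympy import Sum, Rational, oo, symbols

k = symbols('k', integer=True, positive=True)
series = Sum(9 * Rational(1, 10)**k, (k, 1, oo))
print(series.doit())  # prints 1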


The result 0.9999~ = 1 is very counterintuitive. The reason for this counterintuitiveness is a quirk that arises out of the manner in which the concept of infinity has been used. Infinity comes into play here because the term 0.9999~ means that 0 is followed by a decimal point, which is in turn followed by an infinite number of 9s.

If X = 0.9999~ then how can X equal 1 no matter how many 9s are placed to the right of the decimal point? Isn't that impossible?

But the fact that no finite number of 9s appended to the end of X can make it equal to 1 becomes irrelevant, since the number of 9s in X is thought of as being "infinite". Infinity is not a number. Infinity is not an "amount" in the ordinary sense.

A method of representing this result which is not counterintuitive can be devised.

Let Y be the number where 0 is followed by the decimal point, which is in turn followed by n digits of 9 (n is a positive integer).

Then, Y = 0.999..repeating n times.

Now X (0.9999~) can be expressed as:

X = LIMIT(n--->infinity) Y

X = LIMIT(n--->infinity) 0.999..repeating n times

The following result will not be counterintuitive:

[LIMIT(n--->infinity)0.999..repeating n times] = 1

Basically it would not be proper to say that (0.9999..repeating n times) will equal 1 when n is infinite.

(This is what the result 0.9999~ = 1 would imply.)

What we should say is :

As n approaches infinity (0.9999..repeating n times) approaches 1.

{This is what [LIMIT(n--->infinity)0.999....repeating n times]=1 would imply}

The difference between these 2 ways of expressing the same result may seem merely semantic and trivial. But it isn’t. It relates to the proper manner of using the concept of infinity.

One of the quotations on this website (OO.net) would be relevant:

I protest against the use of infinite magnitude as something completed, which in mathematics is never permissible. Infinity is merely a façon de parler, the real meaning being a limit which certain ratios approach indefinitely near, while others are permitted to increase without restriction.

--Carl Friedrich Gauss

http://quotes.rationalmind.net/random.php?...iedrich%20Gauss
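The n-digit truncations mentioned above can also be checked directly. A small sketch in Python with exact rational arithmetic (my own illustration): Y, the number with n nines after the decimal point, falls short of 1 by exactly 1/10^n, which is why the limit as n increases without bound is 1.

# Illustration with exact rationals: the n-digit truncation 0.99...9
# differs from 1 by exactly 1/10**n, and the gap shrinks toward 0.
from fractions import Fraction

for n in (1, 2, 5, 10):
    y = sum(Fraction(9, 10**j) for j in range(1, n + 1))
    print(n, y, 1 - y)   # the gap is Fraction(1, 10**n)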

Edited by shakthig

Now how is mathematics any different from language? Once you define the terms you use in a statement of natural language, it's as clear as a mathematical equation. The only difference between language and math is a purely sociological one: mathematicians are much more likely to bother to define their symbols explicitly.

Consider the metaphor as an example. You are using language to describe something, but you are using terms which mean something quite different. So somebody's emotions might suddenly become a wild ocean. This is an example of the ambiguity of language. In mathematics, there is no such thing. Once the symbols are defined, they are used as defined and nothing else.

Another thing with language is that even if you have clearly defined the terms you are using, and you explain something with them, you may not be understood in the way you mean it. Yes, this has to do with (not) defining your terms, and if you define them you may be understood correctly. However, the thing about language, unlike mathematics, is that you don't define your terms. You take them to mean what you think they mean, and you explain things in your terms. Thus, while speaking, you leave room for ambiguity. And as Hal noted, this is not purely a sociological thing.

This, however, is not to say that it is impossible to achieve clarity. To do that, however, you need to invest extra effort and - and this is very important - others need to make an effort to understand you. This is why, in spoken language, this is often an impractical thing to do and nobody will listen (except if you are lecturing students in class, or somebody explicitly asks you for a definition). You must make do with the words and their definitions at hand, and get your point across this way, then deal with possible misunderstandings later.

This is the ambiguity in language I was referring to. It can be avoided, but there is no formalized method saying how to do this. Logic can help, but it cannot give a definite answer on how to do it.


I said that 'the sum does not equal 1' is inconsistent with 'the sum equals 1'. Your response is that the denier may have some English language sense of 'equals' that is different from the mathematical sense symbolized by '='. But whatever this English language sense might be, if it entails that the sum does not equal 1, then this sense is inconsistent with the mathematical sense, hence it is not relevant to the mathematics under discussion.

In mathematics, 2 things are defined to be equal if they can be made arbitrarily close.

That is incorrect.

First, in first order logic, if '=' is taken as a logical symbol, and 't' and 's' are terms, then the sentence 't = s' is true in a model iff 't' names the same object in the universe of the model as named by 's'. So equality in mathematical logic reflects our regular English understanding of equality. (If '=' is not taken as a logical symbol with the fixed meaning just mentioned, then there is a limitation in first order logic, since then no first order theory of identity could preclude that equivalence classes are identical objects in the model. That is, no first order theory can, with identity being merely a non-logical predicate, define identity in models.)

Second, rather than asserting that things are equal if they can be made arbitrarily close, mathematics (set theory and analysis, which is reducible to set theory) asserts, as an axiom, that things are equal iff they have the same members. And this captures the usual English sense of set equality. The 'arbitrarily close' (actually, there's more to it than arbitrary closeness) criterion is used only for certain KINDS of things that are PROVEN to be equal to one another on the basis of convergence. This does not dispute the usual English meaning of 'equal' since it is not arbitrary closeness alone that provides equality but rather equality (ultimately, in the set theoretic sense of having the same members) is proven for certain kinds of things by virtue of the convergence of a function.

For all eps > 0, there exists an integer N > 0 such that for all n > N, the nth partial sum of the series (0.9 + 0.09 + 0.009 + ...) differs from 1 by less than eps.

Now, how do we translate from this formal language into English? We can say either that the series equals 1, or that it gets arbitrarily close to 1. Both statements agree with the maths; they differ only in the English translation.

No, because you're leaving out the mathematics previous to this: Theorem: If a sequence converges as described above, then it converges to a unique real number; and Definition: The limit of the sequence of partial sums is the real number to which the sequence converges. The sequence converges to 1. So the limit of the sequence IS 1. True, the terms of the sequence only get closer and closer to 1. But such a convergent sequence is proven to have a unique real number as its limit. In this case, the limit is 1. If it is understood that this is what is asserted by the sum being equal to 1, then it's silliness to say that one can deny this in English but affirm it in mathematics. To say, as yet another poster recently did, that things get arbitrarily close but that there is no equality is just ignorance of the mathematics. There is equality at the limit, and the limit is proven to exist. People who deny this just need to grow up; they can start by just opening a math book.
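For reference, the definition and claim being appealed to can be written out compactly (standard textbook notation, my own restatement rather than a quotation of anyone here), with s_n denoting the nth partial sum 0.9 + 0.09 + ... + 9/10^n:

% Epsilon-N definition of convergence of the partial sums s_n to 1, which,
% together with the uniqueness of limits, licenses writing 0.999... = 1.
\lim_{n\to\infty} s_n = 1
\quad\Longleftrightarrow\quad
\forall \varepsilon > 0 \;\; \exists N \;\; \forall n > N : \; |s_n - 1| < \varepsilon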

No matter what your formalization is like, you're still going to have to use a natural language to explain it to others. And then the potential for disagreements arises. Again, consider how strongly people can object to something as simple as translating => into 'implies'.

Such simple theorems as that of the existence of limits can only be denied by those who can't or won't read a basic textbook. And not only are simple proofs of such theorems theoretically checkable by such as a Turing machine, but, practically, in real life, just as a Turing machine is meant to be an abstract paradigm of calculability, any clerk who can use an adding machine could, over a period of just a few months, follow the steps from the axioms of first order logic to verify that the axioms of set theory imply the existence of limits of convergent real valued sequences.

And your example of disagreements about the meaning of implication is answered by just noting that mathematics uses a semantics for the symbol '->' that is fixed. Of course, anyone is free to use a different semantics. But that's not what is at stake. All mathematical logic does is show us that IF we use the standard semantics, THEN implication works a certain way. To think that disagreements about what English 'if-then' means entails ambiguity for mathematics itself is fundamentally mistaken.

And the method of models does give a way of understanding formal language. And though there is no perfect method for "translating" formal language into natural language (especially since there is not even a claim or even an aspiration that formal languages admit of such translations), at the end of the day, when we point to the formula on the blackboard saying that the sum equals 1, then, as I said, there is no reasonable denial of it.

But, perhaps you'd say, if formal language can't provide natural language interpretation, then formal languages fail the job. Then what does NOT fail the job? Natural languages themselves, obviously, give rise to too much and too great ambiguity and grounds for interminable disputes. At least formal languages give us formulas to which we can point and say, "Whether this captures your natural language sense of things or not, the formula is a theorem." Moreover, any fairly intelligent person, whether a student or professional mathematician, can follow the formulas to understand what they're meant to convey. That there is no perfect method for reaching this understanding is not an argument against (1) the objectivity that formalization provides (an objectivity that is embodied syntactically by our being ensured an effective method to check that an argument is a proof, and semantically by the method of meaning being given by models) and (2) the fact that just about anyone who is fairly intelligent can understand the meanings of the formulas, and in a great number of instances, the formulas are not just understandable, but easily understandable.

I would hold that the 'meaning' of mathematical terms is contained just as much in the images and ideas people have in their heads as in their formal definition.

That's fine. I don't know of anyone who claims that formalization replaces imagery and such. Mathematics is an activity of human imagination and visualization, of course. That's not inconsistent with formalization. A crude analogy: the study of architecture requires imagination, but once the design has been imagined, in order for it to be evaluated objectively, it needs to be symbolized.

Mathematicians often reason visually/geometrically - they don't manipulate strings of symbols.

Different mathematicians do different amounts of both. No one claims, including proponents of the most narrow philosophy of formalism, that mathematicians shouldn't use imagination and visualization. That would be ludicrous. Cogitation through visualization and formalization. Those are two different aspects of mathematics. Without deep understanding as through visualization, there'd probably be very little accomplished in mathematics. And without formalization, there is a lack of a method of objective verification that the results of these cogitations are indeed sentences that are provable or true as provided by the posited truth of axioms. However a mathematician comes up with his results is his business. But mathematical journals don't publish mere mathematical cogitations in whatever prose the mathematician wishes to convey his cogitations. Rather, mathematical journals publish proofs of theorems and axiomatizations of new theories as these proofs and axiomatizations are given in semi-formal mathematical prose that can be seen to admit of formalization.

The purpose of formalization is to give a rigorous definition of ideas and concepts that already exist anyway.

First, even if that were true, it wouldn't invalidate formalization. Probably the primary purpose of formalization is to make mathematical arguments subject to objective verification.

Second, from formalization and rigorous axiomatization have come entire fields of mathematical ideas. While the first task of formalization probably is to make existing notions rigorous, from this task itself has come a wealth of mathematical ideas.

[continued next post...]


Mathematicians had no problems working with real numbers for centuries before Dedekind gave us a formal definition.

I don't think it is true that there was a problem-free theory of real numbers for centuries prior to Dedekind. Anyway, problems or not, one of the things the construction of the reals provides is a comprehensive and lucid understanding of how the reals work in relation with the rest of mathematics, and free of such nebulous notions as that of the infinitesimal. (That comment does not reflect upon theories that may give precise definitions of 'infinitesimal' nor upon non-standard analysis, which stems from the fact of non-standard models.) It may not matter to you personally, but mathematicians have enjoyed and built upon this accomplishment. And for people like me who would like to just understand mathematics, such accomplishments greatly assist our understanding. Instead of just being given a list of claimed properties of certain number systems, we are given justification and reason and understanding of why these properties hold and how the many systems all fit together. This is a deeper basis, and for many mathematicians, not just a more intellectually satisfying basis, but a more productive one as it has given rise to so many of the discoveries of twentieth century mathematics.

And mathematicians today do not think of real numbers as being 'cuts' or 'Cauchy sequences' while they manipulate them, unless perhaps they are teaching a class on foundations of mathematics.

So what? A carpenter doesn't think of wood and nails as being molecules and atoms while he builds a house. Anyway, the construction of the real numbers by method of equivalence classes of Cauchy sequences or as Dedekind cuts is standard textbook mathematics. In fact, one of the purposes of these constructions is to prove that the reals thus constructed have the properties that the mathematician needs them to have to be able to work with them without having to think about how they were constructed.
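To make the Cauchy-sequence picture a bit more tangible, here is a toy Python sketch (my own simplification, not the textbook construction): represent a real by a sequence of rationals, and identify two representatives whenever their difference tends to 0. Under that criterion the truncations 0.9, 0.99, 0.999, ... and the constant sequence 1, 1, 1, ... represent the same real number.

# Toy sketch of the Cauchy-sequence idea (a deliberate simplification):
# a real is represented by a sequence of rationals, and two representatives
# name the same real when their difference becomes arbitrarily small.
from fractions import Fraction

def truncation(n):        # the n-th truncation 0.99...9 (n nines)
    return 1 - Fraction(1, 10**n)

def one(n):               # the constant sequence 1, 1, 1, ...
    return Fraction(1)

# The difference between the two representatives is 1/10**n, which is
# eventually smaller than any fixed positive rational.
for n in (1, 3, 6):
    print(n, one(n) - truncation(n))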

One can approach the reals as if they are given axiomatically, by fiat, as some existing complete ordered field. However, one might ask the question: What does it mean to say that such a complete ordered field exists? Mathematically, this existence is proved by constructing the reals set theoretically. Hey, if you or anyone isn't interested in this, then fine. But I don't see what is the valid objection to proving things from axioms!

Moreover, even if one takes the existence of a complete ordered field as a given, to do calculus, one still has to use sets, functions, etc. In this context, the construction of the reals ties the study all together.

Further, while a mathematician doesn't have to know the construction of the reals to work with the reals, probably a mathematician doesn't have to know the proof of the existence of limits to work with limits. So, following your reasoning, one could say at just about any point in the exposition of a mathematical theory, "You don't need to know anything that came before this point, since all you need to do is work with what's now been established from this point on." And that's for the most part true! Usually, if you want to pick up a mathematical thread at any point, if you trust what's been established, you can continue on "as if" what's been established is given axiomatically or by your own insight or whatever. But, again, so what? If you want to work with the reals as a given, fine. And if other people want to construct the reals first, then what exactly is your problem with that?

If one trusts the Pythagorean theorem to work, then one can use it without knowing of its proof. But some people, such as ancient Greeks and mathematicians to this day, are interested in proving such generalizations as the Pythagorean theorem. And this business of proving things has been, ever since the Greeks, almost entirely what mathematics is. And one of the things mathematicians have proven is that, from the axioms of set theory, there exists the set of real numbers. Again, what exactly is your problem with this?

I wouldn't say mathematics is empirical, but its formalization is guided by empirical concerns.

I don't think anyone denies that the impetus toward mathematics is man's desire to transform the physical world. Along the way, people who do mathematics often formalize. This need arises from at least a few bases (not necessarily in order of importance): (1) To keep record of the mathematics in concise and unambiguous notation, (2) To understand the precise relations among the mathematical ideas, (3) To make objective the verification of the correctness of mathematical arguments, (4) To hone mathematics down to a few simple and transparent principles so that the truth of all that is derived from these principles rests only upon the truth of these principles, (5) To satisfy intellectual interests.

Luckily, someone managed to sneak it [a certain mathematical notion] in by treating it as a limit of a sequence of functions. But if no one had managed to do this, we would just have revised the definition of 'function' in order to include the delta function (i.e., we would have changed the formalization because it was useful to do so).

Without comment on the function you mentioned, I pretty much agree with what you've said here. Yes, one adopts those axioms that one needs to develop the theory one wants to have. If there's a sentence that is not a theorem from your axioms, and if you want that sentence in your theory, then you adopt an axiom or axioms that imply that sentence. Of course.

And if science needed the continuum hypothesis to be resolved one way or the other, we would do it.

I agree with the gist of this: since the continuum hypothesis is independent of ZFC, if we want the continuum hypothesis as a theorem, then we have to adopt an axiom or axioms that imply the continuum hypothesis. But, other than that, I don't know what you mean by "resolved".

If Dedekind hadn't managed to produce his definition of real numbers, we would just include 'real numbers' as being an irreducible undefinable term within our formalization.

If the real numbers could not be constructed from the axioms of set theory, then, if we want to have them, we'd have to adopt stronger axioms. Yes, that's true. So what? The real numbers are constructible from the axioms of set theory. Your point seems to be that whatever we can't construct, we can just take axiomatically instead. There is no mathematician who does not know that!

Complex numbers became accepted because it was shown that you could do interesting things with them - by the time Hamilton (?) gave a formal definition in terms of ordered pairs, they had already become a standard part of the mathematician's toolkit.

The natural numbers were part of a mathematician's world before formalization, as were circles and squares and all kinds of things. If one claimed that mathematics is impossible without formalization, then you'd have a point. But no one is claiming that. In general, you keep knocking down all these strawmen. You keep wanting to dispute things that no one, to my knowledge, has ever claimed.


Mathematics is unambiguous. When there is a mathematical formula, or a symbol, it can only mean one thing and nothing else.

Formulas have unique readability, which is a syntactical feature that we take to be a form of unambiguousness, and there are effective methods for checking that arguments are proofs. But semantically, even in the most formal mathematics in first order logic, sentences and theories have meaning only per a structure, and sentences and theories have more than one model. And some theories have models that are not isomorphic with one another, so these theories are, in this sense, definitely ambiguous.

Infinity is a tricky business.

Especially if we don't have axioms for it.

Now how is mathematics any different from language? Once you define the terms you use in a statement of natural language, it's as clear as a mathematical equation. The only difference between language and math is a purely sociological one: mathematicians are much more likely to bother to define their symbols explicitly.

Natural language does not have effective (by Church's thesis, meaning 'recursive' etc.) methods to check syntax such as for well-formedness and whether an argument is a proof. Moreover, anyone who has dealt with, for example, bound variables in formulas of even medium complexity, knows that it is not true that for any given mathematical formula one can, as a practical matter, convey it unambiguously in natural language.

Debate on something is never superfluous, not as long as there are honest but confused people struggling to understand some truth.

I quite agree. However, in some cases we may need to judge whether the participants are honestly confused or whether they are confused due to a willful refusal to inform themselves about some basic mathematics.

Edited by LauricAcid

Formulas have unique readability, which is a syntactical feature that we take to be a form of unambiguousness, and there are effective methods for checking that arguments are proofs. But semantically, even in the most formal mathematics in first order logic, sentences and theories have meaning only per a structure, and sentences and theories have more than one model. And some theories have models that are not isomorphic with one another, so these theories are, in this sense, definitely ambiguous.

Thanks for this. I was referring to what you call "unique readability" when I spoke of mathematics being unambiguous, as I have explained in subsequent posts, though I'm not sure with how much clarity. It is rather difficult for me to study mathematical terms in Croatian and then talk about them in English.


Debate on something is never superfluous, not as long as there are honest but confused people struggling to understand some truth.

de·bate

v. de·bat·ed, de·bat·ing, de·bates

v. intr.

  1. To consider something; deliberate.

  2. To engage in argument by discussing opposing points.

  3. To engage in a formal discussion or argument. See Synonyms at discuss.

  4. Obsolete. To fight or quarrel.

Debate on this, as well as on any established fact, is superfluous. Someone who doesn't understand the proof and asks a question about it doesn't start a debate; he shows that he doesn't understand it. In other words, in order to understand it, he needs to learn the definitions and theorems leading to this proof.

That something is debatable means that there are certain points which haven't been cleared up. A mathematical proof eliminates all such points, which means debate is not only superfluous, but also impossible.

If you meant "one thing and nothing else in a given context, I apologize for misunderstanding.

I thought that by default we speak of things which are valid in a given context, and that it is the opposite - when we speak of things which are valid in any context - that needs to be stressed. If it is the other way around, I apologize.

Edited by source

mathematics (set theory and analysis, which is reducible to set theory) asserts, as an axiom, that things are equal iff they have the same members

That assumes that all "things" are sets--which is a completely arbitrary assumption to make. I have argued against it extensively here.


1. Usually set theories, such as Z set theory, don't even mention sets. The formal theory itself does not declare that the domain of interpretation be a domain of things that are only sets. Sets are the intuitive and intended interpretation, but such an interpretation is not required and the formalization of mathematics does not depend upon it. And even theories such as NBG that have a predicate that is intuitively understood as standing for sethood do not require that the formal predicate symbol be thus interpreted.

2. Moreover, one can quite easily adjust Z set theory so that it allows individual entities that are not sets, and without significant change to the mathematics such as under discussion here.

3. And even if the domain were limited to sets, this is not a philosophical assumption that only sets exist. It is not required that a theory be a theory of all possible existents. One may, for example, have a theory in which the intended domain is beach balls and which describes how beach balls bounce among themselves, to take a silly example. This does not entail that one believes that beach balls are the only things that exist, as one may offer theories to account for other domains.

4. The axiomatization of mathematics by use of set theory is just that: an axiomatization. So, if one prefers a non-set theoretic axiomatization then one is free to present one.

5. One of the reasons set theory is limited, in its intended and intuitive interpretation, to sets is that sets are all that are NEEDED for mathematics. In other words, since in this context set theory is proposed not as an account of all existence but only of mathematics or abstract mathematical objects and their relations, there's no need to have the theory account for or even include all existents. It turns out that we do not need cartoon characters and crumbs of food and empty cans of soda and all other things in the universe to make a theory about mathematical functions and real numbers.

/

As to the thread you linked to, I am the thread starter. I did not respond to your last post as I was disinclined to post in that forum after I had certain communications with the forum administrator. That said, your post there reflects that you have some serious misconceptions about this subject.

/

Note: My previous mention of Church's thesis was unintentionally gratuitous, since the thesis is not required to assert that anything that is Church-Turing computable is indeed computable, but rather the thesis asserts the converse.

Edited by LauricAcid

Here's a proof:

Definition: .999... = lim(k -> inf) SUM(j = 1 to k) 9/(10^j).

Let f(k) = SUM(j = 1 to k) 9/(10^j).

Show that lim(k -> inf) f(k) = 1.

That is, show that, for all e > 0, there exists n such that, for all k > n, |f(k) - 1| < e.

First, by induction on k, we show that, for all k, 1 - f(k) = 1/(10^k).

Base step: If k = 1, then 1 - f(k) = 1/10 = 1/(10^k).

Inductive hypothesis: 1 - f(k) = 1/(10^k).

Show that 1 - f(k+1) = 1/(10^(k+1)).

1 - f(k+1) = 1 - (f(k) + 9/(10^(k+1))) = 1 - f(k) - 9/(10^(k+1)).

By the inductive hypothesis, 1 - f(k) - 9/(10^(k+1)) = 1/(10^k) - 9/(10^(k+1)).

Since 1/(10^k) - 9/(10^(k+1)) = 1/(10^(k+1)), we have 1 - f(k+1) = 1/(10^(k+1)).

So by induction, for all k, 1 - f(k) = 1/(10^k).

Let e > 0. Then there exists n such that 1/(10^n) < e.

For all k > n, 1/(10^k) < 1/(10^n).

So, |1 - f(k)| = 1 - f(k) = 1/(10^k) < 1/(10^n) < e.

Therefore lim(k -> inf) f(k) = 1; that is, .999... = 1.
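As a quick sanity check of the key identity 1 - f(k) = 1/(10^k) used above, here is a short Python verification with exact rational arithmetic (my own addition, not part of the proof):

# Spot-check of the identity 1 - f(k) = 1/10**k with exact rationals.
from fractions import Fraction

def f(k):
    return sum(Fraction(9, 10**j) for j in range(1, k + 1))

for k in (1, 2, 7, 20):
    assert 1 - f(k) == Fraction(1, 10**k)
print("identity 1 - f(k) = 1/10**k holds for the sampled k")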


This topic is now closed to further replies.