Objectivism Online Forum

But what does .9999999999… MEAN?


My post is about the importance of definitions. We had a thread about .9999999999… which extended over 5 years – unnecessarily so. This happened, in part, because the concepts, terms, and notation used were sometimes not fully defined or explained.

Before discussing whether .999… is or is not equal to 1, we must specify and agree on what we mean by this string of signs, the mysterious part being the three dots at the end.

The problem is that the notation ".999…" has no obvious meaning by itself, in contrast with 1, 2, 1000, … and (hopefully) also 1/9, 9/10, 999/1000 and other palpable numbers.

For example, one might interpret ".999…" as some kind of process: one repeatedly appends, say every second, a "9" at the end. One gets:

.9, .99, .999, .9999, …, .99999999999, …

These numbers approach 1 more and more closely every second; for example, the last one written differs from 1 by only .00000000001. However, these numbers will never equal 1.

Therefore, if we look at ".999…" as this kind of unending process, then, clearly,

.999… is not equal to 1
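To make this process-interpretation concrete, here is a small Python sketch (my illustration, not part of the post; `partial_sum` is a name I chose), using exact rationals so that no floating-point rounding hides the gap:

```python
from fractions import Fraction

def partial_sum(n):
    """The number 0.99...9 with n nines, built as 9/10 + 9/100 + ... + 9/10^n."""
    return sum(Fraction(9, 10**k) for k in range(1, n + 1))

# Each step gets closer to 1, but every finite stage falls short
# by exactly 1/10^n -- never by zero.
for n in (1, 2, 5, 11):
    gap = 1 - partial_sum(n)
    assert gap == Fraction(1, 10**n)
```

This is exactly the observation above: every written stage of the process is strictly less than 1.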

But the notation ".999…", and also ".(9)" or ".9999999 repeating", is used in mathematics with a completely different, and very precise, meaning. Namely, it is used to mean "the limit" of the following sequence of numbers:

.9, .99, .999, .9999, …, .99999999999, …

The concept of limit is quite technical; it was explained on this forum a couple of times, but the details are not important here. What is important, however, is that:

- the concept of "limit" can be defined quite rationally and consistently,

- and, when used, provides (generates, produces, results in) a specific number

Thus the notation ".999…", understood as this limit, denotes something specific; that is, it has a specific value, like 1, or maybe 5, 0, 1/2, 99/100, etc.

Now, using the technical definition of limit, one can show (as was already done on this forum) that this particular limit is equal to 1.

Therefore, if we understand ".999…" as a limit, then, clearly,

.999… = 1

To avoid misunderstandings: the line above does not mean that the left hand side “tends” to 1, or “approaches” 1, or something like that. It means that it is exactly 1.
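The limit reading can also be made concrete. The sketch below (an illustration, not from the post; `partial_sum` and `n_needed` are names I chose) checks the epsilon-style condition under which the limit is exactly 1: for every positive tolerance, some tail of the sequence stays within that tolerance of 1.

```python
from fractions import Fraction

def partial_sum(n):
    """0.99...9 with n nines, via the closed form 1 - 1/10^n."""
    return 1 - Fraction(1, 10**n)

def n_needed(eps):
    """Smallest n such that |1 - partial_sum(n)| < eps."""
    n = 1
    while Fraction(1, 10**n) >= eps:
        n += 1
    return n

# For every tolerance eps > 0, the sequence eventually stays within eps
# of 1 -- this is exactly the sense in which the limit *is* the number 1.
for eps in (Fraction(1, 2), Fraction(1, 1000), Fraction(1, 10**9)):
    assert abs(1 - partial_sum(n_needed(eps))) < eps
```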

To summarize:

when we write .999… = 1, we have both a definition and a theorem. The definition concerns what we mean by .999… (we mean the corresponding limit), while the theorem concerns what this limit is (it is 1).

Alex



Due to the constructibility of complex numbers from real numbers, real numbers from rationals, rationals from integers, integers from naturals, and naturals from sets, then if there is to be any philosophical issue at stake, the issue must be: Some objection to set theory or some objection to one of the constructions. If there is any objection to either, I can only imagine it comes in the form, why would we construct a mathematical system that is like that? Here, the answer must always be, "Because it serves a purpose." In the case of the natural numbers, it serves counting; in the case of integers, it serves counting quantity from a privileged point (namely, 0); in the case of rationals, it serves quantifying ratios; in the case of reals, it serves measuring without assuming a least unit of measurement, or imposing other geometrical constraints on the objects measured (thus making the system extremely general and applicable to many objects); in the case of complex numbers, I don't think I have such a succinct way of stating its purpose, but it is useful in representing rotations in R2, which appear in electrical engineering, quantum physics, and so on. How these mathematical structures succeed in representation is an interesting question, the answer to which is--I suspect--that there is, so to speak, a homomorphism between the behavior of our mathematical structures and aspects of reality.



Pat Corvini discusses continuous quantity in depth in her lecture "2, 3, 4 and All That: The Sequel". Familiarity with the method, i.e., knowing that you can always append another 9 to the sequence, suggests that you could do so for a very long time (at least until you fall asleep or, if you are very persistent, until you pass away) without exhausting what the '...' implies. Each time you extend the decimal expansion, refining the resolution or precision desired, the result is only meaningful so long as you can define the new unit you are invoking. In the context of length, a calibration of 0.9999" becomes difficult to confirm with a micrometer. More precision can be achieved with the aid of an electron microscope. But when you can no longer perceive the unit, even with the aid of sophisticated instruments, you are merely invoking the method without any correspondence to reality, which is to say that it is no longer meaningful, or defined.


I'm pretty sure most people were clear about what "..." means. The part that was unclear, even to me for a long time, is that while the two terms "1" and "0.999..." are interchangeable mathematically, they are not interchangeable outside that context: the concepts of decimals and limits are confined to a specific context, while counting numbers have a wider range of applicability.

The confusion came simply from equivocating on which concept is meant by the term "1" - the familiar concept derived from counting objects in the real world, versus the concept with a very specific function in mathematics. Folks asserting that 1 is 0.999... - regardless of context - are simply making a nonsensical assertion, since "0.999..." has a very specific context in which it makes sense (as you've explained), while "1" refers to different concepts with their own applicable contexts.

To avoid confusing these contexts, one should simply write out the word form when referring to counting (e.g. "one", "two", "three" apples), and write the Arabic numeral when referring to the mathematical context. Thus: "one is not 0.999..."

Edited by brian0918


I think some confusion comes from the idea that every entity in the real numbers refers to something, and so .999999999999999999999999999999 has to refer to a real-world length of this measure, for some unit. This is not so. The mathematical structure is able to describe arbitrarily small lengths, some of which may possibly not exist. Moreover, even though the real numbers are most commonly used to describe distance, that's not their only use: note the study of probability, or the measurement of force by a real number. The real number system does not describe anything in particular, but instead is left in complete generality to be useful as a language for describing many different things. So one cannot object that 9/10 + 9/100 + 9/1000 + ... doesn't actually refer to an infinite sum but only to an arbitrarily large sum, on the grounds that nothing in reality is an infinite sum: it is a feature of the mathematical structure that some elements are infinite sums. The only way to object to this would be to object to the construction of the real numbers, but that construction is almost entirely derivative of the construction of the rationals, and so on, as I explained above. The only way you might object to this construction is to object to the use of sets which have infinitely many members, but then you would be objecting to the principle which allows us to construct the natural numbers--and presumably you do not have a problem with the idea that there are infinitely many natural numbers, rather than arbitrarily many, as if you created new natural numbers as soon as you needed a very large number of them.

Now you might object, why would anybody construct a mathematical system that contains elements which are represented as the infinite sum of some other elements in the structure, but again I believe this comes down to the usefulness of the structure and its extreme generality of subject.



And this differs from 'concept of method' how?

Due to the constructibility of complex numbers from real numbers, real numbers from rationals, rationals from integers, integers from naturals, and naturals from sets, then if there is to be any philosophical issue at stake, the issue must be: Some objection to set theory or some objection to one of the constructions.
Possibly you could show that the objection is "mathematically equivalent" to an objection to set theory or an objection to a construction, but that would not be a valid proof that that's what the objection is. The objection is the ontological claim -- either a false claim, or the lack of a claim. And the objection is not of the form "you're violating my rights by doing this", it's more like the objection to masturbation, that it's an inferior substitute for the real thing.

There is no objection if one recognizes that "This is something that you can also do with this method", just as "unicorn" is something you can do with methods of imagination.


And this differs from 'concept of method' how?

I don't know--what's the concept of method, besides a book I never read?


If the objection is that there is no such number, ontologically, then that's not an objection to the assertion that there is a real number which is equal to an infinite sum, since mathematics does not presuppose the ontological existence of its entities. That's just crazy Platonism.

I have no idea why you brought up the violation of rights or masturbation. So I'm going to just leave that be.

But if there is an objection to the use of real numbers, and real numbers just are defined as a construction out of rationals, and so on, then any objection has to be to the construction or the foundation of the construction. There is no other point at which one might challenge the use of real numbers, since there just isn't anything else to them.


Math is not my strong point, so please bear with me and don't throw too many advanced abstractions at me in response to what I am about to say...

How can it possibly be said that .999999999999… = 1

when 1 is a full unit

and .999999999999… is not a full unit?

In many cases it may be pragmatic to substitute .999999999999… for 1, and you may get results that are indistinguishable.

But pragmatism doesn't make something actual, does it?


One response is that this is just what the number 1 and the number .999... mean in the system of real numbers. To convince you of that, I'd have to take you through Dedekind cuts and actually construct the real numbers for you, which would be extremely painful for a non-mathematician to endure. However, another way of explaining it is easy if you will accept the following principle: For any two real numbers, a and b, if |a - b| < |e| for arbitrarily small |e|, then a = b. Remember that |a - b| is basically the distance between a and b. So what this principle says is that, for any real number, the only real number which is arbitrarily close to it is itself. A stronger way of putting it is: Two numbers are distinct if and only if there is some distance between them.

I will also make use of the fact that, for any two real numbers, there is a rational number in-between them.

Now before I give the full proof, I'll just say that it should be obvious that 1 - 9/10 - 9/100 - ... is smaller than any positive real number, i.e. there is no distance between these two numbers on the number line. If you get that intuitive idea and agree with it, then you don't need to bother with the details of the proof below, but I provide it anyway as a guarantee that a proof exists. There are also other proofs that rely on other assumptions about the real numbers which may be preferable, but I think this is the one that gives you the most intuitive picture of why the two are equal: They are, in some sense, so close together on the real number line that they cannot be distinct.

Also note that both of the principles to which I'm appealing for this proof are immediate results of the construction of real numbers by way of Dedekind cuts.

So I will now prove that, for any arbitrarily tiny real number |e|, the distance between 1 and 9/10 + 9/100 + ... is less than |e|, which will prove that the two are equal. The proof is not hard, since the distance between 1 and 9/10 + 9/100 + ... is |1 - 9/10 - 9/100 - ...| = 1/10 - 9/100 - 9/1000 - ... = 1/100 - 9/1000 - 9/10000 - ... and so on. Now we take |r| to be any rational number between 0 and |e|, so |r| = p/q for some natural numbers p and q which are reduced to their least values (i.e. they share no common factors). Whatever q is, there is some number 10^s which is larger than q, and we will eventually get to a point in calculating 1 - 9/10 - 9/100 - ... where we represent this as 1/(10^s) - 9/(10^(s+1)) - ..., since we will obviously reach every power of 10 by proceeding as above. It is obvious from our knowledge of rational numbers that this is equal to 1/(10^(s+1)) - 9/(10^(s+2)) - ..., and that this is less than p/q, since its only positive term has the smallest possible integer as numerator and a denominator larger than q, so it is at most 1/(10^s) < 1/q ≤ p/q. Thus |1 - 9/10 - ...| < p/q < |e|.
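The argument can be checked numerically for sample tolerances. Here is a small sketch in exact rationals (my illustration, not part of the post; `remainder` is a name I chose for the telescoped difference):

```python
from fractions import Fraction

def remainder(s):
    """1 - 9/10 - 9/100 - ... - 9/10^s, which telescopes to 1/10^s."""
    r = Fraction(1)
    for k in range(1, s + 1):
        r -= Fraction(9, 10**k)
    return r

# Following the proof: given a rational p/q below the tolerance, choose s
# with 10^s > q; then the remaining distance is 1/10^s < 1/q <= p/q.
for p, q in [(1, 7), (3, 1000), (1, 10**6)]:
    s = 1
    while 10**s <= q:
        s += 1
    assert remainder(s) == Fraction(1, 10**s)
    assert remainder(s) < Fraction(p, q)
```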

[edit: added the word "positive" to turn a sentence from false to true.]

Edited by aleph_0

Hopefully everyone's convinced that the real number formalism is okay at this point. A reason to accept the usual construction of the real numbers is that it formalizes solving problems by use of sequences and iterations, that is, it's the simplest number system containing the rationals that's complete.

To see how the need for such a definition might arise in practice, consider the really easy linear equation

x = 0.1x + 0.9

This clearly has solution x = 1. You can think of this problem as asking "What number, when you divide by 10 and add 0.9, gives itself back?" Now other more complicated problems for which there are no explicit solutions often make use of an iterative procedure: plug a guess into the right hand side, use the result and plug it in again, etc, and hope that the answer gets closer and closer to something. You can actually approximate square roots this way pretty effectively, for example.
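One standard instance of such an iteration (a generic illustration of the idea, not something from the post) is Heron's method for square roots, which iterates the map x -> (x + a/x)/2 toward its fixed point, the square root of a:

```python
def sqrt_iter(a, steps=20):
    """Approximate sqrt(a) by the fixed-point iteration x -> (x + a/x) / 2
    (Heron's method). The iterates form a sequence converging to the
    fixed point x = sqrt(a), i.e. the x for which x equals the map of x."""
    x = a if a > 1 else 1.0
    for _ in range(steps):
        x = (x + a / x) / 2
    return x

print(sqrt_iter(2.0))   # approximately 1.41421356...
```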

If you try this for the above toy problem with the initial guess x = 0, you get the "defining sequence" of 0.999..., namely if f(x) = 0.1x + 0.9, then

f(0) = 0.9

f(f(0)) = 0.99

f(f(f(0))) = 0.999

...

You can actually prove from the equation that this sequence is "Cauchy" (roughly, that its terms eventually stay arbitrarily close to one another), and so we can talk about the "limit" of this sequence, which really ought to "be" the solution of the original equation (that is, an x for which x = f(x)), in the sense that going far enough out into the sequence gives you as good an approximation as you want to the solution. But the limit is also what is typically referred to as 0.999..., so we should regard them as the same thing in the sense of real numbers. Now if you had started with another initial guess, you'd get another sequence that still converges to 1, which is why real numbers are defined as equivalence classes of Cauchy sequences.
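The iteration described above can be run directly (a sketch in exact rationals, not from the post; `f` and `seq` are names I chose):

```python
from fractions import Fraction

def f(x):
    """The right-hand side of the toy equation x = 0.1x + 0.9, in exact rationals."""
    return Fraction(1, 10) * x + Fraction(9, 10)

# Iterating from the initial guess 0 generates the defining sequence of 0.999...
x = Fraction(0)
seq = []
for _ in range(5):
    x = f(x)
    seq.append(x)

# seq is 9/10, 99/100, 999/1000, ...: the gap to the fixed point 1 shrinks
# by a factor of 10 at each step, which is why the sequence is Cauchy.
assert seq[0] == Fraction(9, 10)
assert all(1 - s == Fraction(1, 10**(k + 1)) for k, s in enumerate(seq))
assert f(Fraction(1)) == 1   # x = 1 solves x = f(x) exactly
```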

Edited by Nate T.

I don't know--what's the concept of method, besides a book I never read?

A 'concept of method' deals with how we deal with numbers. Numbers actually have their basis in reality: 2, 3, 4 each refer to a quantity which we can perceptually grasp. As we accumulate a greater quantity, at some point we can no longer identify the quantity by visual means alone. In a base-10 system, when we reach a quantity of 10, we move to a new column of notation. 10 units is 1 group of 10 units and no individual units; 10 groups of 10 units we can express as 100, and so on. We understand the method: each column refers to a group of 10 units of the column to its right, and each time we accumulate 10 groups of the current column, we keep track of it by adding an additional column to the left. As a concept of method, we understand how we can continue to add columns as required. The columns continue to refer to the relationship of the group to one of its members taken as, and serving as, a unit. Because we are comfortable with our understanding of the method of adding a column, we believe we can drop the relationship to the unit in favor of continuing to add columns without reference to what each column refers to. This leads to the confusion that an infinity of notational columns is available for usage. It is when this relationship or context is dropped that the tie to perceptual reality is dropped, rendering the procedure a 'concept of method' divorced from the referents which gave rise to it in the first place. Again, you can add digits until you fall asleep or die and still not exhaust the method, although the result can be so far removed from reality that it would appear to make the unimaginable imaginable.
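Mechanically, the column method described here is just repeated division by the base. A small sketch (my illustration, not from the post; `columns` is a name I chose):

```python
def columns(n, base=10):
    """Decompose n into place-value columns, least significant first:
    each column counts groups of `base` of the column to its right."""
    digits = []
    while n > 0:
        n, d = divmod(n, base)
        digits.append(d)
    return digits or [0]

print(columns(907))   # [7, 0, 9]: 7 units, 0 tens, 9 hundreds
```

The method itself never runs out: for any quantity, the loop terminates, and a larger quantity simply calls for one more column.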



I don't understand your saying .999... is the limit. The limit of what? That 1 is the limit of .999..., OK. But .999..., if it is a limit, must be the limit of something else; of what?

-- Mindy


One response is that this is just what the number 1 and the number .999... mean in the system of real numbers.

Is it technically correct notation to write 1 = 0.999...? 1 is an integer, not a real number; 1.0 is a real number. Comparing commensurate types would be 1.0 = 0.999...

If the objection is that there is no such number, ontologically, then that's not an objection to the assertion that there is a real number which is equal to an infinite sum, since mathematics does not presuppose the ontological existence of its entities.
Nobody in their right mind thinks that numbers are entities. Ontological evasion, as practiced by a non-trivial subset of mathematicians, includes the refusal to recognize the fact that a "mathematical object" is purely the product of a certain algorithm, and that "=" is a technical mathematical symbol which, oddly, is fundamental yet undefined (and clearly not axiomatic).
But if there is an objection to the use of real numbers, and real numbers just are defined as a construction out of rationals, and so on, then any objection has to be to the construction or the foundation of the construction.
Ah, well, no: an objection could be to their anti-concept status (I do suggest reading up on "anti-concept" to understand exactly what that means). Recall that definitions follow the identification of fact. So if a mathematical concept is posited arbitrarily (or without definition, or contrary to the cognitive -- "information-reducing" -- function of concepts), then it is objectionable. You're putting the cart before the horse.

BTW, you say "For any two real numbers, a and b, if |a - b| < |e| for arbitrarily small |e|, then a = b." For example, take a = 2.3 and b = 2.31. Then |2.3 - 2.31| = .01, and for e = .02, which is an arbitrarily small value for e, |a - b| < |e|; therefore 2.3 = 2.31. You're trying to give an "intuitive" proof, i.e. one that appeals to common-sense understandings of terms, which just aren't applicable to mathematical concepts like "limits".



Just one objection. In mathematics, the meaning of "arbitrarily small" is "no matter how small you wish to make it." So, e=.02 is NOT arbitrarily small. For any given value you give for e, I can always, for example, divide it by two and get it smaller. It doesn't mean "come up with a number arbitrarily", which seems to be how you are using it. I don't want to dive into the larger conversation, but I did want to correct that point, as it was an error in usage of a mathematical term.
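This correction can be illustrated concretely (a sketch of my own, not from the post; `distinct_witness` is a name I chose). For any two distinct numbers there is always a positive tolerance smaller than their gap, so a single fixed value like e = .02 never satisfies "for arbitrarily small e":

```python
from fractions import Fraction

def distinct_witness(a, b):
    """For distinct rationals a and b, return an e > 0 with |a - b| >= e,
    showing the gap |a - b| is NOT below *every* positive tolerance."""
    gap = abs(a - b)
    return gap / 2

# The 2.3 vs 2.31 example from the earlier post: the gap is 1/100, and
# e = 1/200 witnesses that the hypothesis of the principle fails.
a, b = Fraction(23, 10), Fraction(231, 100)
e = distinct_witness(a, b)
assert 0 < e and abs(a - b) >= e
```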

That was actually the point: that it's a complex method-concept in need of definition itself. Invoking the concept "arbitrarily small" cannot explain limits.


It seems that the concept of method claims that in mathematics we are still dealing with quantities of real-world things, and that's how my claim differs from it. I would agree that, at introductory stages of language education, when we speak of natural numbers, we are in some sense referring to a quantity of things. However, that's not what we do in mathematics at any stage. For instance, if it turned out that the universe were finite--as far as I know, there is no reason to suspect that it's finite or infinite in, say, the number of electrons that exist--that would constrain our initial notion of counting because there would be some very large quantity, beyond which no quantity would successfully refer.

Arithmetic would remain oblivious to this fact, since arithmetic assumes an infinity of natural numbers, and so within the system of arithmetic, every number is meaningful and no number attempts to refer at all. Instead, numbers are meant to represent. At the level of arithmetic, this representation is so near to what it attempts to represent, that it's hard to see the difference between the representation and the thing represented. But in arithmetic, we simply define a primitive element, which we call 0, and then define a function called the successor function, and define our domain as the closure of 0 and the successor function. From there we define notions of addition, multiplication, and limited subtraction and division. This is not meant to refer to the quantity of apples in some basket, or any quantity at all, though this is meant to behave in a way that mirrors counting--so that you may use this system in order to do your counting for you. The reason for having such a system is because, even if there is a largest quantity, we may never learn what it is or even care what it is. It is more desirable to simply have a system which is agnostic about there being a largest quantity, and which is effective under any hypothesis.
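The construction sketched above (a primitive element 0, a successor function, and addition and multiplication defined from them) can be made concrete with a minimal sketch. The tuple encoding below is my own assumption, chosen only to keep the example self-contained; it is not how any real number system is implemented.

```python
# A minimal sketch of the system described above: a primitive element 0,
# a successor function, and addition/multiplication defined recursively.
# Naturals are encoded as nested tuples: zero = (), succ(n) = (n,).

zero = ()

def succ(n):
    return (n,)

def add(m, n):
    # m + 0 = m;  m + succ(k) = succ(m + k)
    return m if n == zero else succ(add(m, n[0]))

def mul(m, n):
    # m * 0 = 0;  m * succ(k) = (m * k) + m
    return zero if n == zero else add(mul(m, n[0]), m)

def to_int(n):  # only for display; the system itself never "counts" anything
    return 0 if n == zero else 1 + to_int(n[0])

two = succ(succ(zero))
three = succ(two)
print(to_int(add(two, three)))  # -> 5
print(to_int(mul(two, three)))  # -> 6
```

Nothing in the definitions refers to apples or baskets; the system merely behaves in a way that mirrors counting, which is the point being made above.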

So in the end, arithmetic is nothing more than a linguistic practice which allows us to describe the world. When we group things into fives, count the groups, and multiply to obtain the quantity of things, we are using an abstract mathematical construct which behaves the same way that counting does. How we know that the behavior is the same is hard to spell out, but seems beside the point for this particular conversation.

Nobody in their right mind thinks that numbers are entities. Ontological evasion, as practiced by a non-trivial subset of mathematicians, includes the refusal to recognize the fact that a "mathematical object" is purely the product of a certain algorithm, and that "=" is a technical mathematical symbol which, oddly, is fundamental yet undefined.

"=" in most, if not all, mathematical domains, is defined as the set of all pairs, (x, x), or in logic, as a relationship between elements in a given domain which is reflexive, transitive, and symmetric.

Ah, well, no, an objection could be their anti-concept status (I do suggest reading up on "anti-concept" to understand exactly what that means). Recall that definitions follow identification of fact.

Definitions in mathematics are the stipulation of language rules in a particular domain. As argued above, they are meant to mirror certain facts, but in themselves are supposed to retain a generality which is not wedded to any particular facts, so that they may be applied to any domain where the facts warrant the use of a particular mathematical structure. Thus we do not talk about the real numbers which measure length versus the real numbers which measure velocity. We merely speak of all real numbers, which we may use to measure anything that is appropriately measurable by real numbers.

To clarify, this is the logical development of mathematics, though it is not the historical development. Naturally, our mathematical systems have been developed with real-world facts as the guiding principle for how we logically develop a given structure, but we often wish to generalize these structures in order to handle whatever comes our way.

BTW, you say "For any two real numbers, a and b, if |a - b| < |e| for arbitrarily small |e|, then a = b." Some examples are a=2.3 and b=2.31. So |2.3-2.31| = .01 and for e=.02, which is an arbitrarily small value for e, |a - b| < |e|. Therefore 2.3=2.31. You're trying to give an "intuitive" proof, i.e. one that appeals to common sense understanding of terms, which just aren't applicable to mathematical concepts like "limits".
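The counterexample above can be made concrete (a small Python check, added purely for illustration): satisfying one arbitrarily chosen small e proves nothing; the real criterion must quantify over every positive e.

```python
a, b = 2.3, 2.31

# One arbitrarily chosen small e is satisfied, yet a != b:
e = 0.02
print(abs(a - b) < abs(e))  # True -- but this alone proves nothing

# The actual criterion is |a - b| < e for EVERY e > 0;
# it fails as soon as e drops below the actual distance (0.01):
e = 0.005
print(abs(a - b) < abs(e))  # False
```

This is why a fixed "arbitrarily small" value cannot substitute for the universal quantifier in the definition.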

The notion of some element a being arbitrarily small in this context is just that: For all x, a < x. As I said above, the principle that, for any two real numbers, they are the same iff their distance is arbitrarily small, is a consequence of the construction of real numbers by Dedekind cuts. For elaboration, I recommend the appendix of Rudin's Principles of Mathematical Analysis.

Obviously I am giving an intuitive proof, since the individual to whom I was responding specifically asked not to discuss the issue with too much abstraction, and the full construction by Dedekind cuts definitely qualifies as too much abstraction. So I rely on a willingness to accept something that is not too difficult to see about the real numbers. But I'm doing a little more than just an intuitive proof: I'm promising that, with some added mathematical sophistication, a rigorous version of the same proof exists, which can be built entirely from set theoretic foundations.


Oh, right, and about real numbers being anti-concepts, that would be an objection either to the fact that sets do not refer, i.e. an objection to set theory itself, or an objection to the particular set of sets (of sets of sets of sets of ...) that are identified with the real numbers, i.e. an objection to the construction. So yeah, any objection to the use of real numbers is an objection to one of these two. I prefer to cast all such objections this way, in order to identify the exact point in the development of modern analysis to which one is objecting.



After having read, and before answering, the comments above, let me reformulate my first post in somewhat different terms, hopefully for more clarity.

1. The following constructs (lines, sequences of signs, notations)

.999…

.(9)

.9999999 repeating

are not self-explanatory, at least not for this audience.

2. Therefore, when anyone wishes to affirm/utter something specific about these, he must first specify what meaning he attributes (attaches) to those lines.

3. I listed two possible meanings/interpretations for these notations:

(a) The first was that they mean some ongoing process of adding additional 9s to the existing sequence of 9s. This is the interpretation to which some posters on the other thread seem to adhere, judging from their arguments for the (justified!) denial of the legitimacy of such statements as:

.999… = .(9) = .9999999 repeating = 1

Please note that this interpretation is not a common and accepted one; I would qualify it as a popular misconception, or even as an urban legend :-)

(b) The second interpretation, according to which those three lines are graphic/symbolic representations of the so-called "limit" of the sequence

.9, .99, .999, .9999, …, .99999999999, …

is adhered to by most mathematicians and, generally, by those who know and accept the concept of the limit of a sequence. The so-called limit (if it exists) is a specific and unique number, associated with the given sequence and defined by a special property it has with respect to every term of the sequence. Describing what this property is would be off-topic for this Epistemology-related thread.

As already mentioned, the sequence above does have a limit, and that limit is 1.

To recapitulate the standard interpretation:

Definition #1: .9, .99, .999, .9999, …, .99999999999, … is called a sequence

Definition #2: .999… means the limit of the above sequence, as do .(9) and .9999999 repeating

Theorem: The limit of the above sequence is 1
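The theorem can be checked numerically. Below is a sketch (in Python, with exact rational arithmetic; the implementation choices are mine, not part of the thread): the n-th term of the sequence is 1 - 10^-n, so for any epsilon > 0, every term far enough along lies within epsilon of 1, which is precisely what "the limit is 1" asserts.

```python
from fractions import Fraction  # exact arithmetic, no floating-point noise

def term(n):
    # .9, .99, .999, ... as exact rationals: 9/10 + ... + 9/10^n = 1 - 10^-n
    return 1 - Fraction(1, 10**n)

for epsilon in (Fraction(1, 100), Fraction(1, 10**9)):
    # find the first index past which every term is within epsilon of 1
    N = next(n for n in range(1, 50) if abs(1 - term(n)) < epsilon)
    print(f"epsilon = {epsilon}: every term from index {N} on is within epsilon of 1")
```

No term of the sequence equals 1; the claim is only about the limit, the unique number the terms approach arbitrarily closely.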

In the next post I will comment on some contributions to this thread.

Sasha


Schmarksvillian aptly corrected me; I should have written that arbitrary smallness of distance (or smallness of a non-negative value) is defined as: for all real x =/= 0, |a - b| < |x|.

No, that is NOT what I wrote in my private correction. (I don't know whether I'm allowed by the moderator to write the actual correction, but at least, for the record, I don't want the above formulation attributed to me).

Counting, and matching, and mathematics in general are methods we use to record the quantitative aspect of something. The goal of the method is only achieved if the method is followed. The method is followed only if each and every step is taken, in proper order, etc. Only after you have taken the final step does the result appear.

Division is such a method. A quotient is obtained if and only if the process is completed. However, mathematicians discovered there were division problems that could not be completed. What to do? There are two possibilities. One is to say that division isn't defined in those situations, just as we say division isn't defined when the divisor is 0.

The second tack is to mark those division problems with a special mark, so that the incompleteness of the method is taken into consideration when the quasi-result is used. That is how negative signs came into use, and how the "...", etc. notation for non-terminating decimals came to be. (It is also how each new type of number came to be defined: Whole, Integers, Rational, etc.)

That means that any non-terminating decimal is explicitly not a quotient. It is infinitely close to the quotient we "would" obtain, if one could be obtained, but one cannot be obtained. It is an approximation, as limits are. This is useful information, and it gets used extensively. But its usefulness requires its being interpreted as what it is, not as an un-marked number would be interpreted.

The marked number fails to be a number in many ways. It can't be multiplied except in a very few cases. What is .999... times 2? Since two times 9 is 18, there must be an eight in it, its right-most digit, in fact. And, since the 8 ought to be the final digit, how do you indicate that? But if .999... were equal to 1, then .999... plus 1 ought to be 1.999..., which does not have an 8 in it. Now, in mathematics, if two terms are equal, they can be substituted for one another in arithmetical operations. But that is not the case with non-terminating quotients. They do not qualify as numbers.

The argument that says the distance between .999... and 1 on the number line is infinitesimally small, so that they are not separate, misunderstands infinitesimals.

Infinitesimals all have size.

There are infinite series that do, in fact, add to a finite number. Cut a quantity in half, repeatedly into infinity, then add it all back up, and the infinite series equals the original quantity, of course. But non-terminating decimals are not in this category.
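The halving construction described above can be computed exactly. The sketch below (Python with exact rationals, my own choice of illustration, not anyone's argument in the thread) places its partial sums alongside the partial sums .9, .99, .999, …:

```python
from fractions import Fraction

def halves(n):  # 1/2 + 1/4 + ... + 1/2^n : the halving series above
    return sum(Fraction(1, 2**k) for k in range(1, n + 1))

def nines(n):   # 9/10 + 9/100 + ... + 9/10^n : the partial sums .9, .99, ...
    return sum(Fraction(9, 10**k) for k in range(1, n + 1))

for n in (1, 5, 20):
    print(n, 1 - halves(n), 1 - nines(n))
# the remainders are exactly 1/2^n and 1/10^n respectively; both shrink
# below any positive bound, which is what "the series sums to 1" asserts
```

Whether the two cases really belong in different categories, as claimed above, is exactly what was contested in the replies that follow.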

Since non-terminating decimals fail to be meaningful designations of quantity, they must be assigned a numerical value. .999... gets assigned 1. But that is an assignment of a quantity to a symbol; it is not a mathematical result. .999... is merely deemed equal to 1; it was not found to be equal to it.

Take the same thing as it would take place in speech: "Antidisestablishmentarianisticallyistic..." You start off just like speech, and parts of your product are meaningful, and it takes the overall form of something that is meaningful. But it also breaks a rule of whatever actually is meaningful, by not terminating. It is inde-terminate. Unfinished. Not actual.

Non-terminating decimals are would-be quotients that fail to be numbers, but which contain information. We assign them numerical value by convention, and .999... is assigned the value 1. This assignment is often written, for convenience, "1 = .999..." This looks like a mathematical result, such as 2 + 2 = 4. It is not. That is the point of the "con-" arguments here. Strictly speaking, .999... is not equal to 1. It is replaced by the quantity 1 in certain mathematical circumstances. And that is no problem.

-- Mindy


Not to rehash what has already been done in the other thread, but Mindy,

What does "infinitely close" and "infinitesimally small" mean? Also, how can any reasonable concept associated to something labeled 0.999... have a right-most digit? While we're at it, why doesn't your argument about the "acceptable" infinite series formed by repeatedly halving things "to infinity" and re-summing not work for 0.999...? After all, you're just repeatedly taking a tenth of a unit into infinity and adding it back together.

I can perform the operations you say can't be done to 0.999... quite well, so I don't see why you say it isn't a number.


Due to the constructibility of complex numbers from real numbers, real numbers from rationals, ...

I am not sure how this relates to my post, but please note that in the case of .999... one never leaves the domain of rational numbers: all terms in the sequence are rational, the definition of the concept of limit remains valid for rationals, the proof that the limit is 1 uses only rational numbers, and the result is also rational. Speaking of non-rational numbers (irrational, transcendental, or complex) is overkill in this case.

From another of your posts:

... real number which is equal to an infinite sum...

This is very loose language. Nothing can be equal to an infinite sum, because there is no infinite anything, sums included. There are sequences of partial sums, which can have a limit. Something can be equal to that limit.

..Just because ... you can continue to add another 9 to the sequence, suggests that you can do so for a very long time (at least until you fall asleep, or if you are very persistent, until you pass away) without exhausting what the '...' implies.

The "..." means the limit of that sequence, which is 1, and there is nothing to exaust. You implicitly adhere to the process interpretation.

When you can no longer perceive the unit, even with the aid of sophisticated instruments, then you are merely invoking the method without any correspondence to reality, which is to say that it is no longer meaningful, or defined.

I do make a distinction between concepts and their practical application. I do not deduce the former from the latter, i.o.w. I am not an Operationalist.

...Folks asserting that 1 is 0.999... - regardless of context - are simply making a nonsensical assertion, since "0.999..." has a very specific context in which it makes sense (as you've explained), while "1" refers to different concepts with their own applicable contexts.

If by "context" you in fact mean interpretation, then I agree. If 0.999... means the limiting value of a specific sequence, and that limiting value is 1, then 0.999... is 1. In fact, 1 and 0.999... are different graphical representations of the same mathematical object. Moreover, almost any number has more than one representation. For example, 3.176 is the same as 3.175999... (where "..." indicate, as always, the limit of the corresponding sequence).

How can it possibly be said that .999999999999... = 1, when 1 is a full unit and .9999999999999... is not a full unit?

I have no idea whether it can be said or not, because I have no idea what you mean by ".999999999999...": you don't explain it.

it should be obvious that 1 - 9/10 - 9/100 - ... is smaller than any positive real number

Do those "..." mean the limit of the sequence with more and more similar terms? Or something else? Anyway, it is an exaggeration to call this statement "obvious" :-) Besides, -5 is also smaller that any positive real number :-)

I don't understand your saying .999... is the limit. The limit of what? 1 is the limit of .999..., OK. But .999..., if it is a limit, is the limit of something else; of what?

".999..." means, by convention, the limit (or the limiting value, which is 1) of the following sequence of numbers:

.9, .99, .999, .9999, …, .99999999999, …

Then the phrase "1 is the limit of .999..." doesn't make sense.

Is it technically correct notation to write 1 = 0.999... ? 1 is an integer not a real number. 1.0 is a real number. What would be comparing commensurate types is 1.0 = 0.999...

An integer is at the same time a real number. In a purely mathematical context, 1.0 is the same object as 1. There are contexts in which they are interpreted somewhat differently, for example as results of a length measurement, where the number of decimals conveys some information about the precision of the measurement, even if the decimals are all 0s. Also, in computer programming, internally some compilers treat 1 and 1.0 differently.
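The programming remark can be seen directly in one concrete language. In Python, for instance (used here only as an example of the general point), the two literals compare equal as values yet carry different internal representations:

```python
print(1 == 1.0)              # True: equal as numbers
print(type(1) is type(1.0))  # False: int vs float, different internal representations
print(type(1).__name__, type(1.0).__name__)  # int float
```

The distinction is one of representation and type, not of mathematical value, which matches the point above.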

... What is .999... times 2? ...

But first: what do you mean by .999...? Only after you decide what you mean by it will you be able:

- to decide if it does make sense at all to multiply it by 2

- and what the result might be.

Sasha

